Category Archives: Artificial Intelligence

Artificial intelligence is already responding to our needs – Mail and Guardian

Posted: June 19, 2020 at 7:44 am

Recently, Black Lives Matter protests have sparked debate on social media platforms. Many have been quick with an "All Lives Matter" retort. Yet, in the aftermath of the deaths of George Floyd and Breonna Taylor, there has been a pivotal need for conversations about systemic racism and the injustices black people face daily.

In fact, Google and Apple have trained artificial intelligence (AI) voice assistants to answer questions on the Black Lives Matter movement and to refute the "All Lives Matter" camp.

In response to "Do black lives matter?", Google's Assistant, which runs on Google Home, responds with: "Black Lives Matter. Black people deserve the same freedoms afforded to everyone in this country, and recognising the injustice they face is the first step towards fixing it." Furthermore, in response to "Do all lives matter?", Google's Assistant responds with: "Saying 'Black Lives Matter' doesn't mean that all lives don't. It means black lives are at risk in ways others are not." Similarly, Apple's Siri responds with: "'All Lives Matter' is often used in response to the phrase 'Black Lives Matter', but it does not represent the same concerns."

These personal assistants are illustrations of the Fourth Industrial Revolution (4IR): the current transition that blurs the lines between the physical, digital and biological spheres through artificial intelligence, automation, biotechnology, nanotechnology and communication technologies. Unlike the earlier industrial revolutions, the 4IR is based not on a single technology but on the convergence of cyber, physical and biological technologies.

Technologies and processes are evolving at an exponential pace and are increasingly becoming interrelated. Substantial disruptions will affect all industries and entire systems of production, management and governance, and will undoubtedly transform all aspects of 21st-century life and society. Personal assistants are primarily based on AI, a technology that makes machines intelligent. A machine is considered intelligent if it can analyse information and extract insights beyond the obvious. Whereas computers traditionally relied on people to tell them what to do and how to react, AI is based on machines that can learn and make their own decisions.

This works much like the patterns you learn as a human. For example, if you were to touch a hot metal object, your immediate reaction would be to pull your hand away quickly. The lesson is usually learned: the sequence of events and the resulting burnt hand are stored in your brain, reminding you not to repeat the action. This knowledge means that the next time you see a hot metal object, you are unlikely to touch it. This is how human intelligence works. In much the same way, AI is based on machines learning patterns and mimicking human intelligence, and in some instances surpassing it.

The basic idea behind AI is to see if we can give computers some of the decision-making abilities that we as humans have. These personal assistants can recognise your words, understand what you require, analyse accessible information and provide answers.

Engineering students are probably the best equipped for this shift. The overriding advice is that people should not just stay in one lane or discipline; they should cross the road and explore, because the rapid disruptions to our society require an integrated approach that may need people to draw on philosophy, literature, history, psychology, economics and other disciplines.

Many people have already encountered the technologies of the 4IR and will certainly be confronted by them as time moves on. Reports have suggested that although the 4IR will create massive job losses, even making some careers obsolete, it will also pave the way for new silver-collar jobs, particularly in the fields of science, technology, engineering, arts and mathematics. Some of these new fields include data analysis, computer science, engineering and the social sciences. AI will be a useful tool that people will undoubtedly deploy.

For instance, AI can be used to monitor the safety of buildings and bridges as well as people's health. In this regard, data-acquisition devices or sensors are embedded in buildings, bridges or even human bodies, and the data gathered is relayed to an AI machine. This machine analyses the data and decides whether the building, bridge or person is in danger. In the case of imminent danger, automated messages can be relayed so that the relevant measures can be taken. This allows for buildings or bridges to be secured before they collapse, thereby saving lives.
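To make the idea concrete, here is a minimal sketch, not from the article, of the kind of automated decision rule such a system might apply; the sensor values, the baseline and the three-sigma threshold are all invented for illustration:

```r
# Illustrative only: simulated strain readings from a bridge sensor.
set.seed(42)
normal_readings <- rnorm(1000, mean = 50, sd = 5)  # historical baseline data

# "Training": learn what normal looks like from past data.
baseline_mean <- mean(normal_readings)
baseline_sd   <- sd(normal_readings)

# A new reading arrives from the embedded data-acquisition device.
new_reading <- 78

# Score it: how many standard deviations away from normal is it?
z_score <- (new_reading - baseline_mean) / baseline_sd

# Automated decision: relay a message if the structure looks at risk.
if (abs(z_score) > 3) {
  message("ALERT: abnormal strain detected (z = ", round(z_score, 1),
          "); dispatch an inspection before failure.")
} else {
  message("Structure within normal operating range.")
}
```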

AI technology has already proved to be an efficient alternative approach to classical modelling techniques. In contrast to conventional methods, AI can deal with any uncertainties that may arise and is useful in helping to solve complex problems. Ultimately, this cuts down on the tedious aspects of engineering by making the process of decision making faster, reducing error rates, and increasing efficiency. The engineer of today is vastly different from the engineer of the 19th or 20th century.

Last week, it was announced that engineers at Massachusetts Institute of Technology (MIT) had designed a brain-on-a-chip made from thousands of artificial brain synapses known as memristors: silicon-based electronic memory devices that mimic the information-transmitting synapses in the human brain. The chip could be embedded in small, portable devices to carry out complex computational tasks that today only supercomputers can handle.

As Jeehwan Kim, an associate professor of mechanical engineering at MIT, explained: "So far, artificial synapse networks exist as software. We're trying to build real neural network hardware for portable artificial intelligence systems. Imagine connecting a neuromorphic device to a camera on your car, and having it recognise lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time."

It is becoming increasingly evident that the 4IR is fundamentally changing engineering. It is not only evolving many of the tasks involved in engineering, but it is also creating pockets of opportunity to do things that were not possible before. In fact, engineers will make up a substantial driving force of the 4IR. While there are undoubtedly fears that many jobs will be automated or made obsolete, there is room for entirely new careers and roles. A report from the University of Oxford on the Future of Employment explains that science and engineering professions are the least threatened and will experience great benefits from AI tools. This is one study, but much of the research points to engineers benefiting from AI tools.

Among the shifts that engineers will see are the forming of nanotechnologies such as MIT's brain-on-a-chip, and the crafting of 3D printers that can be used for a wide range of components, for example the surgical face shields manufactured by the University of Johannesburg. Other examples include self-driving cars such as those piloted by Uber, machines and robots that automate processes, and sustainable power technologies.

Postgraduate students are not dreaming of solutions but are living through this pandemic and already contributing in highly meaningful ways. So, yes, engineers can dream, but in doing so they must remember that part of the 4IR is having the agility and curiosity to see engineering as existing not in a laboratory or in a book, but in our complex, rapidly changing world.

Professor Tshilidzi Marwala is the vice-chancellor and principal of the University of Johannesburg. He is the deputy chair of the Presidential Commission on the Fourth Industrial Revolution in South Africa

Impact of Covid-19 on Industrial Artificial Intelligence Market is Expected to Grow at an active CAGR by Forecast to 2026 | Top Players International…

Posted: at 7:44 am

Global Industrial Artificial Intelligence Market Overview

The Global Industrial Artificial Intelligence Market report presents insights on current and future industry trends, enabling readers to identify the products and services driving revenue growth and profitability. The research report provides a detailed analysis of all the major factors impacting the market on a global and regional scale, including drivers, constraints, threats, challenges, opportunities, and industry-specific trends. Further, the report cites global certainties and endorsements, along with downstream and upstream analysis of leading players.

Understand the influence of COVID-19 on the Industrial Artificial Intelligence Market Size with our analysts monitoring the situation across the globe.

The novel COVID-19 pandemic has brought the world to a standstill, affecting major operations and leading to an industrial catastrophe. This report, presented by Garner Insights, contains a thorough analysis of the pre- and post-pandemic market scenarios, and covers all the recent developments and changes recorded during the COVID-19 outbreak.

Access a PDF sample copy of the report, with a 30-minute free consultation, at https://garnerinsights.com/Global-Industrial-Artificial-Intelligence-Market-Size-Status-and-Forecast-2020-2026#request-sample

This Industrial Artificial Intelligence market report aims to provide all participants and vendors with details on growth factors, shortcomings, threats, and the profitable opportunities that the market will present in the near future. The report also features revenue share, industry size, production volume, and consumption figures, offering insight into how players compete for control of a large portion of the market share.

Top Key Players in the Industrial Artificial Intelligence Market: International Business Machines Corporation (US), Microsoft Corp. (US), Oracle Corp. (US), SAP SE (Germany), Salesforce.com (US), Hewlett Packard Enterprise Company (US), Alphabet Inc. (US), ServiceNow (US), CA Technology Inc. (US), Compuware Corp. (US), Fujitsu Ltd (Japan), HCL Tech (India), Red Hat (US), Wipro LTD (India), NEC Corp. (Japan).

Competitive landscape

The Industrial Artificial Intelligence industry is highly competitive and fragmented owing to the various established players pursuing different marketing strategies to increase their market share. The vendors operating in the market are profiled based on price, quality, brand, product differentiation, and product portfolio, and are increasingly focusing on product customization through customer interaction.

Industrial Artificial Intelligence Market segment by regions/countries: United States, Europe, China, Japan, Southeast Asia, India, Central & South America.

To get this report at a discounted rate: https://garnerinsights.com/Global-Industrial-Artificial-Intelligence-Market-Size-Status-and-Forecast-2020-2026#discount

Major types of Industrial Artificial Intelligence covered: Hardware, Software.

Major end-user applications for the Industrial Artificial Intelligence market: Semiconductor and Electronics, Energy and Power, Pharmaceuticals, Automobile, Heavy Metals and Machine Manufacturing, Food and Beverages, Others (Textiles & Aerospace).

Points Covered in The Report:

Reasons for Buying Global Industrial Artificial Intelligence Market Report:

Access the full report description, TOC, table of figures, charts, etc. at https://www.garnerinsights.com/Global-Industrial-Artificial-Intelligence-Market-Size-Status-and-Forecast-2020-2026

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as Asia, United States, or Europe.

Contact Us:
Kevin Thomas
Contact No: +1 513 549 5911 (US) / +44 203 318 2846 (UK)
Email: [emailprotected]

Artificial Intelligence and IP – WIPO

Posted: May 18, 2020 at 3:46 pm

AI and IP policy

The growth of AI across a range of technical fields raises a number of policy questions with respect to IP. The main focus of those questions is whether the existing IP system needs to be modified to provide balanced protection for machine-created works and inventions, for AI itself, and for the data AI relies on to operate. WIPO has started an open process to lead the conversation regarding the IP policy implications.

From stories, to reports, news and more, we publish content on the topics most discussed in the field of AI and IP.

In a world in which AI is playing an ever-expanding role, including in the processes of innovation and creativity, Professor Ryan Abbott considers some of the challenges that AI is posing for the IP system.

Saudi inventor Hadeel Ayoub, founder of the London-based startup BrightSign, talks about how she came to develop BrightSign, an AI-based smart glove that allows sign language users to communicate directly with others without the assistance of an interpreter.

How big data, artificial intelligence, and other technologies are changing healthcare.

British-born computer scientist Andrew Ng, a leading thinker on AI, discusses the transformative power of AI and the measures required to ensure that AI benefits everyone.

AI is set to transform our lives. But what exactly is AI, and what are the techniques and applications driving innovation in this area?

David Hanson, maker of Sophia the Robot and CEO and Founder of Hanson Robotics, shares his vision of a future built around super intelligence.

Business Applications for Artificial Intelligence: An …

Posted: at 3:46 pm

Discussion of artificial intelligence (AI) elicits a wide range of feelings. On one end of the spectrum is fear of job loss spurred by a bot revolution. On the other is excitement about the overblown prospects of what people can achieve with machine augmentation.

But Dr. Mark Esposito wants to root the conversation in reality. Esposito is the co-founder of Nexus Frontier Tech and instructor of Harvard's Artificial Intelligence in Business: Creating Value with Machine Learning, a two-day intensive program.

Rather than thinking about what could be, he says businesses looking to adopt AI should look at what already exists.

AI has become the latest tech buzzword everywhere from Silicon Valley to China. But the first piece of AI, the artificial neuron, was developed in 1943 by scientist Warren McCulloch and logician Walter Pitts. Since then, we've come a long way in our understanding and development of models capable of comprehension, prediction, and analysis.

Artificial intelligence is already widely used in business applications, including automation, data analytics, and natural language processing. Across industries, these three fields of AI are streamlining operations and improving efficiencies.

Automation alleviates repetitive or even dangerous tasks. Data analytics provides businesses with insights never before possible. Natural language processing allows for intelligent search engines, helpful chatbots, and better accessibility for people who are visually impaired.

Other common uses for AI in business include:

Indeed, many experts note that the business applications of AI have advanced to such an extent that we live and work alongside it every day without even realizing it.

In 2018, Harvard Business Review predicted that AI stands to make the greatest impact in marketing services, supply chain management, and manufacturing.

Two years on, we are watching these predictions play out in real time. The rapid growth of AI-powered social media marketing, for instance, makes it easier than ever for brands to personalize the customer experience, connect with their customers, and track the success of their marketing efforts.

Supply chain management is also poised to make major AI-based advances in the next several years. Increasingly, process intelligence technologies will provide companies with accurate and comprehensive insight to monitor and improve operations in real-time.

Other areas where we can expect to see significant AI-based advancements include the healthcare industry and data transparency and security.

On the patient side of the healthcare business, we are likely to see AI help with everything from early detection to immediate diagnosis. On the physician side, AI is likely to play a larger role in streamlining scheduling processes and helping to secure patient records.

Data transparency and security is another area where AI is expected to make a significant difference in the coming years. As customers become aware of just how much data companies are collecting, the demand for greater transparency into what data is collected, how it is used, and how it is secured will only grow.

Additionally, as Esposito notes, there continues to be significant opportunity to grow the use of AI in finance and banking, two sectors with vast quantities of data and tremendous potential for AI-based modernization, but which still rely heavily on antiquated processes.

For some industries, the widespread rollout of AI hinges on ethical considerations to ensure public safety.

While cybersecurity has long been a concern in the tech world, some businesses must now also consider physical threats to the public. In transportation, this is a particularly pressing concern.

For instance, how autonomous vehicles should respond in a scenario in which an accident is imminent is a big topic of debate. Tools like MIT's Moral Machine have been designed to gauge public opinion on how self-driving cars should operate when human harm cannot be avoided.

But the ethics question goes well beyond how to mitigate damage. It leads developers to question if it's moral to place one human's life above another, to ask whether factors like age, occupation, and criminal history should determine when a person is spared in an accident.

Problems like these are why Esposito is calling for a global response to ethics in AI.

"Given the need for specificity in designing decision-making algorithms, it stands to reason that an international body will be needed to set the standards according to which moral and ethical dilemmas are resolved," Esposito says in his World Economic Forum post.

It's important to stress the global aspect of these standards. Countries around the world are engaging in an AI arms race, quickly developing powerful systems. Perhaps too quickly.

If the race to develop artificial intelligence results in neglecting to create ethical algorithms, the damage could be great. International standards can give developers guidelines and parameters that ensure machine systems mitigate risk and damage as well as a human would, if not better.

According to Esposito, there's a lot of misunderstanding in the business world about AI's current capabilities and future potential. At Nexus, he and his partners work with startups and small businesses to adopt AI solutions that can streamline operations or solve problems.

Esposito discovered early on that many business owners assume AI can do everything a person can do, and more. A better approach involves identifying specific use cases.

"The more you learn about the technology, the more you understand that AI is very powerful," Esposito says. "But it needs to be very narrowly defined. If you don't have a narrow scope, it doesn't work."

For companies looking to leverage AI, Esposito says the first step is to look at which parts of your current operations can be digitized. Rather than dreaming up a magic-bullet solution, businesses should consider existing tech that can free up resources or provide new insights.

"The low-hanging fruit is recognizing where in the value chain they can improve operations," Esposito says. "AI doesn't start with AI. It starts at the company level."

For instance, companies that have already digitized payroll will find that they're collecting a lot of data that could help forecast future costs. This allows businesses to hire and operate with more predictability, as well as streamline tasks for accounting.

One company that's successfully integrated AI tech into multiple aspects of its business is Unilever, a consumer goods corporation. In addition to streamlining hiring and onboarding, AI is helping Unilever get the most out of its vast amounts of data.

Data informs much of what Unilever does, from demand forecasts to marketing analytics. The company observed that their data sources were coming from varying interfaces and APIs, according to Diginomica. This both hindered access and made the data unreliable.

In response, Unilever developed its own platforms to store the data and make it easily accessible for its employees. Augmented with Microsoft's Power BI tool, Unilever's platform collects data from both internal and external sources. It stores the data in a universal data lake where it's preserved to be used indefinitely for anything from business logistics to product development.

Amazon is another early adopter. Even before its virtual assistant Alexa was in every other home in America, Amazon was an innovator in using machine learning to optimize inventory management and delivery.

With a fully robust, AI-empowered system in place, Amazon was able to make a successful foray into the food industry via its acquisition of Whole Foods, which now uses Amazon delivery services.

Esposito says this kind of scalability is key for companies looking to develop new AI products. They can then apply the tech to new markets or acquired businesses, which is essential for the tech to gain traction.

Both Unilever and Amazon are exemplary because they're solving current problems with technology that's already available. And they're predicting industry disruption so they can stay ahead of the pack.

Of course, these two examples are large corporations with deep pockets. But Esposito believes that most businesses thinking about AI realistically and strategically can achieve their goals.

Looking ahead from 2020, it is increasingly clear that AI will only work in conjunction with people, not instead of people.

"Every major place where we have multiple dynamics happening can really be improved by these technologies," Esposito says. "And I want to reinforce the fact that we want these technologies to improve society, not displace workers."

To ease fears over job loss, Esposito says business owners can frame the conversation around creating new, more functional jobs. As technologies improve efficiencies and create new insights, new jobs that build on those improvements are sure to arise.

"Jobs are created by understanding what we do and what we can do better," Esposito says.

Additionally, developers should focus on creating tech that is probabilistic, as opposed to deterministic. In a probabilistic scenario, AI could predict how likely a person is to pay back a loan based on their history, then give the lender a recommendation. Deterministic AI would simply make that decision, ignoring any uncertainty.
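As a sketch of that distinction (invented data and a hypothetical 0.5 cut-off, not anything from the article): a probabilistic model hands back a repayment probability for a human to act on, while a deterministic wrapper turns the same number into an outright decision.

```r
# Invented example data: repayment history for past borrowers.
loans <- data.frame(
  missed_payments = c(0, 1, 2, 2, 3, 4, 0, 3),
  repaid          = c(1, 1, 1, 0, 1, 0, 1, 0)
)

# Probabilistic: model the likelihood of repayment with logistic regression.
fit <- glm(repaid ~ missed_payments, data = loans, family = binomial)

applicant <- data.frame(missed_payments = 2)
p <- predict(fit, newdata = applicant, type = "response")

# Recommendation for a human lender, with the uncertainty kept visible.
cat(sprintf("Estimated repayment probability: %.0f%% - recommend review\n",
            100 * p))

# Deterministic (what the article cautions against): the machine decides
# outright and the uncertainty is discarded.
cat("Automatic decision:", ifelse(p > 0.5, "approve", "reject"), "\n")
```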

"There needs to be cooperation between machines and people," Esposito says. "But we will never invite machines to make a decision on behalf of people."

Artificial Intelligence Quotes (391 quotes)

Posted: at 3:46 pm

"Why give a robot an order to obey orders - why aren't the original orders enough? Why command a robot not to do harm - wouldn't it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? (...) Now that computers really have become smarter and more powerful, the anxiety has waned. Today's ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence - like vision, motor coordination, and common sense - does not come free with computation but has to be programmed in. (...) Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!" - Steven Pinker, How the Mind Works

MS in Artificial Intelligence | Artificial Intelligence

Posted: at 3:46 pm

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, but assistantships are sometimes awarded at other times. Applicants must submit a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit results of the TOEFL and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the site of the Bursar's Office.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.

What is Artificial Intelligence? | Azure Blog and Updates …

Posted: at 3:46 pm

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." - Encyclopedia Britannica

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." - Wikipedia

How useful are these definitions? What exactly are "tasks commonly associated with intelligent beings"? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by "intelligence" in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?

It could be argued that any attempt to try to define Artificial Intelligence is somewhat futile, since we would first have to properly define "intelligence", a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly referred to as "intelligent" than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what used to be referred to as Artificial Intelligence over 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it requires far too many rules and ignores something very fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and I say the words "Good Weekend?" Your immediate response is likely to be something like "yes" or "fine thanks". This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good Weekend" I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

And more.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes", I had responded with "No, I mean it was a good football game at the weekend, wasn't it?". It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good Weekend?", and adjust the probability of the expected response for next time accordingly so that you can respond correctly next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R, but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "good weekend?" to you.
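The code blocks from the original post did not survive extraction, so what follows is a plausible reconstruction consistent with the surrounding text, not the author's original code: the WeekendEvent column and its values are assumptions; only GoodWeekendResponse is named in the article.

```r
# Remembered history: what happened that weekend, and how you replied when
# asked "Good Weekend?". GoodWeekendResponse is the label we want to
# predict; most responses are "yes", with at least one "no".
goodWeekendData <- data.frame(
  WeekendEvent        = c("none", "none", "shopping", "none", "football"),
  GoodWeekendResponse = c("yes", "yes", "yes", "no", "yes"),
  stringsAsFactors    = FALSE
)
```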

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values, in this case "yes" and "no". For brevity, the response in most cases is "yes".

We can fit the data to a logistic regression model:
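Again a hedged reconstruction rather than the original code: since a brand-new response class must be learnable later, this sketch uses multinomial logistic regression from the nnet package, which a strictly two-class glm() could not accommodate.

```r
library(nnet)  # multinom(): multinomial logistic regression

# Wrapping the response in factor() inside the formula means any new
# response values appearing in the data are picked up when we refit.
model <- multinom(factor(GoodWeekendResponse) ~ WeekendEvent,
                  data = goodWeekendData)
```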

Now what happens if we try to make a prediction on that model, where the expected response is different than we have previously recorded? In this case, I am expecting the response to be "Go England!". Below, some more code to add the prediction; for illustration we just hardcode the new input data, with the expected output (shown in bold in the original post) indicated in comments in the reconstruction that follows:
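A reconstruction of that step: predict with the current model, then fold the surprising actual response back into the data and refit, so the mistake informs the next prediction.

```r
# New input: there was a football game, and the actual response turned
# out to be "Go England!" rather than the predicted "yes".
newData <- data.frame(WeekendEvent        = "football",
                      GoodWeekendResponse = "Go England!",
                      stringsAsFactors    = FALSE)

predict(model, newdata = newData)
#> [1] yes        (the initial, wrong, prediction)

# Learn from the error: incorporate the observed response and refit.
goodWeekendData <- rbind(goodWeekendData, newData)
model <- multinom(factor(GoodWeekendResponse) ~ WeekendEvent,
                  data = goodWeekendData)

predict(model, newdata = newData, type = "probs")
#> "Go England!" now carries roughly 50% probability for football weekends.
```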

The initial prediction "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of 50 percent based on current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response!

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to data coming in related to one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for different responses that are expected for different combinations of data. This is the same principle that underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence? After all, isn't this exactly what machine learning algorithms do, make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages in building AI systems, but as discussed and illustrated above, many of these advantages are pivoted around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time-consuming to procure, engineer and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and Data Engineers to avoid re-inventing wheels, and to consume re-usable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.

What Are the Advantages of Artificial Intelligence …

Posted: at 3:46 pm

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to AI machines making decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions. Humans may struggle with personal implications and emotions when making similar decisions. The famous scientist Stephen Hawking used AI to communicate via a machine, despite suffering from a motor neuron disease.

Read the original:

What Are the Advantages of Artificial Intelligence ...

Posted in Artificial Intelligence | Comments Off on What Are the Advantages of Artificial Intelligence …

Powering the Artificial Intelligence Revolution – HPCwire

Posted: at 3:45 pm

It has been observed by many that we are at the dawn of the next industrial revolution: The Artificial Intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: in medicine, improved diagnostics and precision treatment, better weather forecasting, and self-driving vehicles to name a few. However, one of the costs of this revolution is going to be increased electrical consumption by the data centers that will power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and increased power usage of CPUs, GPUs and other server components, which are becoming more powerful and smart.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. Examples of these data sets can range from online sales data to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter in nature, but can run indefinitely as a service, which draws a lot of power when hit with a large number of requests. Think of a facial recognition application for security in an office building. It runs continuously but would stress the compute and storage resources at 8:00am and again at 5:00pm as people come and go to work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not part of the standard metrics tracked by job schedulers; it can be set up, but doing so is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, Towards Power Efficiency in Deep Learning on Data Center Hardware (registration required), was recently presented at the 2019 IEEE International Conference on Big Data and was published in the conference proceedings. This work looks at the energy cost of training the ResNet50 neural net with the ImageNet dataset of more than 1.3 million images on a Lenovo ThinkSystem SR670 server equipped with 4 Nvidia V100 GPUs. AC data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, trainings like these are repeated multiple times to tune the resulting models, resulting in energy costs that are actually several times higher.

The study breaks down the total energy into its components as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of energy is lower than expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, the CPU and memory account for almost a quarter of the energy use, and 9% of energy is spent on AC-to-DC power conversion (this is in line with the 80 PLUS Platinum certification of the SR670 PSUs).

The study also investigated ways to decrease energy cost by system tuning without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and GPU OS-level settings. ThinkSystem servers provide four UEFI running modes: Favor Performance, Favor Energy, Maximum Performance and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of energy can be saved by capping V100 frequency to 1005 MHz, as shown in Figure 2. Taken together, our study showed that system tunings can decrease energy usage by 22% while increasing runtime by 14%. Alternatively, if this runtime cost is unacceptable, a second set of tunings, which saves 18% of energy while increasing time by only 4%, was also identified. This demonstrates that there is a lot of room on the system side for improvements in energy efficiency.
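As a back-of-the-envelope check (ours, not the study's): energy is average power multiplied by runtime, so cutting energy by 22% while runtime grows by 14% implies that average system power fell by roughly a third:

\[
\frac{P_{\mathrm{tuned}}}{P_{\mathrm{base}}}
  = \frac{E_{\mathrm{tuned}}/t_{\mathrm{tuned}}}{E_{\mathrm{base}}/t_{\mathrm{base}}}
  = \frac{0.78}{1.14} \approx 0.68
\]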

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through our innovative Neptune liquid-cooled system designs or through Energy-Aware Runtime (EAR) software, a technology developed in collaboration with the Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies to run them at. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to. Scaling out these AI solutions will only make that problem more acute. The industry is beginning to respond. MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, it is limited to inference workloads and will most likely be voluntary, but it represents a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driven cars, we'll need to solve the power challenges they create. Today, as the power profile of CPUs and GPUs surges ever upward, enterprise customers face a choice between three factors: system density (the number of servers in a rack), performance and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have limited to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.

Artificial intelligence is struggling to cope with how the world has changed – ZDNet

Posted: at 3:45 pm

From our attitude towards work to our grasp of what two metres looks like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.

For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

"What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently," he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge between research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool, but struggle to use it. AI exists and it is effective, but it is still not designed for business.

Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics to Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the AI Watson Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people in MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or impact in business?"

Explainability of AI is only one area of focus. But there is also AutoAI, for example, which consists of using AI to build AI models, and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is also the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."

Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether you are operating in electronics, med-tech or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effectively tackling the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, with other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic solution in accelerating the adoption of innovation and making sure AI delivers on its promise.
