What Soldiers, Doctors, and Professors Can Teach Us About Artificial Intelligence During COVID-19 – Education Week

Artificial intelligence technology can tell doctors when a scan reveals a tumor, help the military distinguish between a truck and a school bus as a target, and answer a high volume of college students' questions.

Sectors of our economy such as the military, health care, and higher education are much further along than the K-12 system in incorporating artificial intelligence systems and machine learning into their operations. And many of those uses, even when they are not specifically for education, can spark ideas for applications in K-12 that may be more pertinent than ever imagined.

With the coronavirus upending traditional ways of delivering education, AI technologies, which are designed to model human intelligence and solve complex problems, may be able to help with logistical challenges such as busing and classroom social distancing, provide support to overwhelmed teachers, and glean new information about remote learning.

"AI techniques and systems are like the internal combustion engine: you can use them to power a lot of different things," said David Danks, a professor of philosophy and psychology at Carnegie Mellon University in Pittsburgh, who studies cognitive science, machine learning, and how AI affects people. "The exact same thing can be used to predict whether someone has cancer, or whether students understand a concept, or to classify somebody as a bad guy you want to go after."

Of course, there are lots of potential trouble spots when thinking about the role of AI in K-12 education. Artificial intelligence learns from the data that are fed into it, and if that input includes bad data or data applied incorrectly, poor or biased decisions may result. At the same time, the use of AI in K-12 raises very serious data-privacy concerns because such technologies would likely be used to personalize education or make important decisions for individual students.

But even with those concerns, AI advocates say other sectors are already offering lessons learned for how the technologies could be used in K-12 for teaching and learning and the management of schools. That is especially the case with the military, health-care, and higher education fields.

Here is a look at what K-12 educators, policymakers, and planners could learn from those three sectors:

Nearly every military in the world believes that advances in AI will play a critical role in shaping the future of military power. But there are big disagreements about what is possible and what is wise.

Simulations: Military leaders are using AI simulations to assess military tactics and determine the likely outcome of strategic plans. Plugging different variables into these scenarios, everything from weather predictions to the timing of attacks and estimated troop numbers, can show how outcomes might change. Also, soldiers can get important practice in simulated real-world settings with low risk.

> K-12 Applications: AI-powered simulations could be useful for planning purposes for everything from scheduling to determining the most effective models for social distancing when students return to their school buildings amid the COVID-19 outbreak. Some companies are already using simulations to train educators on successful techniques to help students with social-emotional learning, trauma, and mental-health issues.
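The planning idea can be sketched in a few lines of code. This is a toy Monte Carlo simulation, not any vendor's product; the enrollment figure, reduced room capacity, and absence rate below are invented for illustration:

```python
import random

def rooms_needed(n_students, room_capacity, absence_rate, trials=2000, seed=0):
    """Monte Carlo estimate of classrooms needed per day when capacity is
    reduced for social distancing and daily attendance fluctuates."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        # Each student independently attends with probability (1 - absence_rate).
        attending = sum(rng.random() > absence_rate for _ in range(n_students))
        results.append(-(-attending // room_capacity))  # ceiling division
    results.sort()
    # Plan for a 95th-percentile day rather than an average day.
    return results[int(0.95 * trials)]

# Hypothetical school: 600 students, rooms capped at 12 under distancing,
# 8 percent average daily absence.
print(rooms_needed(600, 12, 0.08))
```

Rerunning with different capacities or absence rates shows how the required room count shifts, which is the same what-if exercise the military runs at far larger scale.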

Maintenance: Tanks, airplanes, submarines, trucks: all that military equipment needs to be maintained to keep troops safe and operations running smoothly. Some high-tech AI systems can predict when parts need replacing before they break or when systems need a tune-up. Artificial intelligence has helped the military optimize in-flight refueling of jets to make the dangerous technique safer and more efficient.

> K-12 Applications: School districts also rely on a lot of equipment: think buses, computers, air conditioning systems, and more. AI-powered smart programs are already being used in some schools to fine-tune building operations, lower energy costs, and manage maintenance and repairs.

Logistics: The backbone of the military revolves around logistics and supply-chain management. How to get equipment and personnel from point A to point B most efficiently and cost-effectively is something that AI systems are tackling for the military.

> K-12 Applications: The uses are widespread: AI systems could optimize scheduling, the distribution of laptops, cafeteria operations, and bus routes. In fact, the Boston school district has saved more than $5 million using a high-tech AI system that streamlined bus routes.

"At its core, AI is really about using big data to be able to help predict what will happen so we can show up at the right time with the right solution."

Scanning: Artificially intelligent technologies can analyze radiology and CT scans looking for abnormalities. Programs can sift through images much faster than humans and identify patterns based on vast data. These techniques can identify tumors and health issues and suggest treatments, which are then reviewed by medical professionals.

> K-12 Applications: Programs powered by artificial intelligence could do a better job identifying student risk factors and recommending earlier and more targeted academic or mental-health interventions. The goal isn't to replace teacher decisions but to save teachers time and to amplify their own expertise. Using big data and AI to spot patterns might be applied to other situations, such as taking student temperatures to check for COVID-19 before they enter school buildings or being able to target outbreaks more quickly.
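A hedged sketch of what such an early-warning flag might look like. The weights, threshold, and roster here are invented for illustration; a real system would learn them from a district's historical data, and a teacher would review every flag:

```python
def risk_score(attendance_rate, grade_avg, missed_assignments):
    """Toy early-warning score; weights are illustrative, not a real district model.
    In practice these would be learned from data (e.g. logistic regression)."""
    score = 0.0
    score += (1.0 - attendance_rate) * 0.5          # absences weigh heavily
    score += (1.0 - grade_avg / 100) * 0.3          # low grades
    score += min(missed_assignments / 10, 1.0) * 0.2  # missing work
    return score

def flag_students(students, threshold=0.30):
    """Return students whose score exceeds the threshold, for a human to review."""
    return [s["name"] for s in students
            if risk_score(s["attendance"], s["grades"], s["missed"]) > threshold]

roster = [
    {"name": "A", "attendance": 0.97, "grades": 88, "missed": 1},
    {"name": "B", "attendance": 0.72, "grades": 61, "missed": 7},
]
print(flag_students(roster))  # flags "B" for teacher follow-up
```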

Personalization: Access to massive amounts of digital medical data and the use of AI to analyze it are making it easier to personalize medical treatments for patients. AI can predict how someone's current health behaviors are likely to affect their future health outcomes. High-tech systems can design much more sophisticated drug and treatment strategies tailored to an individual patient's biology or type of cancer, for example.

> K-12 Applications: Many education companies already talk about being able to help personalize the learning experience for students, but this is still just an emerging effort in most places. Some K-12 programs are using artificial intelligence to collect data on student behavior and academic engagement and then guide students through suggested individualized lessons. CENTURY Tech, a London-based company, for example, uses an AI platform that tracks student interactions and behavior patterns and academic performance to create personalized learning paths.
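A minimal sketch of a personalized learning path. The topic graph and mastery scores are hypothetical, and this is not CENTURY Tech's algorithm, just the general shape of the idea: suggest the weakest topic whose prerequisites are already mastered.

```python
# Hypothetical lessons with prerequisites; mastery estimates (0..1) would
# come from a student's quiz and interaction history.
LESSONS = {
    "fractions":   [],
    "decimals":    ["fractions"],
    "percentages": ["fractions", "decimals"],
    "ratios":      ["fractions"],
}

def next_lesson(mastery, passed=0.8):
    """Suggest the weakest topic whose prerequisites are already mastered."""
    candidates = [
        topic for topic, prereqs in LESSONS.items()
        if mastery.get(topic, 0.0) < passed
        and all(mastery.get(p, 0.0) >= passed for p in prereqs)
    ]
    # Weakest eligible topic first; None when everything is mastered.
    return min(candidates, key=lambda t: mastery.get(t, 0.0), default=None)

print(next_lesson({"fractions": 0.9, "decimals": 0.4, "ratios": 0.6}))
```

With the sample scores, "percentages" is locked behind the weak "decimals" prerequisite, so the student is routed to decimals first.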

Training: Artificial-intelligence-powered programs are being used to train medical professionals in many ways. AI company Kognito, for example, uses its health simulations to help doctors and nurses practice discussing sensitive topics like obesity, mental health, and suicide with patients. Through conversations with virtual humans, medical practitioners can practice and model effective techniques.

> K-12 Applications: Kognito has a version of its product that is designed for educators, training them to lead conversations with students around social-emotional learning and mental health, using research-based language and techniques. An expanded version of this technology could be applied in other areas. About 15,000 K-12 schools currently have access to Kognito simulations.

Cost Savings: Early medical intervention, making sure patients adhere to treatment, and supply-chain management are all ways that AI can affect the bottom line in various aspects of health care.

> K-12 Applications: The same goes for schools and districts. AI-powered programs could predict what supplies are needed and where with more accuracy, analyze budget trends, and identify spending patterns in areas ripe for savings, especially given that K-12 budgets are likely to be slashed significantly as the economy struggles through COVID-19.

"There's not an obvious wall between higher education and K-12 [around uses for AI]."

Remote Learning: What if teachers could have more information in real time about whether their students grasp concepts or are struggling when learning online? Whitehill is exploring the idea of an AI-based program that uses a video camera to take many small snapshots of students as they learn remotely to analyze their facial reactions. Such a program would provide teachers with real-time feedback on students' cognitive and emotional states. (But that program is also just the kind of technological approach that would prompt intense criticism from student-data-privacy advocates.)

Virtual Teacher's Assistant: When Georgia Tech interactive-computing professor Ashok Goel was having a hard time answering all the questions coming from the hundreds of students in his online computer science class, he created an artificially intelligent tutor he dubbed Jill Watson. She was able to answer many of the students' more routine questions, freeing up time for Goel to do higher-level work. Since that first experiment, Watson is now used in 17 online classes, Goel said, covering more than a thousand person-hours of work. Goel, who is also the chief scientist for C21U, a company developing innovative uses for AI, is now working to adapt Watson for high school and middle school teachers. And with remote learning, he believes the AI teaching assistant could also be used to help answer parents' questions as they support students at home.
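The routine-question triage that makes an assistant like Jill Watson useful can be approximated with simple retrieval. This sketch matches a student question against a hypothetical course FAQ by cosine similarity over word counts and defers anything unfamiliar to the instructor; production systems use far richer language models, but the route-or-defer shape is the same:

```python
import math
from collections import Counter

# Hypothetical FAQ; a real deployment would mine years of forum posts.
FAQ = {
    "When is the assignment due?": "Assignments are due Sunday at 11:59 pm.",
    "Where do I submit my project?": "Upload projects to the course portal.",
    "Is there a final exam?": "Yes, the final exam is in week 10.",
}

def _vec(text):
    # Bag-of-words counts; deliberately crude.
    return Counter(text.lower().replace("?", "").split())

def _cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, min_sim=0.5):
    """Answer routine questions automatically; defer unfamiliar ones to a person."""
    q = _vec(question)
    best, sim = max(((k, _cosine(q, _vec(k))) for k in FAQ), key=lambda kv: kv[1])
    return FAQ[best] if sim >= min_sim else "Forwarded to the instructor."

print(answer("when is the assignment due"))   # matched to the first FAQ entry
print(answer("can I bring my dog to class"))  # unfamiliar, so deferred
```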

Essay Grading: Colleges and universities are already using this approach to some degree, and the latest version of the technology is moving into the K-12 education space. Automated AI essay graders have been around for some time, but the makers of the software say the AI features now available are much more sophisticated evaluators of student writing than what was available years ago. They can judge hundreds of features in a written piece, everything from spelling and grammar to sentence structure. (Lots of concerns remain that these programs can be biased, can fail to interpret creativity correctly, and can be gamed by students writing to the algorithm.) Though some states are using these types of programs to grade essays on their standardized state tests, they're yet to be widely adopted at the district and school level.
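At their simplest, such graders start by extracting surface features from the text and feeding them to a scoring model. The handful of features below are illustrative stand-ins for the hundreds a commercial engine weighs:

```python
import re

def essay_features(text):
    """Extract a few surface features an automated grader might weigh.
    Real systems score hundreds of such features; these are illustrative."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "vocab_diversity": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(essay_features("The cat sat. The cat ran quickly!"))
```

Counting features like these is also exactly where the "writing to the algorithm" concern comes from: a student who learns the features can inflate them without writing better.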



SUSE infuses portfolio with artificial intelligence and edge technology – SiliconANGLE

Now independent from previous owner Micro Focus International PLC, SUSE is out to make its presence more deeply felt with developers and innovators. Its biggest competitors, Red Hat Inc. and Microsoft Corp., have developed impressively broad, varied portfolios. Can SUSE pull any tricks from its Linux-distro hat interesting enough to compete for the attention of leading-edge, developer-driven IT departments?

Even amid the COVID-19 pandemic, SUSE is busily engaging with its community, according to Melissa Di Donato, chief executive officer of SUSE. "Open source is developing a community that oftentimes does not sit together. And now we're really trying to engage with that community as much as possible to keep innovation alive, to keep collaboration alive," Di Donato said.

SUSE will collaborate and integrate with its developer community in 2020, as well as sharpen its focus on Linux use cases at the edge, such as autonomous driving, Di Donato added.

Di Donato spoke with Stu Miniman, host of theCUBE, SiliconANGLE Media's livestreaming studio, during the SUSECON Digital event. They discussed how to drive engagement in open-source communities and how SUSE is infusing its portfolio with artificial intelligence, edge technology and more. (* Disclosure below.)

SUSE has recently opened up a community to developers with content around Linux, DevOps, containers, Kubernetes, microservices and more. It has also introduced the SUSE Cloud Application Platform Developer Sandbox.

"We wanted to make it easy for these developers to benefit from the best practices that evolved from the cloud-native application delivery that we offer every day to customers, and now for free to our developers," Di Donato said. "You can expect SUSE to enter new markets like powering autonomous vehicles with safety-certified Linux and other really innovative technologies."

For example, SUSE is carving out fresh terrain through its partnership with Electrobit Wireless Communications Oy, a leading provider of embedded software solutions for automotive. The two companies will be working on the use of safety-certified Linux in self-driving cars. Also, next quarter the company will announce a solution for simplifying the integration of AI building blocks into software.

Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of the SUSECON Digital event. (* Disclosure: TheCUBE is a paid media partner for SUSECON Digital. Neither SUSE, the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



Humans And Artificial Intelligence Systems Perform Better Together: Microsoft Chief Scientist Eric Horvitz – Digital Information World

According to a recent study, humans and artificial intelligence systems can perform better when they work together to tackle problems. The research was done by Eric Horvitz, Microsoft's chief scientist; Ece Kamar, a Microsoft Research principal researcher; and Bryan Wilder, a student at Harvard University and a Microsoft Research intern.

Horvitz joined Microsoft as a researcher in 1993, led the company's research programs from 2017 to 2020, and was named Microsoft's chief scientific officer in March. The research paper, published earlier this month, studies the performance of humans and artificial intelligence systems working together on two computer vision tasks: breast cancer metastasis detection and galaxy classification. In the proposed approach, the AI model evaluates which tasks humans perform best and which tasks AI systems can handle better.

In this approach, the learning procedure is designed to combine human contributions and machine predictions: the AI system tackles problems that are difficult for humans, while humans focus on problems that are hard for AI systems to figure out. In practice, AI predictions made with low confidence are routed to human teams. According to the researchers, jointly training humans and AI systems improved galaxy classification, reducing loss on Galaxy Zoo by 21 to 73 percent, and delivered up to 20 percent better performance on CAMELYON16.
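The routing idea, where the machine handles high-confidence cases and people handle the rest, can be sketched as follows. The model, threshold, and review function here are placeholders, not the paper's actual implementation:

```python
def route_predictions(cases, model, human_review, threshold=0.75):
    """Send each case to the model or to a person: low-confidence machine
    predictions are routed to human teams."""
    results = {}
    for case_id, features in cases.items():
        label, confidence = model(features)
        if confidence >= threshold:
            results[case_id] = (label, "machine")
        else:
            # The machine is unsure, so a human expert makes the call.
            results[case_id] = (human_review(features), "human")
    return results

# Placeholder classifier: returns a label and a confidence in [0, 1].
def toy_model(x):
    return ("positive" if x > 0.5 else "negative", abs(x - 0.5) * 2)

routed = route_predictions({"a": 0.95, "b": 0.55}, toy_model,
                           human_review=lambda x: "positive")
print(routed)
```

The paper's contribution goes further: rather than a fixed threshold, the model is trained end to end to know which cases it should hand off, accounting for the cost of asking an expert.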

The research paper notes that machine learning used in isolation falls short in circumstances where human skills could add integral context, though human teams have their own limitations, including systematic biases. The researchers write that they developed methods for training the learning model to complement human strengths, while accounting for the cost of querying an expert. Human-AI teamwork can take various forms, but the researchers focused on settings where the machine decides which instances require human attention and then merges human and machine judgments.

In 2007, Horvitz worked on methods for deciding when human assistants should step into consumer conversations with automated receptionist systems. The researchers also state in the paper, "Learning to Complement Humans," that they see opportunities to study additional aspects of human-machine cooperation across various settings. In a different form of teamwork, OpenAI researchers have looked at AI agents working together in games such as hide-and-seek and Quake 3.




Artificial Intelligence Equipped Supercomputer Mining for COVID-19 Connections in 18 Million Research Documents – SciTechDaily

By DOE/Oak Ridge National LaboratoryMay 19, 2020

Using ORNL's Summit supercomputer, scientists can comb through millions of medical journal articles looking for possible connections among FDA-approved drug therapies and known COVID-19 symptoms. Credit: Dasha Herrmannova/Oak Ridge National Laboratory, U.S. Dept. of Energy

Scientists have tapped the immense power of the Summit supercomputer at Oak Ridge National Laboratory to comb through millions of medical journal articles to identify potential vaccines, drugs, and effective measures that could suppress or stop the spread of COVID-19.

A team of researchers from ORNL and Georgia Tech is using artificial intelligence methods designed to unearth relevant information from about 18 million available research documents. They looked for connections among 84 billion concepts and cross-referenced keywords associated with COVID-19, such as high fever, dry cough, and shortness of breath, with existing medical solutions.
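Concept linking at this scale relies on far more sophisticated methods and supercomputer-class hardware, but the core cross-referencing idea can be illustrated with a small co-occurrence count over toy documents (the documents and terms below are invented):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(documents, terms):
    """Count how often pairs of terms appear in the same document, a tiny
    version of the concept-linking idea behind large-scale literature mining."""
    pairs = Counter()
    for doc in documents:
        text = doc.lower()
        present = [t for t in terms if t in text]
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [
    "Patients with dry cough and high fever responded to drug x.",
    "Drug x reduced high fever in trial participants.",
    "Shortness of breath was unrelated to drug y.",
]
counts = cooccurrence(docs, ["dry cough", "high fever", "drug x", "drug y",
                             "shortness of breath"])
print(counts[("drug x", "high fever")])  # the pair co-occurs in two documents
```

Scaled up to 18 million documents and 84 billion concepts, the same counting idea surfaces symptom-drug pairs worth a medical researcher's attention.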

"Our goal is to assist doctors' and researchers' ability to identify information about drug therapies that are already approved by the U.S. Federal Drug Administration," said ORNL's Ramakrishnan "Ramki" Kannan.

A massive subset of 6 million documents dated between 2010 and 2015 took 80 minutes to process, and the entire 18 million will take less than a day to run on Summit. Results will be shared with medical researchers for feedback, which will inform adjustments to improve future calculations.


Machine Learning and Artificial Intelligence in Healthcare Market 2020 Driving Forces, Future Growth, Top Key Players, Industry Share, Regional…

The Machine Learning and Artificial Intelligence in Healthcare Market analysis report has recently been added by Research N Reports; it helps readers make informed business decisions. The report further identifies the market segmentation along with its sub-types.

The Machine Learning and Artificial Intelligence in Healthcare Market is expected to grow at a 51% CAGR to reach $22,700 million during the forecast period. The various factors responsible for the market's growth are studied in detail in this research report.

The growth of the Machine Learning and Artificial Intelligence in Healthcare Market is driven by AI's ability to improve patient outcomes, improve coordination between healthcare workers and patients, increase acceptance of precision medicine, and significantly increase venture capital investments. In addition, the growing importance of big data in healthcare is expected to fuel market growth. The market is expected to experience moderate growth over the forecast period as AI systems are increasingly used.

Ask for the Sample Copy of This Report:

https://www.researchnreports.com/request_sample.php?id=659734

Top Key Players Profiled in This Report:

Intel Corporation, IBM Corporation, Nvidia Corporation, Microsoft Corporation, Alphabet Inc (Google Inc.), General Electric (GE) Company, Enlitic Inc., Verint Systems, General Vision Inc., Welltok Inc., iCarbonX.

The key questions answered in the report:

Get Discount on This Report:

https://www.researchnreports.com/ask_for_discount.php?id=659734

This research report represents a 360-degree overview of the competitive landscape of the Machine Learning and Artificial Intelligence in Healthcare Market. Furthermore, it offers extensive data relating to recent trends, technological advancements, tools, and methodologies. The research report analyzes the Machine Learning and Artificial Intelligence in Healthcare Market in a detailed and concise manner for better insights into the businesses.

Researchers of this report throw light on different terminologies. The competitive landscape section of the report covers the solution, products, services, and business overview. This Machine Learning and Artificial Intelligence in Healthcare Market research report covers several dynamic aspects such as drivers, restraints, and challenging factors. Different leading companies have been profiled to give a clear insight into the businesses.

The research study has taken the help of graphical presentation techniques such as infographics, charts, tables, and pictures. It provides guidelines for both established players and new entrants in Machine Learning and Artificial Intelligence in Healthcare Market.

If You Have Any Query, Ask Our Experts:

https://www.researchnreports.com/enquiry_before_buying.php?id=659734

Table of Contents:

The worldwide home healthcare market was valued at around USD 250.56 billion in 2017 and is expected to register a CAGR of 11.9% over the forecast period. The Asia Pacific is the fastest-growing region for the diabetes drug market.


Business Applications for Artificial Intelligence: An …

Discussion of artificial intelligence (AI) elicits a wide range of feelings. On one end of the spectrum is fear of job loss spurred by a bot revolution. On the opposite end is excitement about the overblown prospects of what people can achieve with machine augmentation.

But Dr. Mark Esposito wants to root the conversation in reality. Esposito is the co-founder of Nexus Frontier Tech and instructor of Harvard's Artificial Intelligence in Business: Creating Value with Machine Learning, a two-day intensive program.

Rather than thinking about what could be, he says businesses looking to adopt AI should look at what already exists.

AI has become the latest tech buzzword everywhere from Silicon Valley to China. But the first piece of AI, the artificial neuron, was developed in 1943 by neurophysiologist Warren McCulloch and logician Walter Pitts. Since then, we've come a long way in our understanding and development of models capable of comprehension, prediction, and analysis.

Artificial intelligence is already widely used in business applications, including automation, data analytics, and natural language processing. Across industries, these three fields of AI are streamlining operations and improving efficiencies.

Automation alleviates repetitive or even dangerous tasks. Data analytics provides businesses with insights never before possible. Natural language processing allows for intelligent search engines, helpful chatbots, and better accessibility for people who are visually impaired.

Other common uses for AI in business include:

Indeed, many experts note that the business applications of AI have advanced to such an extent that we live and work alongside it every day without even realizing it.

In 2018, Harvard Business Review predicted that AI stands to make the greatest impact in marketing services, supply chain management, and manufacturing.

Two years on, we are watching these predictions play out in real time. The rapid growth of AI-powered social media marketing, for instance, makes it easier than ever for brands to personalize the customer experience, connect with their customers, and track the success of their marketing efforts.

Supply chain management is also poised to make major AI-based advances in the next several years. Increasingly, process intelligence technologies will provide companies with accurate and comprehensive insight to monitor and improve operations in real time.

Other areas where we can expect to see significant AI-based advancements include the healthcare industry and data transparency and security.

On the patient side of the healthcare business, we are likely to see AI help with everything from early detection to immediate diagnosis. On the physician side, AI is likely to play a larger role in streamlining scheduling processes and helping to secure patient records.

Data transparency and security is another area where AI is expected to make a significant difference in the coming years. As customers become aware of just how much data companies are collecting, the demand for greater transparency into what data is collected, how it is used, and how it is secured will only grow.

Additionally, as Esposito notes, there continues to be significant opportunity to grow the use of AI in finance and banking, two sectors with vast quantities of data and tremendous potential for AI-based modernization, but which still rely heavily on antiquated processes.

For some industries, the widespread rollout of AI hinges on ethical considerations to ensure public safety.

While cybersecurity has long been a concern in the tech world, some businesses must now also consider physical threats to the public. In transportation, this is a particularly pressing concern.

For instance, how autonomous vehicles should respond in a scenario in which an accident is imminent is a big topic of debate. Tools like MIT's Moral Machine have been designed to gauge public opinion on how self-driving cars should operate when human harm cannot be avoided.

But the ethics question goes well beyond how to mitigate damage. It leads developers to question whether it's moral to place one human's life above another, and to ask whether factors like age, occupation, and criminal history should determine when a person is spared in an accident.

Problems like these are why Esposito is calling for a global response to ethics in AI.

"Given the need for specificity in designing decision-making algorithms, it stands to reason that an international body will be needed to set the standards according to which moral and ethical dilemmas are resolved," Esposito says in his World Economic Forum post.

It's important to stress the global aspect of these standards. Countries around the world are engaging in an AI arms race, quickly developing powerful systems. Perhaps too quickly.

If the race to develop artificial intelligence results in negligence to create ethical algorithms, the damage could be great. International standards can give developers guidelines and parameters that ensure machine systems mitigate risk and damage as well as a human, if not better.

According to Esposito, there's a lot of misunderstanding in the business world about AI's current capabilities and future potential. At Nexus, he and his partners work with startups and small businesses to adopt AI solutions that can streamline operations or solve problems.

Esposito discovered early on that many business owners assume AI can do everything a person can do, and more. A better approach involves identifying specific use cases.

"The more you learn about the technology, the more you understand that AI is very powerful," Esposito says. "But it needs to be very narrowly defined. If you don't have a narrow scope, it doesn't work."

For companies looking to leverage AI, Esposito says the first step is to look at which parts of your current operations can be digitized. Rather than dreaming up a magic-bullet solution, businesses should consider existing tech that can free up resources or provide new insights.

"The low-hanging fruit is recognizing where in the value chain they can improve operations," Esposito says. "AI doesn't start with AI. It starts at the company level."

For instance, companies that have already digitized payroll will find that they're collecting a lot of data that could help forecast future costs. This allows businesses to hire and operate with more predictability, as well as streamline tasks for accounting.

One company that's successfully integrated AI tech into multiple aspects of its business is Unilever, a consumer goods corporation. In addition to streamlining hiring and onboarding, AI is helping Unilever get the most out of its vast amounts of data.

Data informs much of what Unilever does, from demand forecasts to marketing analytics. The company observed that its data sources were coming from varying interfaces and APIs, according to Diginomica. This both hindered access and made the data unreliable.

In response, Unilever developed its own platforms to store the data and make it easily accessible for its employees. Augmented with Microsoft's Power BI tool, Unilever's platform collects data from both internal and external sources. It stores the data in a universal data lake where it's preserved to be used indefinitely for anything from business logistics to product development.

Amazon is another early adopter. Even before its virtual assistant Alexa was in every other home in America, Amazon was an innovator in using machine learning to optimize inventory management and delivery.

With a fully robust, AI-empowered system in place, Amazon was able to make a successful foray into the food industry via its acquisition of Whole Foods, which now uses Amazon delivery services.

Esposito says this kind of scalability is key for companies looking to develop new AI products. They can then apply the tech to new markets or acquired businesses, which is essential for the tech to gain traction.

Both Unilever and Amazon are exemplary because they're solving current problems with technology that's already available. And they're predicting industry disruption so they can stay ahead of the pack.

Of course, these two examples are large corporations with deep pockets. But Esposito believes that most businesses thinking about AI realistically and strategically can achieve their goals.

Looking ahead from 2020, it is increasingly clear that AI will only work in conjunction with people, not instead of people.

"Every major place where we have multiple dynamics happening can really be improved by these technologies," Esposito says. "And I want to reinforce the fact that we want these technologies to improve society, not displace workers."

To ease fears over job loss, Esposito says business owners can frame the conversation around creating new, more functional jobs. As technologies improve efficiencies and create new insights, new jobs that build on those improvements are sure to arise.

"Jobs are created by understanding what we do and what we can do better," Esposito says.

Additionally, developers should focus on creating technology that is probabilistic, as opposed to deterministic. In a probabilistic scenario, AI could predict how likely a person is to repay a loan based on their history, then give the lender a recommendation. Deterministic AI would simply make the decision itself, ignoring any uncertainty.
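To make the contrast concrete, here is a minimal sketch in R; the 0.7 threshold and the repayment probability are invented for illustration and are not from any real lending system:

```r
# Invented example score: the model's estimated probability of repayment.
p_repay <- 0.83

# Probabilistic AI: surface the uncertainty plus a recommendation,
# leaving the final decision to the human lender.
recommendation <- sprintf(
  "Estimated repayment probability %.0f%%: %s",
  100 * p_repay,
  if (p_repay > 0.7) "recommend approval" else "recommend manual review"
)

# Deterministic AI: collapse the same score into a final decision,
# discarding the uncertainty entirely.
decision <- if (p_repay > 0.7) "approved" else "denied"
```

The probabilistic version keeps a human in the loop, which is exactly the cooperation Esposito describes below.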

"There needs to be cooperation between machines and people," Esposito says. "But we will never invite machines to make a decision on behalf of people."

See more here:
Business Applications for Artificial Intelligence: An ...

Artificial Intelligence and IP – WIPO

(Photo: WIPO)

AI and IP policy

The growth of AI across a range of technical fields raises a number of policy questions with respect to IP. The main focus of those questions is whether the existing IP system needs to be modified to provide balanced protection for machine-created works and inventions, for AI itself, and for the data AI relies on to operate. WIPO has started an open process to lead the conversation on the IP policy implications.

From stories to reports, news, and more, we publish content on the topics most discussed in the field of AI and IP.

In a world in which AI is playing an ever-expanding role, including in the processes of innovation and creativity, Professor Ryan Abbott considers some of the challenges that AI is posing for the IP system.

Saudi inventor Hadeel Ayoub, founder of the London-based startup BrightSign, talks about how she came to develop BrightSign, an AI-based smart glove that allows sign language users to communicate directly with others without the assistance of an interpreter.

How big data, artificial intelligence, and other technologies are changing healthcare.

British-born computer scientist Andrew Ng, a leading thinker on AI, discusses the transformative power of AI and the measures required to ensure that it benefits everyone.

AI is set to transform our lives. But what exactly is AI, and what are the techniques and applications driving innovation in this area?

David Hanson, maker of Sophia the Robot and CEO and founder of Hanson Robotics, shares his vision of a future built around superintelligence.


MS in Artificial Intelligence | Artificial Intelligence

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, though assistantships are sometimes awarded at other times. Applicants must submit a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit TOEFL results and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the site of the Bursar's Office.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.


What Are the Advantages of Artificial Intelligence …

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to having AI machines make decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions, whereas humans may be swayed by personal implications and emotions when making similar decisions. Famous scientist Stephen Hawking, despite having a motor neuron disease, used AI-assisted technology to communicate.


What is Artificial Intelligence? | Azure Blog and Updates …

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." (Encyclopedia Britannica)

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." (Wikipedia)

How useful are these definitions? What exactly are "tasks commonly associated with intelligent beings"? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by "intelligence" in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from the sophisticated software systems of the past?

It could be argued that any attempt to define Artificial Intelligence is somewhat futile, since we would first have to properly define "intelligence", a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition of what passes for Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly described as "intelligent" than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what was referred to as Artificial Intelligence more than 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known at the time as expert systems, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it required far too many rules and ignored something fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and say the words "Good weekend?" Your immediate response is likely to be something like "yes" or "fine thanks". This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good weekend?" I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

And more.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes", I had responded with "No, I mean it was a good football game at the weekend, wasn't it?". It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good weekend?", and adjust the probability of the expected response accordingly so that you can respond correctly the next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "Good weekend?" to you.
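The article's original R listing did not survive; a minimal sketch of such a data set, with hypothetical column names and values, might look like this:

```r
# Each row records one occasion on which this person said "Good weekend?",
# the surrounding context, and the response that turned out to be appropriate.
# (Column names and values are assumptions for illustration.)
mind <- data.frame(
  BigGameLastWeekend  = c("no", "no", "no", "no", "yes"),
  GoodWeekendResponse = c("yes", "yes", "yes", "no", "yes"),
  stringsAsFactors = TRUE
)
```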

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values, in this case "yes" and "no". For brevity, the response in most cases is "yes".

We can fit the data to a logistic regression model:
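The fitted model in the original listing was lost; assuming the sketch data frame above (with its hypothetical BigGameLastWeekend column), the fit would look something like:

```r
# Logistic regression: estimate the probability of the "yes" response
# given the context recorded in the data frame.
model <- glm(GoodWeekendResponse ~ BigGameLastWeekend,
             data = mind, family = binomial)
```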

Now what happens if we try to make a prediction with that model, where the expected response is different from what we have previously recorded? In this case, I am expecting the response to be "Go England!". Below is some more code to add the prediction. For illustration, we just hardcode the new input data; the output is shown in bold:
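The original prediction listing was also lost; continuing the hypothetical sketch above, the prediction-and-feedback step might look like this (since glm is binary, the sketch falls back to simple conditional frequencies once a third response class appears):

```r
# Predict against new, hardcoded input: the model returns the estimated
# probability that "yes" is the right response.
newdata <- data.frame(BigGameLastWeekend = "yes")
predict(model, newdata, type = "response")

# The actual reply turned out to be "Go England!", so we record it and
# re-estimate: the likeliest response for this context is now read off
# a conditional frequency table.
mind <- rbind(mind,
              data.frame(BigGameLastWeekend  = "yes",
                         GoodWeekendResponse = "Go England!"))
prop.table(table(mind$GoodWeekendResponse[mind$BigGameLastWeekend == "yes"]))
```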

The initial prediction "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of 50 percent based on the current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response!

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to data coming in related to one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for the different responses that are expected for different combinations of data. This same principle underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn't this exactly what machine learning algorithms do: make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages to building AI systems, but as discussed and illustrated above, many of these advantages pivot around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time-consuming to procure, engineer, and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and data engineers to avoid reinventing wheels and to consume reusable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.
