
Category Archives: Artificial Intelligence

Artificial intelligence grows a nose | Science | AAAS – Science Magazine

Posted: February 20, 2017 at 7:17 pm

Computer programs' odor predictions are on the nose.

Image Source/Alamy Stock Photo

By Robert F. Service, Feb. 19, 2017, 8:15 PM

Predicting color is easy: Shine a light with a wavelength of 510 nanometers, and most people will say it looks green. Yet figuring out exactly how a particular molecule will smell is much tougher. Now, 22 teams of computer scientists have unveiled a set of algorithms able to predict the odor of different molecules based on their chemical structure. It remains to be seen how broadly useful such programs will be, but one hope is that they may help fragrance makers and food producers design new odorants with precisely tailored scents.

This latest smell prediction effort began with a recent study by olfactory researcher Leslie Vosshall and colleagues at The Rockefeller University in New York City, in which 49 volunteers rated the smell of 476 vials of pure odorants. For each one, the volunteers labeled the smell with one of 19 descriptors, including "fish," "garlic," "sweet," or "burnt." They also rated each odor's pleasantness and intensity, creating a massive database of more than 1 million data points for all the odorant molecules in their study.

When computational biologist Pablo Meyer learned of the Rockefeller study 2 years ago, he saw an opportunity to test whether computer scientists could use it to predict how people would assess smells. Besides working at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Meyer heads something called the DREAM challenges, contests that ask teams of computer scientists to solve outstanding biomedical problems, such as predicting the outcome of prostate cancer treatment based on clinical variables or detecting breast cancer from mammogram data. "I knew from graduate school that olfaction was still one of the big unknowns," Meyer says. Even though researchers have discovered some 400 separate odor receptors in humans, he adds, just how they work together to distinguish different smells remains largely a mystery.

In 2015, Meyer and his colleagues set up the DREAM Olfaction Prediction Challenge. They divided the Rockefeller group's data set into three parts. Participants were given the volunteer ratings for two-thirds of the odors, along with the chemical structure of the molecules that produced them. They were also given more than 4800 descriptors for each molecule, such as the atoms included, their arrangement, and geometry, which constituted a separate set of more than 2 million data points. These data were then used to train the teams' computer models in predicting smells from chemical structural information. The remaining groups of data, two sets of 69 ratings and their corresponding chemical information, were used to test how well the models predicted both how an average person would rate an odor and how each of the 49 individuals would rate them.
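In machine-learning terms, the challenge posed a standard supervised regression problem: chemical descriptors in, perceived ratings out. Here is a minimal Python sketch of that workflow using scikit-learn on random stand-in data; the array shapes mirror the challenge, but the data, model choice, and evaluation are illustrative assumptions, not the winning entries:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Stand-in for the challenge data: one row per odorant molecule, one
    # column per chemical descriptor (atoms, arrangement, geometry, ...),
    # and one averaged human rating per molecule. Real data would replace this.
    rng = np.random.default_rng(0)
    X = rng.random((476, 4800))   # 476 molecules x ~4,800 descriptors
    y = rng.random(476) * 100     # averaged perceived-intensity ratings

    # Train on roughly two-thirds of the odors and hold out the rest,
    # mirroring the challenge's division into training and test sets.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1 / 3, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Score by correlating predicted ratings against held-out human ratings.
    predictions = model.predict(X_test)
    print(np.corrcoef(predictions, y_test)[0, 1])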

Twenty-two teams from around the globe took up the challenge. Many did well, but two stood out. A team led by Yuanfang Guan, a computer scientist at the University of Michigan in Ann Arbor, scored best at predicting how individual subjects rate smells. Another team led by Richard Gerkin at Arizona State University in Tempe best predicted how all the participants on average would rate smells, Meyer and his colleagues report today in Science.

"We learned that we can very specifically assign structural features to descriptions of the odor," Meyer says. For example, molecules with sulfur groups tend to produce a garlicky smell, and a chemical structure similar to that of vanillin, from vanilla beans, predicts whether subjects will perceive a "bakery" smell.

Meyer suggests such models may help fragrance and flavor companies come up with new molecules tuned to trigger particular smells, such as sandalwood or citrus. But Avery Gilbert, a biological psychologist at Synesthetics in Fort Collins, Colorado, and a longtime veteran of the fragrance and flavor industry, says he's not so sure. Gilbert says the new work is useful in that it provides such a large data set. But the set of 19 verbal descriptors for different scents, he says, is too limited. "That's really a slim number of attributes," he says. Alternative studies have had volunteers use 80 or more categories to rate different smells.

The upshot is that even though the current study showed computers can predict which of 19 words people will use to describe this set of odors, it's not clear whether the same artificial intelligence programs would rise to the challenge if there were more categories. "If you had different descriptors, you might have had different models predict them best. So I'm not sure where that leaves us," Gilbert says. Perhaps it serves mostly as a reminder that odor perception remains a challenge both for human scientists and artificial intelligence.


Top 3 Trends Impacting the Artificial Intelligence Market in the US Education Sector Through 2021: Technavio – Yahoo Finance

Posted: at 7:17 pm

LONDON--(BUSINESS WIRE)--

Technavio's latest market research report on the artificial intelligence market in the US education sector provides an analysis of the most important trends expected to impact the market outlook from 2017 to 2021. Technavio defines an emerging trend as a factor that has the potential to significantly impact the market and contribute to its growth or decline.

This Smart News Release features multimedia. View the full release here: http://www.businesswire.com/news/home/20170220005208/en/

Jhansi Mary, a lead analyst from Technavio specializing in education technology research, says, "The artificial intelligence market in the US education sector is expected to grow at a spectacular CAGR of more than 47%. The US education system is a pioneer in implementing education technology solutions with the objective of improving the quality of education imparted to students and, consequently, graduation rates. Therefore, many public and private educational institutions in the US are investing large resources in the digitization of education."

Request a sample report: http://www.technavio.com/request-a-sample?report=56665

Technavio's sample reports are free of charge and contain multiple sections of the report, including the market size and forecast, drivers, challenges, trends, and more.

The top three emerging trends driving the artificial intelligence market in the US education sector, according to Technavio education research analysts, are:

Artificial intelligence-empowered educational games

Educational games give teachers a useful medium for teaching concepts in an interactive and engaging manner. Such games not only generate curiosity but also motivate students through reward points, badges, and levels. Vendors are incorporating artificial intelligence features into games to enhance interactivity. These games embody adaptive learning, so that students receive frequent and timely suggestions for a guided learning experience, and they improve those adaptive learning features by deploying machine learning, a form of artificial intelligence.

Duolingo, an open-source provider of language learning courses and games, received an investment of USD 45 million from Google Capital in 2015. This investment was used to bolster the deployment of machine learning and NLP technologies in their products.

Assists collaborative learning model

With new learning models, such as blended learning and flipped classrooms, evolving, there is a growing focus among educators on creating an environment that facilitates collaborative learning among students. Teachers have been introducing various group activities, such as role plays and problem-solving exercises, to motivate students to learn together.

Artificial intelligence will yield a more scientific outlook toward developing collaboration with practical methods such as virtual agents, adaptive group formation, intelligent facilitation, and expert moderation. Such defined models will ease the implementation and assessment of collaborative learning models.

"All these methods have been designed to foster efficient group activities by considering each student's learning needs and cognitive skills. While performing activities, students are provided with appropriate guidance, advice, and inputs by the artificial intelligence system. This ensures teachers obtain the desired outcomes," says Jhansi.

Facilitates better course designing activities

Content analytics consists of techniques that assist course content designers in designing, developing, modifying, and upgrading content based on the needs of learners. Owing to the benefits it provides, it has increasingly become a part of the technology-enabled education system. Artificial intelligence has taken this technique a step further.

Companies such as IBM are combining cognitive neuroscience and cognitive computing to harness the benefits of artificial intelligence in educational content. IBM uses research in the field of cognitive neuroscience to understand human learning and other cognitive processes. These inputs help create content that is more engaging and interactive for learners.

Coursera, a massive open online course (MOOC) provider, offers content backed by artificial intelligence software. The software helps close gaps in education content, such as discrepancies between lecture notes and educational materials. The presence of such smart software will therefore positively impact market growth.


Become a Technavio Insights member and access all three of these reports for a fraction of their original cost. As a Technavio Insights member, you will have immediate access to new reports as they're published, in addition to all 6,000+ existing reports covering segments like K12 and higher education, and school and college essentials. This subscription nets you thousands in savings, while staying connected to Technavio's constantly transforming research library, helping you make informed business decisions more efficiently.

About Technavio

Technavio is a leading global technology research and advisory company. The company develops over 2000 pieces of research every year, covering more than 500 technologies across 80 countries. Technavio has about 300 analysts globally who specialize in customized consulting and business research assignments across the latest leading edge technologies.

Technavio analysts employ primary as well as secondary research techniques to ascertain the size and vendor landscape in a range of markets. Analysts obtain information using a combination of bottom-up and top-down approaches, besides using in-house market modeling tools and proprietary databases. They corroborate this data with the data obtained from various market participants and stakeholders across the value chain, including vendors, service providers, distributors, re-sellers, and end-users.

If you are interested in more information, please contact our media team at media@technavio.com.

View source version on businesswire.com: http://www.businesswire.com/news/home/20170220005208/en/



"brain scans" of artificial intelligence processes – Boing Boing

Posted: at 7:17 pm

Graphcore produced a series of striking images of computational graphs mapped to its "Intelligent Processing Unit."

The graph compiler builds up an intermediate representation of the computational graph to be scheduled and deployed across one or many IPU devices. The compiler can display this computational graph, so an application written at the level of a machine learning framework reveals an image of the computational graph which runs on the IPU.

The image below shows the graph for the full forward and backward training loop of AlexNet, generated from a TensorFlow description.

Our Poplar graph compiler has converted a description of the network into a computational graph of 18.7 million vertices and 115.8 million edges. This graph represents AlexNet as a highly-parallel execution plan for the IPU. The vertices of the graph represent computation processes and the edges represent communication between processes. The layers in the graph are labelled with the corresponding layers from the high level description of the network. The clearly visible clustering is the result of intensive communication between processes in each layer of the network, with lighter communication between layers.
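Stripped of scale, the structure being described is just a directed graph whose vertices are compute steps and whose edges carry data between them. A toy Python illustration of the abstraction (not Graphcore's actual Poplar API):

    # Toy computational graph: each vertex is an operation, each edge is
    # communication between operations. Poplar builds the same structure
    # at vastly larger scale (18.7M vertices, 115.8M edges for AlexNet).
    graph = {
        "input":   ["conv1"],
        "conv1":   ["relu1"],
        "relu1":   ["pool1"],
        "pool1":   ["fc1"],
        "fc1":     ["softmax"],
        "softmax": [],
    }

    num_vertices = len(graph)
    num_edges = sum(len(targets) for targets in graph.values())
    print(f"{num_vertices} vertices, {num_edges} edges")  # 6 vertices, 5 edges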



Artificial intelligence set to transform the patient experience, but many questions still to be answered – Healthcare IT News

Posted: at 7:17 pm

ORLANDO -- From Watson to Siri, Alexa to Cortana, consumers and patients have become much more familiar with artificial intelligence and natural language processing in recent years. Pick your terminology: machine learning, cognitive computing, neural networks/deep learning. All are becoming more commonplace in our smartphones and in our kitchens, and as they continue to evolve at a rapid pace, expectations are high for how they'll impact healthcare.

Skepticism is, too. And even fear.

As it sparks equal parts doubt and hope (and not a little hype) among patients, physicians and technologists, a panel of IT experts at HIMSS17 discussed the future of AI in healthcare on Sunday afternoon.

Kenneth Kleinberg, managing director at The Advisory Board Company, spoke with execs from two medical AI startups: Cory Kidd, CEO of Catalia Health, and Jay Parkinson, MD, founder and CMO of Sherpaa.

Catalia developed a small robot, the Mabu Personal Healthcare Companion, aimed at assisting with "long-term patient engagement." It's able to have tailored conversations with patients that can evolve over time as the platform, developed using principles of behavioral psychology, gains daily data about treatment plans, health challenges and outcomes.

Sherpaa is billed as an "on-demand doctor practice" that connects subscribers with physicians, via its app, who can make diagnoses, order lab tests and imaging and prescribe medications at locations near the patient. "Seventy percent of the time, the doctors have a diagnosis," said Parkinson. "Most cases can be solved virtually." Rather than just a virtual care platform, it enables "care coordination with local clinicians in the community," he said.

In this fast-changing environment, there are many questions to ask: "We're starting to see these AI systems appear in other parts of our lives," said Kleinberg. "How valuable are they? How capable are they? What kind of authority will these systems attain?"

And also: "What does it mean to be a physician and patient in this new age?"

Kidd said he's a "big believer when it's used right."

Parkinson agreed: "It has to be targeted to be successful."

Another important question: For all the hype and enthusiasm about AI, "where on the inflection curve are we?" asked Kleinberg. "Is it going to take off and get a lot better? And does it offer more benefits at the patient engagement level? Or as an assistant to clinicians?"

For Kidd, it's clearly the former, as Catalia's technology deploys AI to help patients manage their own chronic conditions.

"The kinds of algorithms we're developing, we're building up psychological models of patients with every encounter," he explained. "We start with two types of psychologies: The psychology of relationships how people develop relationships over time as well as the psychology of behavior change: How do we chose the right technique to use with this person right now?"

The platform also gets "smarter" as it becomes more attuned to "what we call our biographical model, which is kind of a catch-all for everything else we learn in conversation," he said. "This man has a couple cats, this woman's son calls her every Sunday afternoon, whatever it might be that we'll use later in conversations."

Consumer applications driving clinical innovations

AI is fast advancing in healthcare in large part because it's evolving so quickly in the consumer space. Take Apple's Siri, for instance: "The more you talk to it, the better it makes our product," said Kidd. "Literally. We're licensing the same voice recognition and voice output technology that's running on your iPhone right now."

For his part, Parkinson sees problems with simply adding AI technology onto the doctor-patient relationship as it currently exists. Most healthcare encounters involve "an oral conversation between doctor and patient," he said, where "retention is 15 percent or less."

For AI to truly be an effective augmentation of clinical practices, that conversation "needs to be less oral and more text-driven," he said. "I'm worried about layering AI on a broken delivery process."

But machine learning is starting to change the game in areas large and small throughout healthcare. Kleinberg pointed to the area of image recognition. IBM, for instance, made headlines when it acquired Merge Healthcare for $1 billion in 2015, allowing Watson to "see" medical images, the largest data source in healthcare.

Then there are the various iPhone apps that say they can help diagnose skin cancer with photos users take of their own moles. Kleinberg said he mentioned the apps to a dermatologist friend of his.

"I want to quote him very carefully: He said, 'Naaaaahhhhhh.'"

But Parkinson took a different view: "About 25 percent of our cases have photos attached," he said. "Right now, if it's a weird mole we're sending people out to see a dermatologist. But I would totally love to replace that (doctor) with a robot. And I don't think that's too far off."

In the near term, however, "you would be amazed at the image quality that people taking photographs think are good photographs," he said. "So there's a lot of education for the patient about how to take a picture."

The patient's view

If artificial intelligence is having a promising, if controversial, impact so far on the clinical side, one of the most important aspects of this evolution still has some questions to answer. Most notably: What do patients think?

On one hand, Kleinberg pointed to AI pilots where patients paired with humanoid robots "felt a sense of loss" after the test ended. "One woman followed the robot out and waved goodbye to it."

On the other, "some people are horrified that we would be letting machines play a part in a role that should be played by humans," he said.

The big question, then: "Do we have a place now for society and a system such as this?" he asked.

"The first time I put something like this in a patient's home was 10 years ago now," said Kidd. "We've seen, with the various versions of AI and robots, that people can develop an attachment to them. At the same time, typical conversation is two or three minutes. It's not like people spend all day talking with these."

It's essential, he argued, to be up front with patients about just what the technology can and should do.

"How you introduce this, and how you couch the terminology around this technology and what it can and can't do is actually very important in making it effective for patients," said Kidd. "We don't try to convince anyone that this is a doctor or a nurse. As long as we set up the relationship in the right way so people understand how it works and what it can do, it can be very effective.

"There is this cultural conception that AI and robotics can be scary," he conceded. "But what I've seen, putting this in front of patients is that this is a tool that can do something and be very effective, and people like it a lot."

HIMSS17 runs from Feb. 19-23, 2017, at the Orange County Convention Center.

This article is part of our ongoing coverage of HIMSS17. Visit Destination HIMSS17 for previews, reporting live from the show floor and after the conference.


Utrip raises $4M to build out artificial intelligence-based travel planning platform – GeekWire

Posted: at 7:17 pm

Utrip CEO Gilad Berenstein. (Utrip Photo)

Utrip, a Seattle startup that uses machine learning to help travelers plan their trips, just closed a $4 million funding round.

Investors in the Series A round include Plug and Play, Tiempo Capital, Acorn Ventures, and executives from companies such as Apple and Costco, participating as angel investors. The cash will go toward Utrip's machine learning and data science operations, which fuel the platform's recommendation engine.

"One of the things that our travelers love about Utrip is the depth with which we curate destinations and go beyond those top 10 lists that are available everywhere to offer experiences that are really unique and local and authentic for that destination," said Utrip CEO Gilad Berenstein. "That's one big priority, continuing to build out our machine learning capabilities as well as our human expert network, our chefs, artists, historians, etcetera."

Utrip also plans to grow its team by about a third this year, primarily focusing on sales and marketing. The startup currently employs 18 in Seattle.

Utrip's itinerary-planning tools are free for consumers. The startup makes money by licensing its software and building products for businesses in the hospitality space, like hotels and cruise lines.

"All of our partners, be it small or big, be it hotel or cruise line, airline, etcetera, are interested in offering personalized experiences and recommendations to their potential guests," said Berenstein. "So, ultimately, this goes all the way from making great recommendations and itineraries on their sites to doing personalized email marketing and personalized social media promotion, where they can show the right recommendation, the right experience, to the right traveler."

Berenstein and his family moved to the U.S. from Israel when he was a child. As an immigrant entrepreneur and the founder of a travel-based business, he has a few thoughts on the crush of immigration-related news from the past few weeks.

"I'm a believer in immigration and I'm a believer in open borders, and I think that travel is this really beautiful, eye-opening experience," he said. "When people travel they get to see the world, and we all get to remember that our perspective is not the only perspective and that our way of life is not the only way of life. I think that if we look at some of the craziness right now that's happening in the world, more travel and more connections between people in different countries and continents is only for the good."

Here is a full list of investors participating in Utrip's Series A: Plug and Play, Tiempo Capital, Acorn Ventures, SWAN Venture Fund and W&W Capital. Angel investors include Apple Inc. Treasurer Gary Wipfler; Costco CFO Richard Galanti; Savers Inc. CEO Ken Alterman; H. S. Wright III, CEO and founder of Seattle Hospitality Group as well as a partner in family businesses that own the Seattle Space Needle, Chihuly Garden and Glass and the Sheraton Seattle Hotel; veteran hotel executive Carla Murray; hotelier Craig Schafer; and Neal Dempsey, a partner at Bay Partners.


Why Our Conversations on Artificial Intelligence Are Incomplete – The Wire

Posted: February 19, 2017 at 11:15 am

Conversations about artificial intelligence must focus on jobs as well as on questions of AI's purpose, values, accountability and governance.

There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered. Credit: YouTube

Artificial Intelligence (AI) is no longer the subject of science fiction and is profoundly transforming our daily lives. While computers have already been mimicking human intelligence for some decades now using logic and if-then rules, massive increases in computational power are now facilitating the creation of deep learning machines, i.e. algorithms that permit software to train itself to recognise patterns and perform tasks, like speech and image recognition, through exposure to vast amounts of data.

These deep learning algorithms are everywhere, shaping our preferences and behaviour. Facebook uses a set of algorithms to tailor what news stories an individual user sees and in what order. Bot activity on Twitter suppressed a protest against Mexico's now-president by overloading the hashtag used to organise the event. The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Wealth management firms are increasingly using algorithms to decide where to invest money. The practice of traders shouting and using hand signals to buy and sell commodities has become outdated on Wall Street as traders have been replaced by machines. And bots are now being used to analyse legal documents to point out potential risks and areas of improvement.

Much of the discussion on AI in popular media has been through the prism of job displacement. Analysts, however, differ widely on the projected impact: a 2016 study by the Organisation for Economic Co-operation and Development estimates that 9% of jobs will be displaced in the next two years, whereas a 2013 study by Oxford University estimates that job displacement will be 47%. The staggering difference illustrates how much the impact of AI remains speculative.

Responding to the threat of automation on jobs will undoubtedly require revising existing education and skilling paradigms, but at present, we also need to consider more fundamental questions about the purposes, values and accountability of AI machines. Interrogating these first-order concerns will eventually allow for a more systematic and systemic response to the job displacement challenge as well.

First, what purpose do we want to direct AI technologies towards? AI technologies can undoubtedly create tremendous productivity and efficiency gains. AI might also allow us to solve some of the most complex problems of our time. But we need to make political and social choices about the parts of human life in which we want to introduce these technologies, at what cost and to what end.

Technological advancement has resulted in a growth in national incomes and GDP, yet the share of national incomes that has gone to labour has dropped in developing countries. Productivity and efficiency gains are thus not in themselves conclusive indicators of where to deploy AI; rather, we need to consider the distribution of these gains. Productivity gains are also not equally beneficial to all: incumbents with data and computational power will be able to use AI to gain insight and market advantage.

Moreover, a bot might be able to make more accurate judgments about worker performance and future employability, but we need to have a more precise handle on the problem that is being addressed by such improved accuracy. AI might be able to harness the power of big data to address complex social problems. Arguably, however, our inability to address these problems has not been a result of incomplete data; for a number of decades now, we have had enough data to make reasonable estimates about the appropriate course of action. It is the lack of political will and social and cultural behavioural patterns that have posed obstacles to action, not the lack of data. The purpose of AI in human life must not be merely assumed as obvious, or subsumed under the banner of innovation, but be seen as involving complex social choices that must be steered through political deliberations.

This then leads to a second question, about the governance of AI: who should decide where AI is deployed, how should these decisions be made, and on what principles and priorities? Technology companies, particularly those that have the capital to make investments in AI capacities, are predominantly leading current discussions. Eric Horvitz, managing director of the Microsoft Research Lab, launched the One Hundred Year Study on Artificial Intelligence based out of Stanford University. The Stanford report makes the case for industry self-regulation, arguing that attempts to regulate AI in general would be misguided, as there is no clear definition of AI and the risks and considerations are very different in different domains.

The White House Office of Science and Technology Policy recently released a report on Preparing for the Future of Artificial Intelligence, but accorded a minimal role to the government as regulator. Rather, the question of governance is left to the supposed ideal of innovation: AI will fuel innovation, which will fuel economic growth, and this will eventually benefit society as well. The trouble with such innovation-fuelled self-regulation is that development of AI will be concentrated in those areas in which there is a market opportunity, not necessarily areas that are the most socially beneficial. Technology companies are not required to consider issues of long-term planning and the sharing of social benefits, nor can they be held politically and socially accountable.

Earlier this year, a set of principles for Beneficial AI was articulated at the Asilomar Conference; the star speakers and panelists were predominantly from large technology companies like Google, Facebook and Tesla, alongside a few notable scientists, economists and philosophers. Notably missing from the list of speakers were the government, journalists and the public and their concerns. The principles make all the right points, clustering around the ideas of beneficial intelligence, alignment with human values and common good, but they rest on fundamentally tenuous value questions about what constitutes human benefit, a question that demands much wider and more inclusive deliberation, and one that must be led by government for reasons of democratic accountability and representativeness.

What is noteworthy about the White House report in this regard is the attempt to craft a public deliberative process: the report followed five public workshops and an Official Request for Information on AI.

The trouble is not only that most of these conversations about the ethics of AI are being led by the technology companies themselves, but also that governments and citizens in the developing world are yet to start such deliberations; they are in some sense the passive recipients of technologies that are being developed in specific geographies but deployed globally. The Stanford report, for example, attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. Surely these concerns will look very different across much of the globe. The conversation in India has mostly been clustered around issues of jobs and the need for spurring AI-based innovation to accelerate growth and safeguard strategic interests, with almost no public deliberation around broader societal choices.

The concentration of an AI epistemic community in certain geographies and demographics leads to a third key question about how artificially intelligent machines learn and make decisions. As AI becomes involved in high-stakes decision-making, we need to understand the processes by which such decision making takes place. AI consists of a set of complex algorithms built on data sets. These algorithms will tend to reflect the characteristics of the data that they are fed. This then means that inaccurate or incomplete data sets can also result in biased decision making. Such data bias can occur in two ways.

First, if the data set is flawed or inaccurately reflects the reality it is supposed to represent. If, for example, a system is trained on photos of people that are predominantly white, it will have a harder time recognising non-white people. This kind of data bias is what led a Google application to tag black people as gorillas, or the Nikon camera software to misread Asian people as blinking. Second, if the process being measured through data collection itself reflects long-standing structural inequality. ProPublica found, for example, that software being used to assess the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
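The ProPublica finding boils down to comparing error rates across groups. A minimal Python sketch of that kind of audit, on invented labels (the arrays and numbers here are hypothetical, chosen only to show the calculation):

    import numpy as np

    # Hypothetical risk-tool output: 1 = flagged high risk, 0 = not flagged,
    # alongside whether each person actually reoffended, split by group.
    flagged    = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
    reoffended = np.array([0, 1, 0, 0, 0, 1, 1, 0, 0, 1])
    group      = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def false_positive_rate(g):
        # Among people in group g who did NOT reoffend, the share flagged
        # high risk: the disparity ProPublica measured.
        mask = (group == g) & (reoffended == 0)
        return flagged[mask].mean()

    for g in ("a", "b"):
        print(g, false_positive_rate(g))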

What these examples suggest is that AI systems can end up reproducing existing social bias and inequities, contributing towards the further systematic marginalisation of certain sections of society. Moreover, these biases can be amplified as they are coded into seemingly technical and neutral systems that penetrate across a diversity of daily social practices. It is, of course, an epistemic fallacy to assume that we can ever have complete data on any social or political phenomena or peoples. Yet there is an urgent need to improve the quality and breadth of our data sets, as well as to investigate any structural biases that might exist in these data; how we would do this is hard enough to imagine, leave alone implement.

The danger that AI will reflect and even exacerbate existing social inequities leads finally to the question of the agency and accountability of AI systems. Algorithms represent much more than code, as they exercise authority on behalf of organisations across various domains and have real and serious consequences in the analog world. However, the difficult question is whether this authority can be considered a form of agency that can be held accountable and culpable.

Recent studies suggest, for example, that algorithmic trading between banks was at least partly responsible for the financial crisis of 2008; the crash of the sterling in 2016 has similarly been linked to a panicky bot-spiral. Recently, both Google's and Tesla's self-driving cars caused crashes, in the Tesla case a fatal one: a man died while using Tesla's autopilot function. Legal systems across the world are not yet equipped to respond to the issue of culpability in such cases, and the many more that we are yet to imagine. Neither is it clear how AI systems will respond to ethical conundrums like the famous trolley problem, nor the manner in which human-AI interaction on ethical questions will be influenced by cultural differences across societies or time. The question comes down to the legal liability of AI, whether it should be considered a subject or an object.

The trouble with speaking about accountability also stems from the fact that AI is intended to be a learning machine. It is this capacity to learn that marks the newness of the current technological era, and this capacity for learning that makes it possible to even speak of AI agency. Yet machine learning is not a hard science; rather, its outcomes are unpredictable and can only be fully known after the fact. Until Google's app labels a black person as a gorilla, Google may not even know what the machine has learnt; this leads to an incompleteness problem for political and legal systems that are charged with the governance of AI.

The question of accountability also comes down to one of visibility. Any inherent bias in the data on which an AI machine is programmed is invisible and incomprehensible to most end users. This inability to review the data reduces the agency and capacity of individuals to resist, or even recognise, the discriminatory practices that might result from AI. AI technologies thus exercise a form of invisible but pervasive power, which then also obscures the possible points or avenues for resistance. The challenge is to make this power visible and accessible. Companies responsible for these algorithms keep their formulas secret as proprietary information. However, the far-ranging impact of AI technologies necessitates algorithmic transparency, even if it reduces the competitive advantage of companies developing these systems. A profit motive cannot be blindly prioritised if it comes at the expense of social justice and accountability.

When we talk about AI, we need to talk about jobs: both the jobs that will be lost and the opportunities that will arise from innovation. But we must also tether these conversations to questions about the purpose, values, accountability and governance of AI. We need to think about the distribution of productivity and efficiency gains and broader questions of social benefit and well-being. Given the various ways in which AI systems exercise power in social contexts, that power needs to be made visible to facilitate conversations about accountability. And responses have to be calibrated through public engagement and democratic deliberation; the ethics and governance questions around AI cannot be left to market forces alone, albeit in the name of innovation.

Finally, there is a need to move beyond the universalising discourse around technology: technologies will be deployed globally and with global impact, but the nature of that impact will be mediated through local political, legal, cultural and economic systems. There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered, and to provide resources and opportunities for broader and more diverse public engagement.

Urvashi Aneja is Founding Director of Tandem Research, a multidisciplinary think tank based in Socorro, Goa that produces policy insights around issues of technology, sustainability and governance. She is Associate Professor at the Jindal School of International Affairs and Research Fellow at the Observer Research Foundation.



Artificial Intelligence & Bias – Huffington Post

Posted: at 11:15 am

By Jackson Grigsby, Harvard Class of 2020

On Thursday, February 16th, the JFK Jr. Forum at the Harvard Institute of Politics hosted a conversation on the past, present, and future of Artificial Intelligence with Harvard Kennedy School Professor of Public Policy Iris Bohnet, Harvard College Gordon McKay Professor of Computer Science Cynthia Dwork, and Massachusetts Institute of Technology Professor Alex "Sandy" Pentland.

Moderated by Sheila Jasanoff, Kennedy School Pforzheimer Professor of Science and Technology Studies, the conversation focused on the potential benefits of Artificial Intelligence as well as some of the major ethical dilemmas that these experts predicted. While Artificial Intelligence (AI) has the potential to eliminate inherent human bias in decision-making, the panel agreed that in the near future, there are ethical boundaries that society and governments must explore as Artificial Intelligence expands into the realms of medicine, governance, and even self-driving cars.

Some major takeaways from the event were:

1. Artificial Intelligence offers an incredible opportunity to eliminate human biases in decision-making

In the future, Artificial Intelligence can be utilized to eliminate inherent human biases that often influence important decisions surrounding employment, government policy, and even policing. At the event, Professor Iris Bohnet stated that every person has biases that inform their decisions. These biases can affect whether a candidate for a job is chosen or not. As a result, Bohnet suggested that by using algorithms, employers could choose the best candidates by focusing on the candidates' qualifications rather than basing decisions on gender, race, age or other variables. However, the panel also discussed the fact that even algorithms can have bias. For example, the algorithm used to match medical students with residency hospitals can be biased in favor of either the hospitals' preferences or the students'. It is up to humans to control bias in the algorithms that they use.

2. Society must begin having conversations surrounding the ethics of Artificial Intelligence

Because Artificial Intelligence is becoming more widely utilized, society and governments must continue to have conversations addressing ethics and Artificial Intelligence. Professors Alex Pentland and Cynthia Dwork stated that as Artificial Intelligence proliferates, moral conflicts can surface. Pentland emphasized that citizens must ask themselves, "Is this something that is performing in a way that we as a society want?" Pentland noted that our society must continue a dialogue around ethics and determine what is right.

3. Although Artificial Intelligence is growing, there are still tasks that only humans should do

In the end, the experts agreed, there are tasks and decisions that only humans can make. At the same time, there are some tasks and decisions that could be executed by machines but ultimately should be done by humans. Professor Bohnet emphasized this point by reaffirming humanity's position, concluding, "There are jobs that cannot be done by machines."


If I Only Had a Brain: How AI ‘Thinks’ – Daily Beast

Posted: at 11:15 am

AI can beat humans in chess, Go, poker and Jeopardy. But what about emotional intelligence or street smarts?

Artificial intelligence has gotten pretty darn smart, at least at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?

The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it's rare and difficult to be intelligent in all of these ways. Intelligence has many sources and our brains don't respond to them all the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is what we don't understand about human intelligence.

Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of connected neurons that transmit information to one another, with areas designated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities: we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute statistical probabilities and outcomes of various moves, but to adjust strategy based on what the other player does.

Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers to the information input/output. More layers allow for more representations of and links between data. This resembles human thinking: when we process input, we do so in something akin to layers. For example, when we watch a football game on television, we take in the basic information about what's happening in a given moment, but we also take in a lot more: who's on the field (and who's not), what plays are being run and why, individual match-ups, how the game fits into existing data or history (does one team frequently beat the other? Is the quarterback passing for as many yards as usual?), how the refs are calling the game, and other details. In processing this information we employ memory, pattern recognition, statistical and strategic analysis, comparison, prediction, and other cognitive capabilities. Deep learning attempts to capture those layers.
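For the technically curious, those "layers" are just successive transformations, each re-representing the previous one's output. A minimal NumPy sketch of a two-layer forward pass, with random weights and purely for illustration:

    import numpy as np

    def relu(x):
        # Standard nonlinearity applied between layers.
        return np.maximum(0, x)

    rng = np.random.default_rng(0)
    x = rng.random(8)  # raw input features

    # Each layer is a weight matrix plus bias; stacking them is what
    # makes the network "deep" and lets it build layered representations.
    W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
    W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)

    hidden = relu(W1 @ x + b1)   # first, low-level representation
    output = W2 @ hidden + b2    # second, task-level representation
    print(output)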

You're probably already familiar with deep learning algorithms. Have you ever wondered how Facebook knows to place on your page an ad for rain boots after you got caught in a downpour? Or how it manages to recommend a page immediately after you've liked a related page? Facebook's DeepText algorithm can process thousands of posts, in dozens of different languages, each second. It can also distinguish between "Purple Rain" and the reason you need galoshes.

Deep learning can be used with faces, identifying family members who attended an anniversary or employees who thought they attended that rave on the down-low. These algorithms can also recognize objects in context, such as a program that could identify the alphabet blocks on the living room floor, as well as the pile of kids' books and the bouncy seat. Think about the conclusions that could be drawn from that snapshot, and then used for targeted advertising, among other things.

Google uses Recurrent Neural Networks (RNNs) to facilitate image recognition and language translation. This enables Google Translate to go beyond a typical one-to-one conversion by allowing the program to make connections between languages it wasn't specifically programmed to understand. Even if Google Translate isn't specifically coded for translating Icelandic into Vietnamese, it can do so by finding commonalities in the two tongues and then developing its own language, which functions as an interlingua, enabling the translation.

Machine thinking has been tied to language ever since Alan Turing's seminal 1950 publication "Computing Machinery and Intelligence." This paper described the Turing Test, a measure of whether a machine can think. In the Turing Test, a human engages in a text-based chat with an entity it can't see. If that entity is a computer program and it can make the human believe he's talking to another human, it has passed the test. Iterations of the Turing Test, such as the Loebner Prize, still exist, though it's become clear that just because a program can communicate like a human (complete with typos, an abundance of exclamation points, swear words, and slang) doesn't mean it's actually thinking. A 1960s Rogerian computer therapist program called ELIZA duped participants into believing they were chatting with an actual therapist, perhaps because it asked questions and, unlike some human conversation partners, appeared as though it was listening. ELIZA harvests key words from a user's response and turns them into questions, or simply says, "tell me more." While some argue that ELIZA passed the Turing Test, it's evident from talking with ELIZA (you can try it yourself here) and similar chatbots that language processing and thinking are two entirely different abilities.
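ELIZA's keyword-and-reflect trick is simple enough to sketch in a few lines. The following toy Python version is in the spirit of the program, not Weizenbaum's original code:

    import random

    # Pronoun swaps used to turn a user's statement back into a question.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def respond(user_input):
        words = user_input.lower().rstrip(".!?").split()
        if words[:2] == ["i", "feel"]:
            # Reflect the user's own words back at them.
            reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
            return f"Why do {reflected}?"
        # Fall back to content-free prompts, as ELIZA often did.
        return random.choice(["Tell me more.", "How does that make you feel?"])

    print(respond("I feel anxious about work"))
    # -> Why do you feel anxious about work?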

But what about IBM's Watson, which thrashed the top two human contestants in Jeopardy? Watson's dominance relies on access to massive and instantly accessible amounts of information, as well as its computation of answers' probable correctness. In the game, Watson received this clue: "Maurice LaMarche found his inner Orson Welles to voice this rodent whose simple goal was to take over the world." Watson's possible answers and probabilities were as follows:

Pinky and the Brain: 63 percent

Googling Maurice LaMarche quickly confirms that he voiced Pinky. But the clue is tricky because it contains a number of key terms: LaMarche, voiceover, rodent, and world domination. Orson Welles functions as a red herring: yes, LaMarche supplied his trademark Orson Welles voice for Vincent D'Onofrio's character in Ed Wood, but that line of thought has nothing to do with a rodent. Similarly, a capybara is a South American rodent (the largest in the world, which perhaps Watson connected with the "take over the world" part of the clue), but the animal has no connection to LaMarche or to voiceovers, unless LaMarche does a mean capybara impression. A human brain probably wouldn't conflate concepts as Watson does here; indeed, Ken Jennings buzzed in with the right answer.

Still, Watson's capabilities and applications continue to grow; it's now working on cancer. By uploading case histories, diagnostic information, treatment protocols, and other data, Watson can work alongside human doctors to help identify cancer and determine personalized treatment plans. Project Lucy focuses Watson's supercomputing powers on helping Africa meet farming, economic, and social challenges. Watson can prove itself intelligent in discrete realms of knowledge, but not across the board.

Perhaps the major limitation of AI can be captured by a single letter: G. While we have AI, we don't have AGI, artificial general intelligence (sometimes referred to as strong or full AI). The difference is that AI can excel at a single task or game, but it can't extrapolate strategies or techniques and apply them to other scenarios or domains: you could probably beat AlphaGo at Tic Tac Toe. This limitation parallels the human skills of critical thinking or synthesis: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps. AI can't, for now.

Some believe we'll never truly have AGI; others believe it's simply a matter of time (and money). Last year, Kimera unveiled Nigel, a program it bills as the first AGI. Since the beta hasn't been released to the public, it's impossible to assess those claims, but we'll be watching closely. In the meantime, AI will keep learning just as we do: by watching YouTube videos and by reading books. Whether that's comforting or frightening is another question.


Why are Indian engineers so afraid of ‘artificial intelligence’? – Scroll.in

Posted: at 11:15 am


Artificial intelligence is being counted among the hottest startup sectors in India this year, but the highly specialised space is struggling to grow due to the lack of a primary input: engineers.

"Forget getting people of our choice, we don't even get applications when we advertise for positions for our AI team," said 25-year-old Tushar Chhabra, co-founder of Cron Systems, which builds internet of things-related solutions for the defence sector. "It's as if people are scared of the words artificial intelligence. They start freaking out when we ask them questions about AI."

India has over 170 startups focused purely on AI, which have together raised over $36 million. The sector has received validation from marquee investors like Sequoia Capital, Kalaari Capital, and business icon Ratan Tata. But entrepreneurs are struggling to expand due to a shortage of engineers with skills related to robotics, machine learning, analytics, and automation.

Racetrack.ai co-founders Subrat Parida and Navneet Gupta said that around 40% of their working time is spent searching for the right talent. The organisation, which operates out of Bengaluru, has built an AI-driven communication bot called Marvin. "People are the core strength of a startup," Parida, also the CEO, told Quartz. "So hiring for a startup is very challenging. We are not looking for the regular tech talent and, since AI is a relatively new field in India, you don't get people with past experience in working on those technologies."

Only 4% of AI professionals in India have actually worked on cutting-edge technologies like deep learning and neural networks, which are the key ingredients for building advanced AI-related solutions, according to recruitment startup Belong, which often helps its clients discover and recruit AI professionals.

Also, many such companies require candidates with PhD degrees in AI-related technologies, which is rare in India.

"While it takes a company just a month to find a good app developer, it could take up to three months to fill a position in the AI space," said Harishankaran K, co-founder and CTO of HackerRank, which helps companies hire tech talent through coding challenges.

India is among the top countries in terms of the number of engineers graduating every year. But the engineering talent here has traditionally been largely focused on IT and not research and innovation.

"Fields like AI require a mindset of research and experimentation," said PK Viswanathan, professor of business intelligence at the Great Lakes Institute of Management in Chennai. "But most aspiring engineers in India follow a pattern: finish school, go to IIT, do an MBA, and then take up a job. To work on AI, you need people who not only have a strong technology background, but also have analytical thinking and puzzle-solving skills, and they should not be scared of numbers."

Ironically, the subject has been a part of the curriculum at some engineering schools for almost a decade. However, what is taught there is mostly irrelevant to the real world.

Sachin Jaiswal, who graduated from IIT Kharagpur in 2011, studied some aspects of AI back in college. But whatever he is doing at his two-year-old startup Niki.ai (it has built a bot that lets users order anything through a chat interface) is based on what he learned in his earlier jobs, he said.

"A lot of people are disillusioned when they come out of college and begin their first jobs," said Jaiswal, whose startup is backed by Ratan Tata.

In fact, even now, when he interviews graduates from elite institutes, he sees a glaring gap between what these youngsters have learned and what is needed on the job.

Given the shortage of AI-related talent in India, several startups aspire to tap Silicon Valley. But that's not a feasible solution for young teams.

A few months back, Chhabra of Cron Systems was in talks with a US-based engineer, an IIT-Delhi alumnus who had been working on AI for seven years. "The guy asked for Rs 2.5 crore per annum as salary," said Chhabra. "As a startup, you cannot afford that price."

Cron Systems has found a jugaad, an improvised workaround, to solve the problem, Chhabra said. Late last year, the company hired a group of engineers with the basic skills needed to create AI-related solutions and trained them.

"We broke down AI into smaller pieces and hired six tech professionals who understood those basic skills well," Chhabra said. "Then we conducted a three-month training for these people and brought them on board with what we do."

Niki.ai, too, is following this hire-and-train model. "Training takes time and investment, but we have no option because we need the talent," Jaiswal told Quartz. "If we had better access to talent, things would have been better."

Gurugram-based AI startup Staqu has started partnering with academic institutions to build a steady pipeline of engineers and researchers.

Despite this struggle, entrepreneurs and investors in India feel bullish.

In an ecosystem where e-commerce and food delivery hog the limelight, a recent report by venture lending firm InnoVen Capital named AI one of the most under-hyped sectors. "But that is set to change," said London-based angel investor Sanjay Choudhary.

In September, Choudhary invested in Delhi-based AI startup Corseco Technologies. He regularly interacts with the company's team, and the genuine issue of finding talent comes up frequently, he told Quartz.

"India is a late entrant into the AI space and the talent crunch will be a challenge for the industry for some time to come," he said. "But I plan to continue investing in AI in India because I feel that the space has a lot of potential and needs to be supported."

While there seems to be no end to the struggle, Jaiswal of Niki.ai sees a silver lining: "The talent crunch ensures that companies can't enter the field easily. So we have a competitive edge."

This article first appeared on Quartz.

See original here:

Why are Indian engineers so afraid of 'artificial intelligence'? - Scroll.in


The new IQ test: Technologists assess the potential of artificial intelligence – SC Magazine

Posted: February 18, 2017 at 4:17 am

AI may still seem like a far-off concept, but in cybersecurity it's already a reality.

Rather than focus on attack signatures, these AI solutions look for anomalous network behavior, flagging when a machine goes rogue or when user activity or traffic patterns appear unusual. "A really simple example is someone with high privilege who attempts to get onto a system at a time of day or night that they never normally log in, and potentially from a geolocation or a machine that they don't log in from," said Kelley.

"Another example would be a really rapid transfer of a lot of data, especially if that data consists of the corporate crown jewels."
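To make the idea concrete, here is a minimal, hypothetical sketch of the kind of checks Kelley describes: comparing a single access event against a per-user behavioral baseline. The user name, field names, and thresholds are all invented for illustration; they are not drawn from any vendor's product.

```python
from datetime import datetime, timezone

# Hypothetical per-user baseline, learned from historical logs.
BASELINE = {
    "alice_admin": {
        "usual_hours": range(8, 19),          # normally logs in 08:00-18:59
        "usual_countries": {"IN", "US"},
        "typical_bytes_per_hour": 50_000_000,
    },
}

def score_event(user, login_time, country, bytes_transferred):
    """Return a list of red flags for a single access event."""
    flags = []
    profile = BASELINE.get(user)
    if profile is None:
        return ["unknown user"]
    if login_time.hour not in profile["usual_hours"]:
        flags.append("login at unusual time of day")
    if country not in profile["usual_countries"]:
        flags.append("login from unusual geolocation")
    # A transfer far above the user's norm may signal exfiltration.
    if bytes_transferred > 10 * profile["typical_bytes_per_hour"]:
        flags.append("abnormally large data transfer")
    return flags

# A privileged account logging in at 03:00 from a new country,
# moving 750 MB: all three rules fire.
print(score_event(
    "alice_admin",
    datetime(2017, 2, 18, 3, 0, tzinfo=timezone.utc),
    "RU",
    750_000_000,
))
```

Real products build these baselines statistically per user and per machine rather than hard-coding them, but the shape of the decision is the same.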

Such red flags allow admins to quickly catch high-priority malware infections and network compromises before they can cause irreparable damage.

IBM calls this kind of machine learning "cognitive with a little 'c'," which the company was already practicing prior to Watson. Despite its diminutive designation, little 'c' can have some big benefits for one's network.

"A network, really in its simplest form, is a data set, one that changes with every millisecond," said Justin Fier, director of cyber intelligence and analysis at U.K.-based cybersecurity company Darktrace, whose network threat detection solution was created by mathematicians and machine-learning specialists from the University of Cambridge. "With machine learning, we can analyze that data in a more efficient way."

"We're not looking for malicious behavior, we're looking for anomalous behavior," Fier continued in an interview with SC Media. "And that can sometimes turn into malicious behavior and intent, or it can turn into configuration errors, or it could just be vulnerable protocols. But we're looking for the things that just stand out."

An advantage of these kinds of AI solutions is that they often run on unsupervised learning models, meaning they do not need to be fed scores of labeled data in advance to help their algorithms define what constitutes a true threat. Rather, they tend to self-learn through observation, making note of which machines are defying typical patterns, a process that Fier said is the AI determining its own "sense of self" on the network.
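Darktrace's actual models are proprietary, but the unsupervised idea can be sketched with an off-the-shelf algorithm such as scikit-learn's IsolationForest: fit on unlabeled traffic observations, then score new observations by how far they deviate from what the model has seen. The synthetic feature set below is an assumption made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic, unlabeled traffic features per machine-hour:
# [connections_per_hour, bytes_out_MB, distinct_ports]
normal_traffic = rng.normal(loc=[120, 40, 6], scale=[25, 10, 2], size=(500, 3))

# No labels are supplied; the model learns what "typical" looks like
# and flags observations that are easy to isolate from the rest.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

observations = np.array([
    [115, 38, 5],       # ordinary machine-hour
    [130, 45, 7],       # ordinary machine-hour
    [900, 2500, 60],    # sudden mass transfer across many ports
])
print(model.predict(observations))  # 1 = typical, -1 = anomalous; expect [ 1  1 -1]
```

Note that the model flags the third observation not because it matches a known attack signature, but simply because it stands out, which mirrors Fier's distinction between malicious and anomalous behavior.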

While Fier said that basic compliance failures are the most commonly detected issue, he recalled one particular client that used biometric fingerprint scanners for security access, only to discover through anomaly detection that one of these devices had been connected to the Internet and subsequently breached.

To cover up his activity, the perpetrator modified and deleted various log files, but this unusual behavior was discovered as well. The solution even found irregularities in the network server that suggested the culprit moved fingerprint data from the biometric device to a company database, perhaps to establish an alibi. "My belief is that somebody on the inside was probably getting help from somebody on the outside," said Fier, noting that it was a significant find because insider threats are among the hardest to catch.

Another client, Catholic Charities of Santa Clara County, an affiliate of Catholic Charities USA that helps 54,000 local clients per year, used anomaly detection to thwart an attempted ransomware attack only weeks after commencing a test of the technology. The solution immediately flagged the event after a receptionist opened a malicious email with a fake invoice attachment. "I was able to respond right away, and disconnected the targeted device to prevent any further encryption or financial cost," said Will Bailey, director of IT at the social services organization.

The benefits of little 'c' extend beyond the network as well. Kelley cited the advent of application scanning tools that seek out problematic lines of code in websites and mobile software that could result in exploitation (a toy version is sketched below). And Fier noted a current Darktrace endeavor called Project Turing, whereby researchers are using AI to model how security analysts and investigators work, in order to make their jobs more efficient.
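As a toy illustration of the application-scanning idea Kelley described, the sketch below flags risky source lines with simple pattern matching. Real scanners rely on parsing, AST analysis, and dataflow tracking rather than regexes, and the patterns here are invented examples, but the goal is the same: surface suspicious lines for human review.

```python
import re

# Illustrative patterns only; production tools use far richer analysis.
RISKY_PATTERNS = {
    r"\beval\s*\(": "dynamic code execution",
    r"password\s*=\s*['\"]": "hardcoded credential",
    r"\bos\.system\s*\(": "shell command injection risk",
}

def scan_source(source: str):
    """Return (line number, issue, line text) for each risky line found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, issue, line.strip()))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, issue, line in scan_source(sample):
    print(f"line {lineno}: {issue}: {line}")
```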

More here:

The new IQ test: Technologists assess the potential of artificial intelligence - SC Magazine

