
Category Archives: Artificial Intelligence

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm – Forbes

Posted: February 24, 2017 at 6:26 pm


Lawyers spend years in school learning how to sift through millions of cases looking for the exact language that will help their clients win. What if a computer could do it for them? It's not the kind of question many lawyers would dignify with an answer.

Read the original post:

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm - Forbes


Artificial Intelligence or Artificial Expectations? – Science 2.0

Posted: at 6:26 pm

News concerning Artificial Intelligence (AI) abounds again. The progress with Deep Learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy, and victories over human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers, which many believe would approach that level, are already being considered.

However, there are many unanswered questions about how the human brain works, and specifically about the hard problem of consciousness with its integrated subjective experiences. In addition, there are many questions at the smaller cellular scale, such as why some single-celled organisms can navigate out of mazes, remember, and learn without any neurons.

In this blog, I look at a recent review suggesting that brain computations done at a scale finer than the neuron might mean we are far from matching the brain's computational power, both quantitatively and qualitatively. The review is by Roger Penrose (Oxford) and Stuart Hameroff (University of Arizona) on their journey through almost three decades of investigating the role of potential quantum effects in neurons' microtubules. As a graduate student in 1989, I was intrigued when Penrose, a well-known mathematical physicist, published the book The Emperor's New Mind, outlining a hypothesis that consciousness derives from quantum physics effects during the transition from a superposition and entanglement of quantum states into a more classical configuration (the collapse, or reduction, of the wavefunction). He further suggested that this process, which has baffled generations of scientists, might occur only when a condition based on the differences in gravitational energies of the possible outcomes is met (i.e., Objective Reduction, or OR). He then went another step in suggesting that the brain takes advantage of this process to perform computations in parallel, with some intrinsic indeterminacy (non-computability), and over a larger integrated range, by maintaining the quantum mix of microtubule configurations separated from the noisy, warm environment until this reduction condition is met (i.e., Orchestrated Objective Reduction, or Orch OR).

As an anesthesiologist, Stuart Hameroff questioned how relatively simple molecules could cause unconsciousness, and he explored the potential classical computational power of microtubules. Microtubules had been recognized as an important component of neurons, especially in the postsynaptic dendrites and cell body, where the cylinders line up parallel to the dendrite, are stabilized, and form connecting bridges between cylinders via microtubule-associated proteins (MAPs). Not only are there connections between microtubules within dendrites, but there are also interneuron junctions that allow cellular material to tunnel between neurons. One estimate of the potential computing power of a single neuron's microtubules (a billion binary-state microtubule building blocks, tubulins, operating at 10 megahertz) gives the equivalent computing power of the brain's assumed neural net as a whole (100 billion neurons, each with 1,000 synapses operating at about 100 Hz). That is, the brain's computing power might be the square of the standard estimate (10 petaflops) based on relatively simple neuron responses.
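
The arithmetic behind that estimate is easy to check. Here is a quick back-of-envelope sketch in Python using only the rough figures quoted above; the numbers are the review's assumptions, not measurements.

```python
# Rough comparison of the two computing-power estimates quoted above.
tubulins_per_neuron = 1e9   # binary-state microtubule subunits per neuron
tubulin_rate = 1e7          # 10 MHz switching per tubulin

neurons = 1e11              # standard estimate of neurons in the brain
synapses_per_neuron = 1e3
synapse_rate = 1e2          # ~100 Hz firing

# Conventional estimate: synapses as the basic operations.
brain_ops_conventional = neurons * synapses_per_neuron * synapse_rate

# Microtubule-based estimate for a single neuron.
neuron_ops_microtubule = tubulins_per_neuron * tubulin_rate

print(f"conventional whole-brain estimate: {brain_ops_conventional:.0e} ops/s")
print(f"microtubule estimate, one neuron:  {neuron_ops_microtubule:.0e} ops/s")
# Both come to 1e16 ops/s (~10 petaflops): on this account, one neuron's
# microtubules alone rival the conventional whole-brain figure, and scaling
# by 1e11 neurons would push the brain-wide total to roughly 1e27 ops/s.
```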

Soon after this beginning, Stuart Hameroff and Roger Penrose found each other's complementary approaches and started forming a more detailed set of hypotheses. Much criticism was leveled at this view; their responses included modifying the theory, calling for more experimental work, and defending against general attacks. Many experiments remain to be done, including a test of whether objective reduction occurs, which cannot yet be performed at the resolution of current laboratory instruments. Other experiments on the electronic properties of microtubules, done in Japan in 2009, found high conductance at certain frequencies ranging from kilohertz to gigahertz. These measurements, which also show conductance increasing with microtubule length, are consistent with conduction pathways through aligned aromatic rings in the helical and linear patterns of the microtubule. Other indications of quantum phenomena in biology include the recent discoveries of quantum effects in photosynthesis, bird navigation, and protein folding.

There are many subtopics to explore. Often the review discusses potential options without committing to (or claiming) a specific resolution. These subtopics include the interaction of microtubules with associated proteins and transport mechanisms, the relationship of microtubules to diseases such as Alzheimer's, the frequency of the collapse within the range of megahertz to hertz, memory formation and processing with molecules that bind to microtubules, the temporal aspects of brain activity and conscious decisions, whether the quantum states are spins (electron or nuclear) or electrical dipoles, the helical pattern of the microtubule (A or B), the fraction of microtubules involved with entanglement, the mechanism for environmental isolation, and the way that such a process might be advantageous in evolution. The review ends not with a conclusion concerning the validity of the hypothesis, but with a roadmap for further tests that could rule it out or support it.

As I stated at the beginning, the progress in AI has been remarkable. However, our understanding of the brain is still very limited, and the mainstream expectation that computers are getting close to equaling its computing potential may be far off, both qualitatively and quantitatively. While it is unclear how much of this hypothesis will survive the test of experiments, it is very interesting to consider and follow the argumentative scientific process.

Stuart Hameroff's website: http://www.quantumconsciousness.org/

Review Paper site: http://smc-quantum-physics.com/pdf/PenroseConsciouness.pdf

See original here:

Artificial Intelligence or Artificial Expectations? - Science 2.0


Artificial intelligence: Understanding how machines learn – Robohub

Posted: at 6:26 pm

From Jeopardy winners and Go masters to infamous advertising-related racial profiling, it would seem we have entered an era in which artificial intelligence developments are rapidly accelerating. But a fully sentient being whose electronic brain can fully engage in complex cognitive tasks using fair moral judgement remains, for now, beyond our capabilities.

Unfortunately, current developments are generating a general fear of what artificial intelligence could become in the future. Its representation in recent pop culture shows how cautious and pessimistic we are about the technology. The problem with fear is that it can be crippling and, at times, promote ignorance.

Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

The core foundation of artificial intelligence is rooted in machine learning, which is an elegant and widely accessible tool. But to understand what machine learning means, and why the pros of its potential outweigh its cons, we first need to examine how it works.

Simply put, machine learning refers to teaching computers how to analyse data to solve particular tasks through algorithms. For handwriting recognition, for example, classification algorithms are used to differentiate letters based on someone's handwriting. Housing data sets, on the other hand, use regression algorithms to estimate, in a quantifiable way, the selling price of a given property.
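
As a concrete sketch of those two algorithm families, the snippet below trains a classifier on scikit-learn's bundled handwritten-digit images and a regressor on its California housing data; these datasets and models are illustrative stand-ins, not anything used by the products discussed in this article.

```python
# Minimal sketch of the two algorithm families named above.
from sklearn.datasets import load_digits, fetch_california_housing
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: map images of handwritten digits to the labels 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("digit accuracy:", clf.score(X_test, y_test))

# Regression: estimate a quantity (house value) from property features.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("housing R^2:", reg.score(X_test, y_test))
```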

Machine learning, then, comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour.

Data have two key elements: samples and features. The former represents individual elements in a group; the latter amounts to characteristics shared by them.

Look at social media as an example: users are samples and their usage can be translated as features. Facebook, for instance, employs different aspects of liking activity, which change from user to user, as important features for user-targeted advertising.

Facebook friends can also be used as samples, while their connections to other people act as features, establishing a network where information propagation can be studied.

Outside of social media, automated systems used in industrial processes as monitoring tools use time snapshots of the entire process as samples, and sensor measurements at a particular time as features. This allows the system to detect anomalies in the process in real time.
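
That samples-and-features framing is easy to make concrete. In the sketch below, each row of the matrix is one time snapshot (a sample) and each column one sensor reading (a feature); the sensor values are invented, and IsolationForest is just one stock choice of anomaly detector.

```python
# Toy anomaly detection over snapshots of a monitored process.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 snapshots of normal operation; 3 sensors (e.g. temp, flow, pressure).
normal = rng.normal(loc=[50.0, 1.2, 300.0], scale=[2, 0.05, 10], size=(500, 3))
detector = IsolationForest(random_state=0).fit(normal)

snapshots = np.array([[49.5, 1.18, 305.0],   # looks like normal operation
                      [80.0, 2.50, 120.0]])  # clearly off-profile
print(detector.predict(snapshots))  # 1 = normal, -1 = anomaly
```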

All these different solutions rely on feeding data to machines and teaching them to reach their own predictions once they have strategically assessed the given information. And this is machine learning.

Any data can be translated into these simple concepts and any machine-learning application, including artificial intelligence, uses these concepts as its building blocks.

Once data are understood, it's time to decide what to do with this information. One of the most common and intuitive applications of machine learning is classification. The system learns how to put data into different groups based on a reference data set.

This is directly associated with the kinds of decisions we make every day, whether it's grouping similar products (kitchen goods against beauty products, for instance), or choosing good films to watch based on previous experiences. While these two examples might seem completely disconnected, they rely on an essential assumption of classification: predictions defined as well-established categories.

When picking up a bottle of moisturiser, for example, we use a particular list of features (the shape of the container, for instance, or the smell of the product) to predict accurately that it's a beauty product. A similar strategy is used for picking films, assessing a list of features (the director, for instance, or the actor) to predict whether a film is in one of two categories: good or bad.

By grasping the different relationships between features associated with a group of samples, we can predict whether a film may be worth watching or, better yet, we can create a program to do this for us.

But to be able to manipulate this information, we need to be a data science expert, a master of maths and statistics, with enough programming skills to make Alan Turing and Margaret Hamilton proud, right? Not quite.

We all know enough of our native language to get by in our daily lives, even if only a few of us can venture into linguistics and literature. Maths is similar; it's around us all the time, so calculating change from buying something or measuring ingredients to follow a recipe is not a burden. In the same way, machine-learning mastery is not a requirement for its conscious and effective use.

Yes, there are extremely well-qualified and expert data scientists out there but, with little effort, anyone can learn its basics and improve the way they see and take advantage of information.

Going back to our classification algorithm, let's think of one that mimics the way we make decisions. We are social beings, so how about social interactions? First impressions are important and we all have an internal model that evaluates, in the first few minutes of meeting someone, whether we like them or not.

Two outcomes are possible: a good or a bad impression. For every person, different characteristics (features) are taken into account (even if unconsciously) based on several encounters in the past (samples). These could be anything from tone of voice to extroversion and overall attitude to politeness.

For every new person we encounter, a model in our heads registers these inputs and establishes a prediction. We can break this modelling down to a set of inputs, weighted by their relevance to the final outcome.

For some people, attractiveness might be very important, whereas for others a good sense of humour or being a dog person says way more. Each person will develop her own model, which depends entirely on her experiences, or her data.

Different data result in different models being trained, with different outcomes. Our brain develops mechanisms that, while not entirely clear to us, establish how these factors will weigh out.

What machine learning does is develop rigorous, mathematical ways for machines to calculate those outcomes, particularly in cases where we cannot easily handle the volume of data. Now more than ever, data are vast and everlasting. Having access to a tool that actively uses this data for practical problem solving, such as artificial intelligence, means everyone should and can explore and exploit this. We should do this not only so we can create useful applications, but also to put machine learning and artificial intelligence in a brighter and not so worrisome perspective.
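
The "first impression" model described above translates almost directly into code. In the toy sketch below, the past encounters, trait features, and labels are all invented, and a logistic regression stands in for whatever weighting our brains actually perform.

```python
# Toy "first impression" classifier: past encounters are samples,
# personality traits are features, and the model learns their weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per encounter: [tone_of_voice, extroversion, politeness], 0-1.
encounters = np.array([
    [0.9, 0.7, 0.8],
    [0.2, 0.3, 0.1],
    [0.8, 0.4, 0.9],
    [0.3, 0.9, 0.2],
    [0.7, 0.6, 0.7],
    [0.1, 0.2, 0.3],
])
impression = np.array([1, 0, 1, 0, 1, 0])  # 1 = good, 0 = bad

model = LogisticRegression().fit(encounters, impression)
print("learned weights:", model.coef_)            # how much each trait counts
print("new person:", model.predict([[0.6, 0.5, 0.9]]))  # predicted impression
```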

There are several resources out there for machine learning, although they do require some programming ability. Materials tailored to many popular machine-learning languages are available, from basic tutorials to full courses. It takes nothing more than an afternoon to start venturing into it with palpable results.

All this is not to say that the concept of machines with human-like minds should not concern us. But knowing more about how these minds might work will give us the power to be agents of positive change, in a way that can allow us to maintain control over artificial intelligence and not the other way around.

This article was originally published on The Conversation. Read the original article.

If you liked this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

More:

Artificial intelligence: Understanding how machines learn - Robohub


Google rolls out artificial intelligence tool for media companies to combat online trolls – CTV News

Posted: at 6:26 pm

Google said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites.

The programming tool, called Perspective, aims to assist editors trying to moderate discussions by filtering out abusive "troll" comments, which Google says can stymie smart online discussions.

"Seventy-two per cent of American internet users have witnessed harassment online and nearly half have personally experienced it," said Jared Cohen, president of Google's Jigsaw technology incubator.

"Almost a third self-censor what they post online for fear of retribution," he added in a blog post on Thursday titled "When computers learn to swear."

Perspective is an application programming interface (API), or set of methods for facilitating communication between systems and devices, that uses machine learning to rate how comments might be regarded by other users.
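
In practice, a moderation pipeline calls the API with a comment's text and reads back a score. The sketch below follows the publicly documented v1alpha1 request shape; the API key, sample comment, and 0.8 review threshold are placeholders, and error handling is omitted.

```python
# Sketch of scoring one comment with Perspective's v1alpha1 endpoint.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: issued by Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(comment: str) -> float:
    """Return the 0-1 toxicity score Perspective assigns to a comment."""
    body = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body).json()
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A moderator tool might hold high-scoring comments for human review.
if toxicity("You are a total idiot.") > 0.8:  # placeholder threshold
    print("held for human review")
```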

The system, which will be provided free to media groups including social media sites, is being tested by The Economist, The Guardian, The New York Times and Wikipedia.

Many news organizations have closed down their comments sections because they lack sufficient human staff to monitor the postings for abusive content.

"We hope we can help improve conversations online," Cohen said.

Google has been testing the tool since September with The New York Times, which wanted to find a way to maintain a "civil and thoughtful" atmosphere in reader comment sections.

Perspective's initial task is to spot toxic language in English, but Cohen said the goal was to build tools for other languages, and to identify when comments are "unsubstantial or off-topic."

Twitter said earlier this month that it also would start rooting out hateful messages, which are often anonymous, by identifying the authors and prohibiting them from opening new accounts, or hiding them from internet searches.

Last year, Google, Twitter, Facebook and Microsoft signed a "code of good conduct" with the European Commission, pledging to examine most abusive content signalled by users within 24 hours.

See the original post here:

Google rolls out artificial intelligence tool for media companies to combat online trolls - CTV News


Artificial intelligence used to detect very early signs of autism in infants – SlashGear

Posted: at 6:26 pm

It's difficult to diagnose infants with autism because of the trouble of determining whether any behavioral traits common to autism are present. This difficulty is most pronounced before the age of two, and especially before the age of one, resulting in delayed diagnoses. All that may be changing, though, thanks to artificial intelligence and its ability to predict with high accuracy which infants will be diagnosed with autism by their second year.

As detailed in a new study funded by the US National Institutes of Health, predictions of future Autism Spectrum Disorder can be made based on MRI scans of an infant's brain. The technique relies heavily on brain scans taken of infants who are at high risk of being diagnosed with autism by their second birthday.

Using 106 brain scans of high-risk infants, researchers determined that certain aspects of the brain's maturation may be early indicators of autism. One potential indicator is a quickly growing brain, one that grows faster than normal between 12 and 24 months of age. The cortex, too, may grow faster than average during this period.

Based on this information, and using brain scans taken at 6 and 12 months, a customized algorithm was able to predict which infants would end up with an autism diagnosis with 81 percent accuracy. Knowing whether a baby is likely to be autistic may, in certain cases, enable early intervention and help parents prepare.
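
The article gives no implementation details, but the general shape of such a study (per-infant growth features derived from the 6- and 12-month scans, a binary diagnosis label, and cross-validated accuracy) might look like the entirely hypothetical sketch below, with synthetic numbers standing in for real MRI measurements.

```python
# Hypothetical sketch of the prediction setup; data are synthetic and
# the study's actual features and model are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 106  # matches the number of high-risk infants scanned
# Invented features: [surface-area growth 6-12 mo, cortical thickness change]
X = rng.normal(size=(n, 2))
# Synthetic labels loosely tied to the first feature, for illustration only.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0.8).astype(int)

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```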

SOURCE: PBS

View post:

Artificial intelligence used to detect very early signs of autism in infants - SlashGear


There are two very different kinds of AI, and the difference is important – Popular Science

Posted: February 23, 2017 at 1:14 pm

Today's artificial intelligence is certainly formidable. It can beat world champions at intricate games like chess and Go, or dominate at Jeopardy!. It can interpret heaps of data for us, guide driverless cars, respond to spoken commands, and track down the answers to your internet search queries.

And as artificial intelligence becomes more sophisticated, there will be fewer and fewer jobs that robots can't take care of, or so Elon Musk recently speculated. He suggested that we might have to give our own brains a boost to stay competitive in an AI-saturated job market.

But if AI does steal your job, it won't be because scientists have built a brain better than yours. At least, not across the board. Most of the advances in artificial intelligence have been focused on solving particular kinds of problems. This narrow artificial intelligence is great at specific tasks like recommending songs on Pandora or analyzing how safe your driving habits are. However, the kind of general artificial intelligence that would simulate a person is a long way off.

"At the very beginning of AI there was a lot of discussion about more general approaches to AI, with aspirations to create systems that would work on many different problems," says John Laird, a computer scientist at the University of Michigan. "Over the last 50 years the evolution has been towards specialization."

Still, researchers are honing AI's skills in complex tasks like understanding language and adapting to changing conditions. "The really exciting thing is that computer algorithms are getting smarter in more general ways," says David Hanson, founder and CEO of Hanson Robotics in Hong Kong, who builds incredibly lifelike robots.

And there have always been people interested in how these aspects of AI might fit together. They want to know: "How do you create systems that have the capabilities that we normally associate with humans?" Laird says.

So why don't we have general AI yet?

There isn't a single, agreed-upon definition for general artificial intelligence. "Philosophers will argue whether general AI needs to have a real consciousness or whether a simulation of it suffices," Jonathan Matus, founder and CEO of Zendrive, which is based in San Francisco and analyzes driving data collected from smartphone sensors, said in an email.

But, in essence, "general intelligence is what people do," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, Washington. "We don't have a computer that can function with the capabilities of a six-year-old, or even a three-year-old, and so we're very far from general intelligence."

Such an AI would be able to accumulate knowledge and use it to solve different kinds of problems. "I think the most powerful concept of general intelligence is that it's adaptive," Hanson says. "If you learn, for example, how to tie your shoes, you could apply it to other sorts of knots in other applications. If you have an intelligence that knows how to have a conversation with you, it can also know what it means to go to the store and buy a carton of milk."

General AI would need to have background knowledge about the world as well as common sense, Laird says. "Pose it a new problem, it's able to sort of work its way through it, and it also has a memory of what it's been exposed to."

Scientists have designed AI that can answer an array of questions with projects like IBM's Watson, which defeated two former Jeopardy! champions in 2011. "It had to have a lot of general capabilities in order to do that," Laird says.

Today, there are many different Watsons, each tweaked to perform services such as diagnosing medical problems, helping businesspeople run meetings, and making trailers for movies about super-smart AI. Still, "it's not fully adaptive in the humanlike way, so it really doesn't match human capabilities," Hanson says.

We're still figuring out the recipe for general intelligence. "One of the problems we have is actually defining what all these capabilities are and then asking, how can you integrate them together seamlessly to produce coherent behavior?" Laird says.

And for now, AI is facing something of a paradox. "Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines," Etzioni says. "Yet at the same time, the things that are easiest for a person, like making sense of what they see in front of them or speaking in their mother tongue, the machines really struggle with."

The strategies that help prepare an AI system to play chess or Go are less helpful in the real world, which does not operate within the strict rules of a game. "You've got Deep Blue that can play chess really well, you've got AlphaGo that can play Go, but you can't walk up to either of them and say, OK, we're going to play tic-tac-toe," Laird says. "There are these kinds of learning that you're not able to do just with narrow AI."

What about things like Siri and Alexa?

A huge challenge is designing AI that can figure out what we mean when we speak. "Understanding of natural language is what sometimes is called AI-complete, meaning if you can really do that, you can probably solve artificial intelligence," Etzioni says.

We're making progress with virtual assistants such as Siri and Alexa. "There's a long way to go on those systems, but they're starting to have to deal with more of that generality," Laird says. Still, he says, "once you ask a question, and then you ask it another question, and another question, it's not like you're developing a shared understanding of what you're talking about."

In other words, they can't hold up their end of a conversation. "They don't really understand what you say, the meaning of it," Etzioni says. "There's no dialogue, there's really no background knowledge, and as a result the system's misunderstanding of what we say is often downright comical."

Extracting the full meaning of informal sentences is tremendously difficult for AI. Every word matters, as does word order and the context in which the sentence is spoken. "There are a lot of challenges in how to go from language to an internal representation of the problem that the system can then use to solve a problem," Laird says.

To help AI handle natural language better, Etzioni and his colleagues are putting them through their paces with standardized tests like the SAT. "I really think of it as an IQ test for the machine," Etzioni says. "And guess what? The machine doesn't do very well."

In his view, exam questions are a more revealing measure of machine intelligence than the Turing Test, which chatbots often pass by resorting to trickery.

"To engage in a sophisticated dialogue, to do complex question and answering, it's not enough to just work with the rudiments of language," Etzioni says. "It ties into your background knowledge, it ties into your ability to draw conclusions."

Let's say you're taking a test and find yourself faced with the question: what happens if you move a plant into a dark room? You'll need an understanding of language to decipher the question, scientific knowledge to inform you what photosynthesis is, and a bit of common sense: the ability to realize that if light is necessary for photosynthesis, a plant won't thrive when placed in a shady area.

"It's not enough to know what photosynthesis is very formally, you have to be able to apply that knowledge to the real world," Etzioni says.

Will general AI think like us?

Researchers have gained a lot of ground with AI by using what we know about how the human brain works. "Learning a lot about how humans work from psychology and neuroscience is a good way to help direct the research," Laird says.

One promising approach to AI, called deep learning, is inspired by the architecture of neurons in the human brain. Its deep neural networks gather huge amounts of data and sniff out patterns. This allows it to make predictions or distinctions, like whether someone uttered a P or a B, or if a picture features a cat or a dog.
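
For a flavor of what that looks like in code, here is a minimal sketch of a small feed-forward network learning a two-class distinction from examples; the data are random stand-ins for real images or audio frames, and scikit-learn's modest MLPClassifier stands in for the far larger networks used in production deep learning.

```python
# Tiny neural classifier: learn a hidden rule from labelled examples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))               # 400 examples, 64 features each
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # hidden rule the net must find

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X[:300], y[:300])                    # train on 300 examples
print("held-out accuracy:", net.score(X[300:], y[300:]))  # test on the rest
```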

"These are all things that the machines are exceptionally good at, and [they] probably have developed superhuman pattern recognition abilities," Etzioni says. "But that's only a small part of what is general intelligence."

Ultimately, how humans think is grounded in the feelings within our bodies, and influenced by things like our hormones and physical sensations. "It's going to be a long time before we can create an effective simulation of all of that," Hanson says.

We might one day build AI that is inspired by how humans think, but does not work the same way. After all, we didn't need to make airplanes flap their wings. "Instead we built airplanes that fly, but they do that using very different technology," Etzioni says.

Still, we might want to keep some especially humanoid features, like emotion. "People run the world, so having AI that understands and gets along with people can be very, very useful," says Hanson, who is trying to design empathetic robots that care about people. He considers emotion to be an integral part of what goes into general intelligence.

Plus, the more humanoid a general AI is designed to be, the easier it will be to tell how well it works. "If we create an alien intelligence that's really unlike humans, we don't know exactly what hallmarks for general intelligence to look for," Hanson says. "There's a bigger concern for me, which is that, if it's alien, are we going to trust it? Is it going to trust us? Are we going to have a good relationship with it?"

When will it get here?

So, how will we use general AI? We already have targeted AI to solve specific problems. But general AI could help us solve them better and faster, and tackle problems that are complex and call for many types of skills. "The systems that we have today are far less sophisticated than we could imagine," Etzioni says. "If we truly had general AI we would be saving lives left and right."

The Allen Institute has designed a search engine for scientists called Semantic Scholar. "The kind of search we do, even with the targeted AI we put in, is nowhere near what scientists need," Etzioni says. "Imagine a scientist helper that helps our scientists solve humanity's thorniest problems, whether it's climate change or cancer or superbugs."

Or it could give strategic advice to governments, Matus says. "It could also be used to plan and execute super complex projects, like a mission to Mars, a political campaign, or a hostile takeover of a public company."

People could also benefit from general AI in their everyday lives. It could assist elderly or disabled people, improve customer service, or tutor us. "When it comes to a learning assistant, it could understand your learning weaknesses and find your strengths to help you step up and plan a program for improving your capabilities," Hanson says. "I see it helping people realize their dreams."

But all this is a long way off. "We're so far away from even a six-year-old level of intelligence, let alone full general human intelligence, let alone super-intelligence," Etzioni says. He surveyed other leaders in the field of AI, and found that most of them believed super-intelligent AI was 25 years or more away. "Most scientists agree that human-level intelligence is beyond the foreseeable horizon," he says.

General artificial intelligence does raise a few concerns, although machines run amok probably won't be one of them. "I'm not so worried about super-intelligence and Terminator scenarios; frankly, I think those are quite farfetched," Etzioni says. "But I'm definitely worried about the impact on jobs and unemployment, and this is already happening with the targeted systems."

And like any tool, general artificial intelligence could be misused. "Such technologies have the potential for tremendous destabilizing effects in the hands of any government, research organization or company," Matus says. "This simply means that we need to be clever in designing policy and systems that will keep stability and give humans alternative sources of income and occupation." People are pondering solutions like universal basic income to cope with narrow AI's potential to displace workers.

Ultimately, researchers want to beef up artificial intelligence with more general skills so it can better serve humans. "We're not going to see general AI initially to be anything like I, Robot. It's going to be things like Siri and stuff like that, which will augment and help people," Laird says. "My hope is that it's really going to be something that makes you a better person, as opposed to competing with you."

See the original post here:

There are two very different kinds of AI, and the difference is important - Popular Science


This Cognitive Whiteboard Is Powered By Artificial Intelligence – Forbes

Posted: at 1:14 pm


Imagine if the whiteboard in your next corporate meeting could take notes when you talked and add comments from your teammates in the meeting. The wait could be over soon. IBM and Ricoh Europe have announced an interactive whiteboard with artificial ...


Original post:

This Cognitive Whiteboard Is Powered By Artificial Intelligence - Forbes


Artificial intelligence in the real world: What can it actually do? – ZDNet

Posted: at 1:14 pm


AI is mainstream these days. The attention it gets and the feelings it provokes cover the whole gamut: from hands-on technical to business, from social science to pop culture, and from pragmatism to awe and bewilderment. Data and analytics are a prerequisite and an enabler for AI, and the boundaries between the two are getting increasingly blurred.

Many people and organizations from different backgrounds and with different goals are exploring these boundaries, and we've had the chance to converse with a couple of prominent figures in analytics and AI who share their insights.


Professor Mark Bishop is a lot of things: an academic with numerous publications on AI, the director of TCIDA (Tungsten Centre for Intelligent Data Analytics), and a thinker with his own view on why there are impenetrable barriers between deep minds and real minds.

Bishop recently presented on this topic in GOTO Berlin. His talk, intriguingly titled "Deep stupidity - what deep Neural Networks can and cannot do," was featured in the Future of IT track and attracted widespread interest.

In short, Bishop argues that AI cannot become sentient, because computers don't understand semantics, lack mathematical insight and cannot experience phenomenal sensation -- based on his own "Dancing with Pixies" reductio.

Bishop, however, is not some far-out academic with no connection to the real world. He does, when prompted, tend to refer to epistemology and ontology at a rate that far surpasses that of the average person. But he is also among the world's leading deep learning experts, having been deeply involved in neural networks before they were cool.

"I was practically mocked when I announced this was going to be my thesis topic, and going from that to seeing it in mainstream news is quite the distance," he notes.

His expertise has earned him more than recognition and a pet topic, however. It has also gotten him involved in a number of data-centric initiatives with some of the world's leading enterprises. Bishop, about to wrap up his current engagement with Tungsten as TCIDA director, notes that going from academic research and up in the sky discussions to real-world problems is quite the distance as well.

"My team and myself were hired to work with Tungsten to add more intelligence in their SaaS offering. The idea was that our expertise would help get the most out of data collected from Tungsten's invoicing solution. We would help them with transaction analysis, fraud detection, customer churn, and all sorts of advanced applications.

But we were dumbfounded to realize there was an array of real-world problems we had to address before embarking on such endeavors, like matching addresses. We never bothered with such things before -- it's mundane, somebody must have addressed the address issue already, right? Well, no. It's actually a thorny issue that was not solved, so we had to address it."

Injecting AI in enterprise software is a promising way to move forward, but beware of the mundane before tackling the advanced

Steven Hillion, on the other hand, comes at this from a different angle. With a PhD in mathematics from Berkeley, he does not lack relevant academic background. But Hillion made the turn to industry a long time ago, driven by the desire to apply his knowledge to solve real-world problems. Having previously served as VP of analytics for Greenplum, Hillion co-founded Alpine Data, and now serves as its CPO.

Hillion believes that we're currently in the "first generation" of enterprise AI: tools that, while absolutely helpful, are pretty mundane when it comes to the potential of AI. A few organizations have already moved to the second generation, which consists of a mix of tools and platforms that can operationalize data science -- e.g. custom solutions like Morgan Stanley's 3D Insights Platform or off the shelf solutions such as Salesforce's Einstein.

In many fields, employees (or their bosses) determine the set of tasks to focus on each day. They log into an app, go through a checklist, generate a BI report, etc. In contrast, AI could use existing operational data to automatically serve up the highest priority (or most relevant, or most profitable) tasks that a specific employee needs to focus on that day, and deliver those tasks directly within the relevant application.

"Success will be found in making AI pervasive across apps and operations and in its ability to affect people's work behavior to achieve larger business objectives. And, it's a future which is closer than many people realize. This is exactly what we have been doing with a number of our clients, gradually injecting AI-powered features into the everyday workflow of users and making them more productive.

Of course, this isn't easy. And in fact, the difficult aspect of getting value out of AI is as much in solving the more mundane issues, like security or data provisioning or address matching, as it is in working with complex algorithms."

Before handing over to AI overlords, it may help to actually understand how AI works

So, do androids dream of electric sheep, and does it matter for your organization? Although no definitive answers exist at this point, it is safe to say that both Bishop and Hillion seem to think this is not exactly the first thing we should be worried about. Data and algorithmic transparency on the other hand may be.


Case in point -- Google's presentation on deep learning, which preceded Bishop's at GOTO. The presentation, aptly titled "Tensorflow and deep learning, without a PhD", did deliver what it promised. It was a step-by-step, hands-on tutorial on how to use Tensorflow, Google's open source toolkit for deep learning, given by Robert Kubis, senior developer advocate for the Google Cloud Platform.

Expectedly, it was a full house. Unexpectedly, that changed dramatically as the talk progressed: by the end, the room was half empty, and lukewarm applause saw Kubis off. Bishop's talk, by contrast, started with what seemed like a full house and ended by proving that even more people could be packed into the room, with roaring applause and an entourage for Bishop.

There is an array of possible explanations for this. Perhaps Bishop's delivery style was more appealing than Kubis' -- videos of AI-generated art and Bladerunner references make for a lighter talk than a recipe-style "do A then B" tutorial.

Perhaps up in the sky discussions are more appealing than hands-on guides for yet another framework -- even if that happens to be Google's open source implementation of the technology that is supposed to change everything.

Or maybe the techies that attended GOTO just don't get Tensorflow -- with or without a PhD. In all likelihood, very few people in Kubis' audience could really connect with the recipe-like instructions delivered and understand why they were supposed to take the steps described, or how the algorithm actually works.

And they are not the only ones. Romeo Kienzler, chief data scientist at IBM Watson IoT, admitted in a recent AI Meetup discussion: "we know deep learning works, and it works well, but we don't exactly understand why or how." The million dollar question is -- does it matter?

After all, one could argue, not all developers (need to) know or care about the intrinsic details of QSort or Bubble Sort to use a sort function in their APIs -- they just need to know how to call it and trust it works. Of course, they can always dig into commonly used sort algorithms, dissect them, replay and reconstruct them, thus building trust in the process.

Deep learning and machine learning on the other hand are a somewhat different beast. Their complexity and their way of digressing from conventional procedural algorithmic wisdom make them hard to approach. Coupled with vast amounts of data, this makes for opaque systems, and adding poor data quality to the mix only aggravates the issue.

It's still early days for mainstream AI, but dealing with opaqueness may prove key to its adoption.


Continue reading here:

Artificial intelligence in the real world: What can it actually do? - ZDNet


Artificial intelligence: What’s real and what’s not in 2017 – The Business Journals

Posted: at 1:14 pm

I'm a big Star Wars fan, so when Rogue One: A Star Wars Story descended on theaters this month, I of course braved the crowds to see it twice in the first 18 hours. And just like all the other Star Wars movies, Rogue One stoked our geeky ...


Here is the original post:

Artificial intelligence: What's real and what's not in 2017 - The Business Journals


UW CSE announces the Guestrin Endowed Professorship in Artificial Intelligence and Machine Learning – UW Today

Posted: at 1:14 pm


February 23, 2017

Carlos Guestrin in the Paul G. Allen Center for Computer Science & Engineering at the UW. Photo: Dennis Wise/University of Washington

University of Washington Computer Science & Engineering announced today the establishment of the Guestrin Endowed Professorship in Artificial Intelligence and Machine Learning. This $1 million endowment will further enhance UW CSE's ability to recruit and retain the world's most outstanding faculty members in these burgeoning areas.

The professorship is named for Carlos Guestrin, a leading expert in the machine learning field, who joined the UW CSE faculty in 2012 as the Amazon Professor of Machine Learning. Guestrin works on the machine learning team at Apple, which he joined when it acquired the company he founded, Seattle-based Turi, Inc. Guestrin is widely recognized for creating the high-performance, highly scalable machine learning technology first embodied in his open-source project GraphLab.

At Apple, Guestrin is helping establish a new Seattle hub for artificial intelligence and machine learning research and development, as well as strengthening ties between Apple and UW researchers.

"Apple incorporates machine learning across our products and services, and education has been a part of Apple's DNA from the very beginning," said Johny Srouji, senior vice president of Hardware Technologies at Apple.

"Seattle and UW are near and dear to my heart, and it was incredibly important to me and our team that we continue supporting this world-class institution and the amazing talent coming out of the CSE program," said Guestrin. "We look forward to strong collaboration between Apple, CSE and the broader AI and machine learning community for many years to come."

For more information, contact Ed Lazowska, Bill & Melinda Gates Chair in Computer Science & Engineering, at lazowska@cs.washington.edu, or Guestrin at guestrin@cs.washington.edu.

Read more here:

UW CSE announces the Guestrin Endowed Professorship in Artificial Intelligence and Machine Learning - UW Today

