
Category Archives: Artificial Intelligence

Supercharge healthcare with artificial intelligence – CIO

Posted: April 23, 2017 at 12:53 am

Pattern-recognition algorithms can transform horses into zebras; winter scenes can become summer; artificial intelligence algorithms can generate art; robot radiologists can analyze your X-rays with remarkable precision.

We have reached the point where pattern-recognition algorithms and artificial intelligence (A.I.) are more accurate than humans at the visual diagnosis and observation of X-rays, stained breast cancer slides and other medical signs involving general correlations between normal and abnormal health patterns.

Before we run off and fire all the doctors, let's better understand the A.I. landscape and the technology's broad capabilities. A.I. won't replace doctors; it will help to empower them and extend their reach, improving patient outcomes.

The challenge with artificial intelligence is that no single, agreed-upon definition exists. Nils Nilsson defined A.I. as activity devoted to making machines intelligent, where intelligence is the quality that enables an entity to function appropriately and with foresight in its environment. But that definition isn't close to describing how A.I. evolved.

Artificial intelligence began with the Turing Test, proposed in 1950 by Alan Turing, the scientist, cryptanalyst and theoretical biologist. In the nearly 70 years since, rapid progress has steadily advanced A.I. capabilities.

Isaac Asimov proposed the Three Laws of Robotics in 1950. The first A.I. program was coded in 1951. In 1959, MIT began research in the field of artificial intelligence. GM introduced the first robot into its production assembly line in 1961. The 1960s were transformative: the first machine learning program was written, an A.I. program that understood natural language was demonstrated for the first time, and the first chatbot emerged. In the 1970s, the first autonomous vehicle was designed at the Stanford A.I. lab. Healthcare applications for A.I. were first introduced in 1974, along with an expert system for medical diagnostics. Lisp machines emerged in the 1980s, and neural networks began to be integrated with autonomous vehicles. IBM's famous Deep Blue beat Garry Kasparov at chess in 1997. And by 1999, the world was experimenting with A.I.-based domesticated robots.

Innovation was further inspired in 2004 when DARPA hosted the first design competition for autonomous vehicles in the commercial sector. By 2005, big tech companies, including IBM, Microsoft, Google and Facebook, were actively investing in commercial applications, and the first recommendation engines surfaced. The highlight of 2009 was Google's first self-driving car, some three decades after the first autonomous vehicle was tested at Stanford.

The fascination with narrative science, using A.I. to write reports, was demonstrated in 2010, and IBM Watson was crowned a Jeopardy champion in 2011. Narrative science quickly evolved into personal assistants with the likes of Siri, Google Now and Cortana. In 2015, Elon Musk and others launched OpenAI to "discover and enact the path to safe artificial general intelligence," an effort to build a friendly A.I. In early 2016, Google's DeepMind defeated legendary Go player Lee Se-dol in a historic victory.

What will 2017 have in store for artificial intelligence? With the history of A.I. behind us, we can now determine how A.I. could potentially help advance our healthcare capabilities through four actions:

The foundation of A.I. has been divided into four essential classifications. Next, identify the classification that has the greatest ability to advance your current business model.

Normally, A.I. is considered an alternative or replacement for replicating intelligent behavior, a replication that could potentially surpass human abilities. To date, however, high-performance A.I. has succeeded only in narrow fields, such as gaming, facial recognition and driving cars.

The full spectrum of A.I. is much broader than the narrow fields we read about in the daily headlines.

Prior to outlining the technology stack and the domains where artificial intelligence offers value, we need to review the framework of artificial intelligence. Selecting among the broad categories of A.I. will clarify where A.I. adds value and, more specifically, how we as CIOs can tap directly into that value for our organizations.

Artificial intelligence is broken out into 10 functional areas:

If we drill into machine learning, a subtype within artificial intelligence, we have three primary sub-classifications: supervised, unsupervised and reinforcement learning.

We live in a business climate where the norm is continuous pressure to perform, deliver and innovate. CIOs are forever in search of the tool for competitive advantage. Attaining knowledge of the A.I. landscape and A.I. capabilities will drive more informed decisions, resulting in better services for consumers.

The simple version of this stack is:

The evaluation of the technology stack is challenging and can result in the incorrect identification of capabilities that are too immature to be fully integrated into conventional business functions, processes and workflows. Spend the required time with your teams to properly evaluate where business and technical capabilities should be extended.

The A.I. foundation has been determined, the framework selected and the A.I. technology stack evaluated. We're now ready to engage an emerging organization to help us achieve the expanded capabilities we have defined.

The following hot A.I. companies will help open up the possibilities of how A.I. can generate new business models, fueling new organizational growth.

Assessing the potential of artificial intelligence to differentiate your organizational capabilities starts with an understanding of the A.I. foundation, the A.I. framework, the A.I. technology stack and the A.I. companies offering dynamic and useful interactions. This strong background will serve you and your team well as you venture into the uncharted world of artificial intelligence.

This article is published as part of the IDG Contributor Network. Want to Join?

Excerpt from:

Supercharge healthcare with artificial intelligence - CIO

Posted in Artificial Intelligence | Comments Off on Supercharge healthcare with artificial intelligence – CIO

Elon Musk’s new plan to save humanity from artificial intelligence – KOLO

Posted: at 12:53 am

WASHINGTON (CNNMoney) -- Elon Musk has a new plan to protect humanity from artificial intelligence -- if you can't beat 'em, join 'em.

In October 2014, Musk ignited a global discussion on the perils of artificial intelligence. Humans might be doomed if we make machines that are smarter than us, Musk warned. He called artificial intelligence "our greatest existential threat."

Now he is hoping to harness AI in a way that will benefit society.

In a recent interview with the website waitbutwhy.com, Musk explained that his attempt to sound the alarm on artificial intelligence didn't have an impact, so he decided to try to develop artificial intelligence in a way that will have a positive effect on humanity.

So Musk, who is already the CEO of SpaceX and Tesla, is now heading up a startup called Neuralink. The San Francisco outfit is building devices to connect the human brain with computers. Initially, the technology could repair brain injuries or cancer lesions. Quadriplegics may benefit from the technology.

But the most amazing and alarming implications of Musk's vision lie years and likely decades down the line. Brain-machine interfaces could overhaul what it means to be human and how we live.

Today, technology is implanted in brains only in very limited cases, such as to treat Parkinson's disease. Musk wants to go further, creating a robust plug-in for our brains that every human could use. The brain plug-in would connect to the cloud, allowing anyone with a device to immediately share thoughts.

Humans could communicate without having to talk, call, email or text. Colleagues scattered throughout the globe could brainstorm via a mindmeld. Learning would be instantaneous. Entertainment would be any experience we desired. Ideas and experiences could be shared from brain to brain.

We would be living in virtual reality, without having to wear cumbersome goggles. You could re-live a friend's trip to Antarctica -- hearing the sound of penguins, feeling the cold ice -- all while your body sits on your couch.

But many technical hurdles remain. Musk believes it will be eight to 10 years before this kind of technology will be ready for use by people without disabilities. Musk's companies have made a habit of achieving what seemed impossible, but he is also notorious for aggressive deadlines that his companies don't meet.

Neuralink told waitbutwhy.com that it would need to simulate one million brain neurons before a transformative brain-machine interface could be built. If current rates of progress hold, it won't reach that milestone until 2100.

In the meantime, there are many reasons for humans to be wary of implanting a computer in their brain. Any digital technology can be hacked. Humans might be unwittingly turned into malicious agents for unsavory causes. Computers crash too. If the interface fails, that could imperil our physical health.

With a brain-machine interface recording our lives, all of our experiences would be stored in the cloud. Privacy would be threatened. Governments or others would have incentives to access that information and track behavior.

If our brains merge with machines, our thoughts would become indistinguishable from what we'd downloaded from the cloud. We could struggle to know if our beliefs and views came from personal experiences, or from what the internet sent to our brains. Humans would be putting enormous trust in the maker of the brain-machine interface to share good information with them.

As Musk sees it, our options are limited.

"We're going to have the choice of either being left behind," Musk told waitbutwhy.com, "and being effectively useless, or like a pet."

Link:

Elon Musk's new plan to save humanity from artificial intelligence - KOLO

Posted in Artificial Intelligence | Comments Off on Elon Musk’s new plan to save humanity from artificial intelligence – KOLO

Our fear of artificial intelligence? It is all too human – San Francisco Chronicle

Posted: at 12:53 am

The classic sci-fi fear that robots will intellectually outpace humans has resurfaced now that artificial intelligence is part of our daily lives. Today artificially intelligent programs deliver food, deposit checks and help employers make hiring decisions. If we are to worry about a robot takeover, however, it is not because artificial intelligence is inhuman and immoral, but rather because we are coding-in distinctly human prejudice.

Last year, Microsoft released an artificially intelligent Twitter chatbot named Tay aimed at engaging Millennials online. The idea was that Tay would spend some time interacting with users, absorb relevant topics and opinions, and then produce its own content. In less than 24 hours, Tay went from tweeting that humans are "super cool" to spewing racist, genocidal neo-Nazi one-liners. Needless to say, Microsoft shut down Tay and issued an apology.

We need to hold the companies who make our AI-enabled devices accountable to a standard of ethics.

As the Tay disaster revealed, artificial intelligence does not always distinguish between the good, the bad and the ugly in human behavior. The type of artificial intelligence frequently used in consumer products is called machine learning. Before machine learning, humans analyzed data, found a pattern and wrote an algorithm (like a step-by-step recipe) for the computer to use. Now, we feed the computer huge amounts of data points, and the computer itself spots the pattern and then writes the algorithm for itself to follow.
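That shift can be sketched in a few lines of Python: instead of a human hand-coding the decision rule, a toy learner derives the rule from labeled examples. All names and data here are hypothetical illustrations, not any real system.

```python
# A minimal sketch of learning a rule from data instead of writing it by hand.

def learn_threshold(examples):
    """Learn a one-feature decision rule from (value, label) pairs.

    Returns a threshold such that values above it are predicted 1.
    The "algorithm" (the threshold) is found from data, not hand-coded.
    """
    positives = [v for v, label in examples if label == 1]
    negatives = [v for v, label in examples if label == 0]
    # Place the boundary midway between the two class averages.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def predict(threshold, value):
    return 1 if value > threshold else 0

# Hypothetical training data: tumor sizes (cm), 1 = malignant, 0 = benign.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (4.0, 1), (4.5, 1), (5.0, 1)]
t = learn_threshold(data)
print(predict(t, 4.2))  # prints 1: above the learned boundary of 3.0
```

Feed the learner different data and it derives a different rule, which is exactly why, as the next paragraph shows, the choice of training data matters so much.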

For example, if we wanted the artificial intelligence to correctly identify cars, then we'd teach it what cars look like by giving it lots of pictures of cars. If all the pictures we chose happened to be red sedans, then the artificial intelligence might conclude that cars, by definition, are red sedans. If we then showed it a picture of a blue sport utility vehicle, it might determine that it wasn't a car. This is all to say that the accuracy of AI-powered technology depends on the data we use to teach it.
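The red-sedan pitfall can be made concrete with a hypothetical nearest-neighbor "car detector." Because every car it was trained on is red, it ends up treating redness as the defining feature and rejects a blue SUV; the features and examples below are made up for illustration.

```python
# A toy nearest-neighbor classifier trained only on red sedans.

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_label(train, query):
    """Return the label of the closest training example."""
    return min(train, key=lambda item: distance(item[0], query))[1]

# Features: (redness 0-1, size 0-1). Every "car" example is a red sedan.
train = [
    ((0.9, 0.5), "car"), ((0.95, 0.55), "car"), ((0.85, 0.45), "car"),
    ((0.1, 0.1), "not car"), ((0.2, 0.15), "not car"),
]

blue_suv = (0.1, 0.8)  # blue and large: a real car, but unlike the training set
print(nearest_label(train, blue_suv))  # prints "not car": the learned bias
```

The fix is not a smarter distance function but more representative training data, which is the article's point about biased inputs producing biased outputs.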

When there is bias in the data used to train artificial intelligence, there is bias in its output.

AI-controlled online advertising is almost six times more likely to show high-paying job posts to men than to women. An AI-judged beauty contest found white women most attractive. Artificially intelligent software used in court to help judges set bail and parole sentences has also shown racial prejudice. As ProPublica reported, "The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants." It is not that the algorithm is inherently racist; it's that it was fed stacks of court filings that were harsher on black men than on white men. In turn, the artificial intelligence learned to call black defendants criminals at an unfairly high rate, just as a human might.

That algorithm-fueled artificial intelligence amplifies human bias should make us wary of Silicon Valley's claim that this technology will usher in a better future.

Even when algorithms are not involved, old-fashioned assumptions make their way into the newest gadgets. I walked into a room the other day to a man yelling, "Alexa, find my phone!" only later to realize he was talking to his Amazon Alexa robot personal assistant, not a human female secretary. It is no coincidence that all the AI personal assistants (Apple's Siri, Microsoft's Cortana and Amazon's Alexa), marketed to perform traditionally female tasks, default to female voices. What is disruptive about that?

Some have suggested that AI's bias problem stems from the homogeneity of the people making the technology. Silicon Valley's top tech firms are notoriously dominated by white men, and hire fewer women and people of color than the rest of the business sector. Companies such as Uber and Tesla have gained reputations for corporate cultures hostile to women and people of color. Google was sued in January by the Department of Labor for failing to provide compensation data, and then charged with underpaying its female employees (Google is federally contracted and must hire in accordance with federal law). There is no question that there should be more women and people of color in tech. But adding diversity to product teams alone will not counteract the systemic nature of the bias in data used to train artificial intelligence.

Careful attention to how artificial intelligence learns will require placing antibias ethics at the center of tech companies' operating principles, not just as an after-the-fact inclusion measure mentioned on the company website. This ethical framework exists in other fields: medicine, law, education, government. Training, licensing, ethics boards, legal sanctions and public opinion coalesce to establish standards of practice. For instance, medical doctors are taught the Hippocratic oath and agree to uphold certain ethical practices or lose their licenses. Why can't tech have a similar ethical infrastructure?

Perhaps ethics in tech did not matter as much when the products were confined to calculators, video games and iPods. But now that artificial intelligence makes serious, humanlike decisions, we need to hold it to humanlike moral standards and humanlike laws. Otherwise, we risk building a future that looks just like our past and present.

Madeleine Chang is a San Francisco Chronicle staff writer. Email: mchang@sfchronicle.com Twitter: @maddiechang

Original post:

Our fear of artificial intelligence? It is all too human - San Francisco Chronicle

Posted in Artificial Intelligence | Comments Off on Our fear of artificial intelligence? It is all too human – San Francisco Chronicle

Artificial intelligence: fulfilling the failed promise of big data – Information Management

Posted: at 12:53 am

The topic of artificial intelligence is dominating discussions of data management this year. But while a growing number of organizations are interested in AI, many don't fully understand what the technology can do to help boost their customer engagement or the bottom line.

Forrester Research analyst Brandon Purcell has recently authored two reports on the current strong interest in artificial intelligence and what can be expected from it. In part one of Information Management's interview with Purcell yesterday, we discussed "The Top Emerging Technologies in Artificial Intelligence." In part two today, we discuss the report "Artificial Intelligence Technologies and Solutions, Q1 2017."

Information Management: Artificial intelligence seems to have replaced big data as the big theme for 2017. What is your sense of how many organizations are working with artificial intelligence, and where are they in the process?

Brandon Purcell: I'd agree that AI has replaced big data as the buzzword du jour, but in my mind it actually has the ability to fulfill big data's failed promise. Big data really focused on capturing massive amounts of data from multiple sources. Companies got really good at that, but they've struggled to turn that data into insights and insight into action. The promise of AI is to complete that process, from data to insight to action, in a virtuous cycle that optimizes continuously.

According to Forrester's Business Technographics survey of over 3,000 global technology and business decision makers from last year, 41 percent of global firms are already investing in AI, and another 20 percent are planning to invest in the next year.

Most large enterprises' first foray into AI is with chatbots for customer service, what we call conversational service solutions. These run the gamut from hard-coded, rules-based chatbots, which aren't artificially intelligent, to very sophisticated engines using a combination of NLP, NLG and deep learning. From a customer insights perspective, many companies are starting to use some of the sensory components of AI, such as image, video and speech analytics, to unlock insights from unstructured data.

IM: What are the top reasons that organizations are adopting artificial intelligence, and what gains do they hope to realize?

Purcell: Organizations are adopting AI to optimize the customer journey from discovery through conversion, all the way to the end of the customer lifecycle. AI promises to automate the process of understanding customers and anticipating their needs, then delivering the right experience to them at the right time. Organizations are hoping to impact the top line by acquiring new customers and increasing the value and lifetime of existing ones, and theyre hoping to impact the bottom line as well by reducing costs through automation.

IM: What are some of the top obstacles or challenges to achieving success with an artificial intelligence effort?

Purcell: The primary challenge is, and will always be, the data. Data is the lifeblood of AI. An AI system needs to learn from data in order to fulfill its function. Unfortunately, organizations struggle to integrate data from multiple sources to create a single source of truth on their customers. AI will not solve these data issues; it will only make them more pronounced.

After data, traditional people and process challenges come into play. Who owns the AI initiative? Typically the group in the organization with the technical skills to implement AI is not the same group that will actually own its execution. We see companies fumble this handoff all the time. And how will you measure success to prove the ROI of the effort? Rigorous measurement processes still remain elusive for most companies.

IM: What are your thoughts on artificial intelligence best practices that organizations should use to best achieve success?

Purcell: Start with a narrow use case and make sure you have data for it. Then bring together internal stakeholders and agree upon how youll measure success. For example, a subscription-based business may want to decrease customer churn.

They probably have historical data on past customers who have churned that they can use to train a model. They may also have data on retention incentives that have worked in the past. Assemble the marketers who will oversee the retention campaign as well as the data engineers and scientists responsible for building the model. And agree upon a measurement methodology.

Traditional test and control works quite well. Treat one set of customers and see how much higher their retention rate is than that of a holdout sample after a specified period of time. Assuming the success of the project, you can build the business case for further investment.
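The test-and-control measurement Purcell describes reduces to a simple retention-rate comparison between a treated group and a holdout. The group sizes and counts below are hypothetical.

```python
# Sketch of measuring a retention campaign with a treated group and a holdout.

def retention_rate(retained, total):
    """Fraction of customers still subscribed after the measurement window."""
    return retained / total

treated = {"total": 1000, "retained": 820}   # received the retention offer
holdout = {"total": 1000, "retained": 760}   # received nothing

# Lift = treated retention minus holdout retention, in percentage points.
lift = retention_rate(treated["retained"], treated["total"]) - \
       retention_rate(holdout["retained"], holdout["total"])
print(f"Retention lift: {lift:.1%}")  # prints "Retention lift: 6.0%"
```

With the lift and an average customer value in hand, the ROI of the model follows directly, which is the "business case for further investment" the interview mentions.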

David Weldon is the editor-in-chief of Information Management.

Read the rest here:

Artificial intelligence: fulfilling the failed promise of big data - Information Management

Posted in Artificial Intelligence | Comments Off on Artificial intelligence: fulfilling the failed promise of big data – Information Management

The disruption and promise of artificial intelligence – CIO

Posted: at 12:53 am

By Peter Bendor-Samuel, star Advisor, CIO | Apr 21, 2017 9:20 AM PT

Opinions expressed by ICN authors are their own.


There's no shortage of books, news articles and comments in social media about how artificial intelligence (A.I.) is shaping our future. Although it's still blazing a trail, we're on the brink of A.I. disruption that will change all industries and society at a very deep and fundamental level. I believe it will be one of the next great wealth generators.

My optimism about A.I.'s growing potential arises from many successful use cases, clear evidence that A.I. is now getting the scale, maturity and ecosystem in which it can be effective. Although A.I. has been developing for 20 to 30 years, it's only now gaining enough of the elements necessary for a supporting ecosystem.

Individuals active in the A.I. space discussed some use cases at a recent dinner I attended at the United Nations. Government entities, distinguished academics and journalists, and the leaders in AI at companies such as Facebook, Google, IBM Watson, Intel and IPSoft attended. We talked, for instance, about how A.I. affects drilling for oil and how it helps provide medicine to the under-served.

A particularly interesting case involves a condition affecting about 120,000 stroke patients in the U.S. Currently, it is diagnosed in only about 4,000 of those 120,000 cases. If doctors can't diagnose the condition and treat it appropriately, the result is paralysis. The condition largely goes uncaught because healthcare providers lack the ability to diagnose and treat it in the golden four-hour period after a stroke occurs. An A.I. engine can quickly process the hundred or so MRI images involved and easily diagnose the condition, saving lives across the country.

It's clear that A.I. is a very powerful vehicle even though it is still in its nascent stage. The optimism shared by corporate A.I. users and governments at the meeting was very encouraging.

Over time, there is almost no endeavor humans undertake that A.I. doesn't stand to augment, making humans more effective and more productive. In doing so, artificial intelligence answers the dilemma we've had for the last 20 years: a lack of productivity growth in the U.S. Our productivity rate has fallen over the last 20 years and is now measured at 1 percent or less.

Productivity is the largest determinant of real wage gains. If we're going to increase wealth, particularly in a sustainable and more balanced way, A.I. stands as a very important tool to dramatically increase broad productivity across almost every human endeavor.


Peter Bendor-Samuel is founder and CEO of Everest Group, which provides advisory and research services to Global 1000 enterprises, leading service providers, and private equity firms.


View post:

The disruption and promise of artificial intelligence - CIO

Posted in Artificial Intelligence | Comments Off on The disruption and promise of artificial intelligence – CIO

Artificial intelligence could build new drugs faster than any human … – Quartz

Posted: April 21, 2017 at 2:26 am


Artificial intelligence algorithms are being taught to generate art, human voices, and even fiction stories all on their own, so why not give them a shot at building ... (Quartz)

Continue reading here:

Artificial intelligence could build new drugs faster than any human ... - Quartz

Posted in Artificial Intelligence | Comments Off on Artificial intelligence could build new drugs faster than any human … – Quartz

Artificial Intelligence Technology And Its Impact On Business – Women Love Tech (press release) (blog)

Posted: at 2:26 am

When you hear the phrase artificial intelligence, what comes to mind?

Most people think of HAL, the Terminator, or R2-D2 and C-3PO, and tend to equate AI with science fiction. Few people equate it with customer service or sales, because it's such a fantastic concept that it's hard to reconcile with everyday life.

Yet artificial intelligence is already here, and it learns more with each passing day. IBM's Watson was only the beginning: millions of AI bots now help people in expanding capacities. From AIs that answer the phone and direct you to the correct extension to AIs that can troubleshoot your computer, artificial intelligence isn't just for science fiction anymore!

What is AI?

Artificial intelligence is built with algorithms that process data. As the algorithms are fed new data, they account for it, producing a sort of learning process in which the AI gets smarter as it is given new information. AIs can be given parameters that create personalities, which can then help your customers with information or transactions, among other things. By finding and synthesizing information at faster rates, businesses save time and money, allocating their human resources to more complex needs.
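One minimal, hypothetical sketch of that "accounts for new data" behavior is an online learner: it refines its estimate incrementally with each new observation, rather than being reprogrammed by hand. The class name and readings below are illustrative assumptions.

```python
# An online learner that updates its estimate as each observation arrives.

class OnlineMean:
    """Keeps a running estimate that shifts with every new data point."""

    def __init__(self):
        self.count = 0
        self.estimate = 0.0

    def update(self, value):
        self.count += 1
        # Standard incremental-mean update: no need to store past data.
        self.estimate += (value - self.estimate) / self.count

model = OnlineMean()
for reading in [10.0, 12.0, 11.0, 13.0]:
    model.update(reading)
print(model.estimate)  # prints 11.5, the mean of the four readings
```

Real systems update far richer models than a mean, but the pattern is the same: each new data point nudges the learned state, so the program's behavior improves without anyone rewriting it.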

Components that enable AI

AI has been enabled by several interacting components. The world wide web coupled with high-speed capability gives AI a place to perform. Mobile technology provides a need for AI, as people on the go have a demand for information that is accessed quickly. This takes a web of interactions, from accessing the data to interpreting it and providing answers based on the search parameters. Every time you do a voice search for a pizza place on your mobile device, millions of AIs communicate with one another to deliver your answers nearly instantaneously.

Data Analysis and AI

Because AIs can process data so quickly, they can scan for specific data at a much higher rate than humans can. Previously, the data that AI now seeks would have been stored in file cabinets and company vaults. Today it is stored digitally, and AIs are like tiny digital filing clerks that can move and think at lightning speed. This frees your human employees to handle the more complicated problems that the AIs can't.

Customer Service/Virtual Assistance

Have you ever had a service interrupted in the middle of the night, after customer service hours? The only alternatives for businesses are to have employees available 24/7 or to use AI chatbots. A good development team that collects and uses data from interactions with its bots can keep the bots updated, allowing them to learn and mature to more easily address the needs of users.

Data Modelling

Data relates to other data, and AI allows us to manipulate data in whatever way comes to mind. We use data to make predictions and study trends, but AI can give us unlimited options for studying and interpreting the data we have. This can give us new angles to explore and perhaps even usher in a few breakthroughs.

Businesses that take advantage of artificial intelligence will have an edge as we make our AIs smarter. As each AI and its development team learn how it is applied, it gets faster and more efficient. Any business would love to have an efficient team that gets instant results, and with AI, that is becoming more and more available!

See original here:

Artificial Intelligence Technology And Its Impact On Business - Women Love Tech (press release) (blog)

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Technology And Its Impact On Business – Women Love Tech (press release) (blog)

How artificial intelligence learns to be racist – Vox – Vox

Posted: April 19, 2017 at 10:07 am

Open up the photo app on your phone and search for "dog," and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog looks like.

This and other modern-day marvels are the result of machine learning. These are programs that comb through millions of pieces of data and start making correlations and predictions about the world. The appeal of these programs is immense: these machines can use cold, hard data to make decisions that are sometimes more accurate than a human's.

But know this: machine learning has a dark side. "Many people think machines are not biased," Princeton computer scientist Aylin Caliskan says. "But machines are trained on human data. And humans are biased."

Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators.

Nearly all new consumer technologies use machine learning in some way. Take Google Translate: no person instructed the software how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own. In other cases, machine learning programs make predictions about which résumés are likely to yield successful job candidates, or how a patient will respond to a particular drug.

Machine learning is a program that sifts through billions of data points to solve problems (such as "can you identify the animal in the photo?"), but it doesn't always make clear how it has solved the problem. And it's increasingly clear these programs can develop biases and stereotypes without us noticing.

Last May, ProPublica published an investigation on a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software systematically rated black people at a higher risk than whites.

Scores like this, known as risk assessments, are increasingly common in courtrooms across the nation, ProPublica explained. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about defendants' freedom.

The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.

This story reveals a deep irony about machine learning. The appeal of these systems is that they can make impartial decisions, free of human bias. "If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long," ProPublica wrote.

But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.

It's stories like the ProPublica investigation that led Caliskan to research this problem. As a female computer scientist who was routinely the only woman in her graduate school classes, she's sensitive to this subject.

Caliskan has seen bias creep into machine learning in often subtle ways; for instance, in Google Translate.

Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, the result always comes out as "he's a doctor" in a gendered language. The Turkish sentence didn't say whether the doctor was male or female. The computer just assumed that if you're talking about a doctor, it's a man.

Recently, Caliskan and colleagues published a paper in Science that finds that as a computer teaches itself English, it becomes prejudiced against black Americans and women.

Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking at how often certain words appear in the same sentence. Take the word "bottle." The computer begins to understand what the word means by noticing that it occurs more frequently alongside the word "container," and also near words that connote liquids, like "water" or "milk."
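The co-occurrence idea is simple enough to sketch in a few lines. This toy illustration (the three-sentence corpus and the `relatedness` helper are invented for the example, not the study's actual code) counts how often pairs of words share a sentence:

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the 840 billion words the real study used.
corpus = [
    "the bottle is a container for water",
    "pour milk from the bottle into a container",
    "the dog chased the ball across the yard",
]

# Count how often each pair of words appears in the same sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def relatedness(w1, w2):
    """Co-occurrence count as a crude stand-in for semantic similarity."""
    return cooccur[tuple(sorted((w1, w2)))]

# "bottle" shows up alongside "container" more often than alongside "yard".
print(relatedness("bottle", "container"))  # 2
print(relatedness("bottle", "yard"))       # 0
```

Real systems refine raw counts into dense word vectors, but the first clue to meaning is the same: which words keep company with which.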

This idea of teaching robots English actually comes from cognitive science and its understanding of how children learn language: how frequently two words appear together is the first clue we get to deciphering their meaning.

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.

In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words "male" and "engineer." But if a person lags on associating "woman" and "engineer," it's a demonstration that those two terms are not closely associated in the mind, implying bias. (The IAT has some known reliability issues in humans.)

Here, instead of looking at lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word "pleasant" than white names were. And female names were more associated with words relating to family than male names were. (In a weird way, the IAT might be better suited to computer programs than to humans, because humans answer its questions inconsistently, while a computer yields the same answer every single time.)
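The measurement boils down to comparing cosine similarities between word vectors instead of reaction times. A minimal sketch, using made-up two-dimensional vectors rather than real learned embeddings (the names and numbers here are invented purely for illustration), shows the arithmetic:

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 2-D "embeddings"; real models use hundreds of dimensions
# learned from text. These values are fabricated to mimic the
# skew the study found in web-trained vectors.
vectors = {
    "emily":      (0.9, 0.1),
    "lakisha":    (0.1, 0.9),
    "pleasant":   (0.8, 0.2),
    "unpleasant": (0.2, 0.8),
}

def association(word):
    """How much more strongly a word leans 'pleasant' than 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

# A gap between the two scores is the embedding-level analogue
# of the IAT's reaction-time lag.
print(association("emily") > association("lakisha"))  # True
```

The deterministic nature of this test is exactly why it repeats perfectly where human IAT results wobble.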

Like a child, a computer builds its vocabulary from how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That's not because African Americans are unpleasant. It's because people on the internet say awful things, and that leaves an impression on our young AI.

This is as big a problem as you might think.

Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, these programs can learn and act on gender stereotypes in their decision-making.

"Let's say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions," she says. "And this might be the same for a woman applying for a software developer or programmer position. Almost all of these programs are not open source, and we're not able to see what's exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased."

And that will be a challenge in the future. AI is already making its way into the health care system, helping doctors find the right course of treatment for their patients. (There's early research on whether it can help predict mental health crises.)

But health data, too, is filled with historical bias. It's long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them after surgery.)

Might AI then recommend surgery at a lower rate for women? It's something to watch out for.

Inevitably, machine learning programs are going to encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between what is bias and what is just a fact about the world.

Machine learning programs will pick up on the fact that most nurses throughout history have been women. They'll realize most computer programmers are male. "We're not suggesting you should remove this information," Caliskan says. "It might actually break the software completely."

Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, "Why am I getting these results?" and check their output for bias. They need to think hard about whether the data they are combing reflects historical prejudices. Caliskan admits that best practices for combating bias in AI are still being worked out. "It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists," she says.

But at the very least, the people who use these programs should be aware of these problems, and not take for granted that a computer can produce a less biased result than a human.

And overall, it's important to remember: AI learns about how the world has been. It picks up on status quo trends. It doesn't know how the world ought to be. That's up to humans to decide.

Original post:

How artificial intelligence learns to be racist - Vox - Vox


Artificial Intelligence Comes to Hollywood – Studio Daily

Posted: at 10:07 am

Is Your Job Safe?

Last September, when the 20th Century Fox sci-fi thriller Morgan premiered, artificial intelligence (AI) took center stage for the first time not as a plot point but as a tool. The film studio revealed that it had used IBM's Watson, a supercomputer endowed with AI capabilities, to make the movie's trailer. IBM research scientists taught Watson about horror movie trailers by feeding it 100 such trailers, cut into scenes. Watson then analyzed the data in terms of visuals, audio, and emotions to learn what makes a horror trailer scary. Then the scientists fed in the entire 90-minute Morgan. According to Engadget, Watson instantly zeroed in on 10 scenes totaling six minutes of footage.

The media buzz that followed both overstated and understated what had actually happened. In fact, a human being edited the trailer, using the scenes Watson chose, so AI didn't actually edit the trailer. But it was also a benchmark, tantalizing the Hollywood creatives (and studio executives) interested in how artificial intelligence might change entertainment.


The discussion about AI is still a bit premature; when today's products are described, "machine learning" is the more accurate term. The first person to posit that machines could actually learn was computer gaming pioneer Arthur Samuel, in 1959. Based on pattern recognition and dependent on enough data to train the computer, machine learning is used for any repetitive task. Philip Hodgetts, who founded two companies integrating machine learning, Intelligent Assistance and Lumberjack System, notes that there's a big leap from doing a single task really well to a generalized intelligence that can do multiple self-directed tasks. Most experts agree that autonomous cars are the closest we have today to a real-world artificial intelligence.

Machine learning can and does play an important role in a growing number of applications aimed at the media and entertainment business, nearly all of them invisible to the end user. Perhaps the most obvious are the applications aimed at distribution of digital media. Iris.TV, which partners with numerous media companies from Time Warner's Telepictures Productions to Hearst Digital Media, uses machine learning to create what it dubs "personalized video programming." The company takes in the target company's digital assets and creates a taxonomy and structure, with the metadata forming the basis of recommendations. The APIs, which integrate with most video players, learn what the user watches, then create a playlist based on those preferences. The results are impressive: The Hollywood Reporter, for example, was able to more than double its video views, from 80 million in October 2016 to 210 million in February 2017.

Machine learning also plays an increasingly significant role in video post-production, much more so than in production, which is still a hands-on, very human job. "The production process is dependent on bipedal mobility," notes Hodgetts wryly. "We've motorized cranes and so on, but it'll be harder to replace a runner on set." Even so, the process of creating digital imagery will feel the impact of machine learning in the not-so-distant future. Adobe, for example, is working with the Beckman Institute for Advanced Science and Technology to use a kind of machine learning to teach a software algorithm how to distinguish and eliminate backgrounds. With the goal of automating compositing, the software has been trained on a dataset of 49,300 images.

Today's machine learning-enhanced tools fall under the umbrella of "cognitive services," a term that covers any off-the-shelf program that has already been trained at a task, whether it's facial recognition or motion detection. At NAB 2017, Finnish company Valossa will debut its Alexa-integrated real-time video recognition platform, Val.ai.

Val.ai is intended to solve the problem of discoverability. "Companies that have lots of media assets and want to monetize them better fall into this category," says Valossa founder and chief executive Mika Rautiainen. "Or they can also re-use archived material for new content. Increasingly, we've found other scenarios emerging in the years we've been creating the service, related to content analytics. Deep content understanding correlated with user behavior lets media companies serve contextual advertising and other end-user experiences around media." The Valossa video intelligence engine is in beta at 120 companies, the majority of which are in the U.S. and the U.K.

Rautiainen states that content analytics can also be used to promote and sell items in a video, a capability Valossa is not developing. "But I was surprised how many companies are working around reinventing retail or the purchasing process," he says. Valossa also has a technology demo for facial-expression recognition, which Rautiainen calls "a next-level intelligence," and Valossa Movie Finder, with a database of metadata from 140,000 movies.


Arvato Systems will debut its next-generation MAM system, Media Portal, at NAB 2017. Yvonne Thomas, the company's product manager for the broadcast solutions division, says Media Portal integrates analytics and machine learning via an API and indexes and updates the respective media. It will also support visualization for the user in the form of "facets" that can handle a wide range of data.

At Piksel, chief technology officer Mark Christie points out that machine learning capabilities have accelerated dramatically in recent years and, through natural-language processing techniques, can now enable a deeper understanding of content. In 2016, Piksel acquired Lingospot, with its patented and patent-pending natural language processing, semantic search, image analysis, and machine learning technologies, and integrated them into Piksel's Palette to collect proprietary metadata on a scene-by-scene basis. Fuse, which is built on Piksel Palette, enriches metadata with cast and crew lists or other documentation from third-party sources and serves it across broadcast and OTT workflows.

Although the advent of tools enhanced by machine learning is interesting, most people in the entertainment industry want to know how worried they should be about their jobs. Hodgetts has a simple answer: "If you can teach someone your job in three days, it will be automated [via machine learning]," he says.

At the USC School of Cinematic Arts, professor and editor Norman Hollyn has been thinking about the implications of collecting metadata for a long time. In principle, automating what used to be a tedious, labor-intensive job could wreak major changes on the role of the assistant editor. Hollyn puts a more positive spin on the integration of these new tools.

"About three years ago, I started realizing the value of machine learning and artificial intelligence," he says. "With my background, I knew just how difficult it was for humans to collect data, and I started thinking about how much easier my work would be if database fields could be automatically filled."

He agrees that machine learning will change the job of the assistant editor. "Historically, even back in the 35mm days, the assistant editor was really an incredibly specialized librarian," he says. "It's not a huge difference today. But once machine learning takes over, the librarian work will easily be taken over."

But the results, he thinks, won't be all bad. On some productions, he believes, there will be no assistants. On others, assistants may be involved in tasks such as world-building for cross-platform media or cutting trailers. "When I think about what my students may be doing in five years, it's bad news if they think they want to be assistant editors on a TV job," he says. "But they can play a role in building the world out of which comes movies, TV series, games, VR and comic books. Different people have to organize that world-building, and that's not a machine-learning capability yet."

The post-production environment always feels downward budgetary pressure and probably offers less flexibility for facility owners trying to stay afloat. "AI will be good and bad for people in our industry," says AlphaDogs chief executive Terence Curren. "The level of AI we currently have can already automate many tasks that used to employ people. Automated syncing and grouping of clips is just one example. As AI gets smarter, more jobs will be replaced, but the removal of the human element will also eliminate many mistakes that currently cost time down the pipeline. The bottom line is, if you do something that is repetitive all day, your job will be one of the first to get replaced. If you do something creative, that requires constantly changing approaches, your job will be safe for a long time."

For those worried about the ethical considerations of bringing machine learning and artificial intelligence into the workplace (as well as into potentially hundreds of consumer-facing products and services), the issue is being addressed both by giant technology companies and by the IEEE. In September 2016, Google, Facebook, Amazon, IBM, and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society to advance public understanding of the technologies and come up with standards. The Partnership says it plans to conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. Apple has since joined the group.

Meanwhile, the IEEE and its Standards Association created a new standards project, IEEE P7000, a working group that intends to define a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis, and design for big data, machine learning, and artificial intelligence.

Machine learning is here, and AI is coming, not just to the entertainment industry but to many others. There will be winners and losers, but the very human talent of creativity, a specialty of the entertainment industry, is safe for the foreseeable future.

Go here to see the original:

Artificial Intelligence Comes to Hollywood - Studio Daily


How Artificial Intelligence Might Transform the Engineering Industry – TrendinTech

Posted: at 10:07 am

Artificial intelligence is all around us. Since 1956, when the field of AI was founded, it has been a subject of public interest. But as great as it is, expectations for AI today are phenomenally high, and designing such systems is not easy. The progress of these systems can be seen in IBM's Watson and Google DeepMind's AlphaGo program, which demonstrate how increasingly powerful computing is fostering AI. There are now several AIs whose abilities exceed those of humans at particular tasks, and that number is continuously increasing. The technology is becoming popular in various industries, including engineering. The following are a few examples of current AI techniques and applications and how they might affect the engineering industry.

Machine learning: There are various methods of machine learning in use, but some of the most effective are based on the concept of artificial neural networks (ANNs). These are modeled on the neurons found in the human brain and consist of a network of nodes connected with varying weights. One method that has been used since the earliest days of training such networks is the perceptron algorithm, which teaches a network to sort inputs into one of two classes. It works by feeding in training data, comparing the expected output to the actual node output, and updating the weights based on the difference.
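The update rule just described fits in a few lines. This is a minimal sketch of the classic perceptron, trained here on logical AND as a toy two-class problem (the function names and hyperparameters are chosen for illustration):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for a two-class threshold unit."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Weighted sum of inputs, thresholded to produce 0 or 1.
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # the expected-vs-actual difference
            # Nudge each weight in proportion to the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: output 1 only when both inputs are 1 (linearly separable).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

The perceptron converges only on linearly separable problems like this one; multi-layer networks trained by backpropagation lifted that limitation.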

Artificial intelligence applications: Machine learning and AI techniques have produced various accomplishments relevant to the engineering industry over the past few years, including natural language processing (NLP), image processing, disease treatment, autonomous vehicles, and data structure technology. The Internet of Things (IoT), another engineering achievement, promises to turn every appliance into a smart one, while big data, the mass collection of data, underpins the key promise of AI analytics.

Overall, AI is certain to bring about significant changes within the engineering industry, one of which will be the automation of many low-level engineering tasks. However, this may not be as beneficial as it first appears. "Artificial intelligence will render many of the simpler professional tasks redundant, potentially replacing many of the tasks by which our younger engineers and other professionals learn the details of our trade," said Tim Chapman, director of the Arup Infrastructure Group. A study carried out by Stanford University looked into the impact of AI and found that engineering jobs are not among the roles likely to be affected much over the next 15 years. And even if some jobs are lost to AI, new ones will open up, as people will still be needed to oversee these systems.


Excerpt from:

How Artificial Intelligence Might Transform the Engineering Industry - TrendinTech
