
Category Archives: Artificial Intelligence

Becoming One Of Tomorrow’s Unicorns In The World Of Artificial Intelligence – Forbes

Posted: June 19, 2017 at 7:17 pm


Forbes
Everyone is buzzing about the impact of AI on work, and many leaders feel insecure about what it will mean in terms of their own career development and roles. Deep learning, machine learning, automation and robotics are creating a seismic shift across ...

Go here to read the rest:

Becoming One Of Tomorrow's Unicorns In The World Of Artificial Intelligence - Forbes

Posted in Artificial Intelligence | Comments Off on Becoming One Of Tomorrow’s Unicorns In The World Of Artificial Intelligence – Forbes

Artificial intelligence and the coming health revolution – Phys.Org

Posted: at 7:17 pm

June 19, 2017, by Rob Lever

[Image caption: Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology]

Your next doctor could very well be a bot. And bots, or automated programs, are likely to play a key role in finding cures for some of the most difficult-to-treat diseases and conditions.

Artificial intelligence is rapidly moving into health care, led by some of the biggest technology companies and emerging startups using it to diagnose and respond to a raft of conditions.

Consider these examples:

California researchers detected cardiac arrhythmia with 97 percent accuracy on wearers of an Apple Watch with the AI-based Cardiogram application, opening up early treatment options to avert strokes.

Scientists from Harvard and the University of Vermont developed a machine learning tool (a type of AI that enables computers to learn without being explicitly programmed) to better identify depression by studying Instagram posts, suggesting "new avenues for early screening and detection of mental illness."

Researchers from Britain's University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.

While technology has always played a role in medical care, a wave of investment from Silicon Valley and a flood of data from connected devices appear to be spurring innovation.

"I think a tipping point was when Apple released its Research Kit," said Forrester Research analyst Kate McCarthy, referring to a program letting Apple users enable data from their daily activities to be used in medical studies.

McCarthy said advances in artificial intelligence have opened up new possibilities for "personalized medicine" adapted to individual genetics.

"We now have an environment where people can weave through clinical research at a speed you could never do before," she said.

Predictive analytics

AI is better known in the tech field for uses such as autonomous driving, or defeating experts in the board game Go.

But it can also be used to glean new insights from existing data such as electronic health records and lab tests, says Narges Razavian, a professor at New York University's Langone School of Medicine who led a research project on predictive analytics for more than 100 medical conditions.

"Our work is looking at trends and trying to predict (disease) six months into the future, to be able to act before things get worse," Razavian said.

NYU researchers analyzed medical and lab records to accurately predict the onset of dozens of diseases and conditions including type 2 diabetes, heart or kidney failure and stroke. The project developed software now used at NYU which may be deployed at other medical facilities.
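The record-driven prediction described above can be sketched as a toy risk model. Everything here is illustrative: the feature names, weights, and threshold are invented for the example, not taken from the NYU project.

```python
import math

# Hypothetical feature weights; a real system would learn these from
# thousands of de-identified medical and lab records.
WEIGHTS = {"hba1c": 0.9, "bmi": 0.05, "systolic_bp": 0.02}
BIAS = -9.0

def onset_risk(labs: dict) -> float:
    """Logistic risk score in [0, 1] computed from a patient's lab values."""
    z = BIAS + sum(w * labs.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_followup(labs: dict, threshold: float = 0.5) -> bool:
    """Flag patients whose predicted onset risk crosses the threshold."""
    return onset_risk(labs) >= threshold

# A patient with elevated HbA1c, BMI, and blood pressure is flagged early:
flag_for_followup({"hba1c": 8.5, "bmi": 32, "systolic_bp": 150})   # True
flag_for_followup({"hba1c": 5.0, "bmi": 22, "systolic_bp": 115})   # False
```

The point of the sketch is the workflow of acting months before onset; production systems use far richer features and clinically validated models.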

Google's DeepMind division is using artificial intelligence to help doctors analyze tissue samples to determine the likelihood that breast and other cancers will spread, and develop the best radiotherapy treatments.

Microsoft, Intel and other tech giants are also working with researchers to sort through data with AI to better understand and treat lung, breast and other types of cancer.

Google parent Alphabet's life sciences unit Verily has joined Apple in releasing a smartwatch for studies including one to identify patterns in the progression of Parkinson's disease. Amazon meanwhile offers medical advice through applications on its voice-activated artificial assistant Alexa.

IBM has been focusing on these issues with its Watson Health unit, which uses "cognitive computing" to help understand cancer and other diseases.

When IBM's Watson computing system won the TV game show Jeopardy in 2011, "there were a lot of folks in health care who said that is the same process doctors use when they try to understand health care," said Anil Jain, chief medical officer of Watson Health.

Systems like Watson, he said, "are able to connect all the disparate pieces of information" from medical journals and other sources "in a much more accelerated way."

"Cognitive computing may not find a cure on day one, but it can help understand people's behavior and habits" and their impact on disease, Jain said.

It's not just major tech companies moving into health.

Research firm CB Insights this year identified 106 digital health startups applying machine learning and predictive analytics "to reduce drug discovery times, provide virtual assistance to patients, and diagnose ailments by processing medical images."

Maryland-based startup Insilico Medicine uses so-called "deep learning" to shorten drug testing and approval times, down from the current 10 to 15 years.

"We can take 10,000 compounds and narrow that down to 10 to find the most promising ones," said Insilico's Qingsong Zhu.

Insilico is working on drugs for amyotrophic lateral sclerosis (ALS), cancer and age-related diseases, aiming to develop personalized treatments.

Finding depression

Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals.

A research paper by Florida State University's Jessica Ribeiro found it can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future.

Facebook uses AI as part of a test project to prevent suicides by analyzing social network posts.

And San Francisco's Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering "cognitive behavioral therapy" online, partly as a way to reach people wary of the social stigma of seeking mental health care.

New technologies are also offering hope for rare diseases.

Boston-based startup FDNA uses facial recognition technology matched against a database associated with over 8,000 rare diseases and genetic disorders, sharing data and insights with medical centers in 129 countries via its Face2Gene application.

Cautious optimism

Lynda Chin, vice chancellor and chief innovation officer at the University of Texas System, said she sees "a lot of excitement around these tools" but that technology alone is unlikely to translate into wide-scale health benefits.

One problem, Chin said, is that data from sources as disparate as medical records and Fitbits is difficult to access due to privacy and other regulations.

More important, she said, is integrating data in health care delivery where doctors may be unaware of what's available or how to use new tools.

"Just having the analytics and data gets you to step one," said Chin. "It's not just about putting an app on the app store."


© 2017 AFP



Medical records are still faxed between institutions, can you believe it? Getting second opinions is a nightmare, and you're lucky if your doctor even takes the time to look at what he gets.

Clinical trials are often where the best treatments can be found, and it is left up to the patient to find the right one and get the proper info to determine eligibility.

I do not want humans climbing around inside me if there's a chance a robot can do it.


The rest is here:

Artificial intelligence and the coming health revolution - Phys.Org

Posted in Artificial Intelligence | Comments Off on Artificial intelligence and the coming health revolution – Phys.Org

For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future – Motley Fool

Posted: at 7:17 pm

NVIDIA (NASDAQ:NVDA) stock has returned a scorching 225% over the one-year period through June 15. Investors have been enthused by the chipmaker's strong financial performance across its four target market platforms: gaming, data center, professional visualization, and automotive.

Gaming currently accounts for the largest percentage of revenue for the graphics chip specialist, but artificial intelligence (AI) is the future for the company -- and that's a great thing for investors because the burgeoning AI market is widely predicted to be beyond humongous.

Image source: Getty Images.

Here's how NVIDIA's business broke out in its most recently reported quarter, Q1 of fiscal 2018.

Platform                             Fiscal Q1 2018 Revenue    Percentage of Revenue
Gaming                               $1.027 billion            53%
Data center                          $409 million              21.1%
Professional visualization           $205 million              10.6%
Auto                                 $140 million              7.2%
OEM and IP* (not target platforms)   $156 million              8.1%
Total                                $1.937 billion            100%

Data source: NVIDIA. *OEM and IP = original equipment manufacturers and intellectual property.

NVIDIA's gaming business has some seasonality, with the fourth quarter of each fiscal year getting a boost from the holidays. That means the gaming business is somewhat more important even than the 53% figure above suggests. In Q4 fiscal 2017 and the full fiscal year, gaming accounted for 62% and 58.8%, respectively, of the company's revenue.

(NVIDIA doesn't break out operating income or any other form of earnings by platform, so we don't know the relative profitability of these platforms.)

Here's how fast each of NVIDIA's platforms grew in fiscal Q1 2018.

Platform                     Revenue Growth (YOY)
Gaming                       49%
Data center                  186%
Professional visualization   8%
Auto                         24%
OEM and IP                   (10%)

Data source: NVIDIA. YOY = year over year.

Data center revenue nearly tripled year over year last quarter, making the platform NVIDIA's most powerful growth engine. Since it now accounts for just 21% of NVIDIA's revenue, it might take a while for it to pass gaming, but it's on track to do so.
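As a quick sanity check, the figures reported above fit together arithmetically:

```python
# Arithmetic check on NVIDIA's reported fiscal Q1 2018 figures ($ millions).
total_revenue = 1937
data_center = 409
growth_yoy = 1.86  # the reported 186% year-over-year growth

share = data_center / total_revenue                # data center's slice of revenue
implied_year_ago = data_center / (1 + growth_yoy)  # back out the year-ago figure

print(f"{share:.1%}")            # 21.1%, matching the table above
print(round(implied_year_ago))   # 143, so revenue is 2.86x the year-ago quarter
```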

Here's how quickly the platform has grown as a percentage of NVIDIA's business:

Period           Data Center's Percentage of Total Revenue
Q1 Fiscal 2018   21.1%
Q1 Fiscal 2017   11%
Q1 Fiscal 2016   7.6%

Data source: NVIDIA.

In just two years, the data center segment has grown from just 7.6% of NVIDIA's total quarterly revenue to more than 21%. That phenomenal growth is being fueled by demand for NVIDIA's graphics processing unit-based deep-learning approach to artificial intelligence. On last quarter's earnings call, CFO Colette Kress said:

Driving growth was demand from cloud-service providers and enterprises building training clusters for web services, plus strong gains in high-performance computing, GRID graphics visualization, and our DGX-1 AI supercomputer. ...

All of the world's major Internet and cloud service providers now use NVIDIA Tesla-based GPU [graphics processing unit] accelerators: AWS, Facebook, Google, IBM, and Microsoft, as well as Alibaba, Baidu, and Tencent.

Autonomous cars are emerging as a major growth driver for NVIDIA. Image source: Getty Images.

Revenue from the automotive platform jumped 24% year over year in Q1, accounting for 7.2% of NVIDIA's total. Auto revenue has traditionally come from sales of Tegra processors for automakers' infotainment systems. In the last year, this platform has begun to profit from the technological shift toward driverless cars, which is in the early stages and promises to be both massive and long. Fully autonomous vehicles are expected to be legal on public roads across the United States within a decade.

A year ago, NVIDIA began shipping its DRIVE PX 2 AI car platform, which is a supercomputer for processing and interpreting the scads of data taken in by cameras, lidar, radar, and other sensors about the surroundings of semi-autonomous and fully autonomous cars. More than 225 automakers, suppliers, and other entities have started developing autonomous driving systems using it. Moreover, the company recently announced that the world's No. 1 automaker, Toyota,will use the DRIVE PX 2 platform to power its autonomous driving systems on vehicles slated for market introduction.

To wrap up, as Kress put it on the Q1 earnings call: "AI has quickly emerged as the single most powerful force in technology. And at the center of AI are NVIDIA GPUs."

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Beth McKenna has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Baidu, Facebook, and Nvidia. The Motley Fool has a disclosure policy.

Excerpt from:

For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future - Motley Fool

Posted in Artificial Intelligence | Comments Off on For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future – Motley Fool

Putting (machine) learning and (artificial) intelligence to work – The Register

Posted: at 7:17 pm

MCubed Blue-sky thinking is great, but if you're interested in what machine learning and AI mean for your business right now, you should really join us at MCubed London in October.

If you're just beginning to examine what machine learning, AI and advanced analytics can do for your organisation - or your competitors - we'll be covering the technologies and techniques that every business needs to know.

But we'll also be going deep on practice, with speakers from companies like Ocado, OpenTable and ASOS, as well as experts who've worked with real businesses to get projects up and running.

And of course, we'll be taking a close-up look at specific technologies and techniques, such as TensorFlow or Graph Analysis, in advanced conference sessions and our optional day-three workshops.

Throughout, our aim is to show you how you can apply tools and methodologies to allow your business or organisation to take advantage of ML, AI and advanced analytics to solve the problems you face today, as well as prepare you for tomorrow.

None of this happens in a vacuum, of course, so we'll also be looking at the organisational, ethical and legal implications of rolling out these technologies. And yes, we will be taking a look at robotics and driverless cars and whacking great lasers.

It's a mind- and business-expanding lineup, and you'll be pleased to know this all takes place at 30 Euston Square in Central London between October 9 and 11.

As well as being easy to get to, this is simply a really pleasant environment in which to enjoy the presentations and discuss them on the sidelines with your fellow attendees and the speakers. Of course, we'll ensure there's plenty of top-notch food and drink to fuel you through the formal and less formal parts of the programme.

Tickets will be limited, so if you want to ensure your place, head over to our website and snap up your early-bird ticket now.

Read the rest here:

Putting (machine) learning and (artificial) intelligence to work - The Register

Posted in Artificial Intelligence | Comments Off on Putting (machine) learning and (artificial) intelligence to work – The Register

Artificial intelligence and privacy engineering: Why it matters NOW – ZDNet

Posted: at 7:17 pm

As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.

Although privacy is not a new concept in computing, the growth of aggregated data magnifies privacy challenges and leads to extreme ethical risks such as unintentionally building biased AI systems, among many others.

Privacy and artificial intelligence are both complex topics. There are no easy or simple answers because solutions lie at the shifting and conflicted intersection of technology, commercial profit, public policy, and even individual and cultural attitudes.

Given this complexity, I invited two brilliant people to share their thoughts in a CXOTALK conversation on privacy and AI. Watch the video embedded above to participate in the entire discussion, which was Episode 229 of CXOTALK.

Michelle Dennedy is the Chief Privacy Officer at Cisco. She is an attorney, author of the book The Privacy Engineer's Manifesto, and one of the world's most respected experts on privacy engineering.

David Bray is Chief Ventures Officer at the National Geospatial-Intelligence Agency. Previously, he was an Eisenhower Fellow and Chief Information Officer at the Federal Communications Commission. David is one of the foremost change agents in the US federal government.

Here are edited excerpts from the conversation. You can read the entire transcript at the CXOTALK site.

Michelle Dennedy: Privacy by Design is a policy concept that had been hanging around for ten years in the networks, coming out of Ontario, Canada, with Ann Cavoukian, who was the privacy commissioner of Ontario at the time.

But in 2010, we introduced the concept at the Data Commissioner's Conference in Jerusalem, and over 120 different countries agreed we should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, [but] how you operationalize, how you run your business; how you organize around your business.

And, getting down to business on my side of the world, privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, "What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?"

And I'll double-click on the word "privacy." Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we bring down each one of those things and say, "What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?" [It's] if I can change the fabric down here, and our teams can build this in and make it as routinized and invisible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.

David Bray: What Michelle said about building beyond and thinking about networks gets to where we're at today, now in 2017. It's not just about individual machines making correlations; it's about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with [...] personally identifiable information.

For AI, it is just sort of the next layer of that. We've gone from individual machines, networks, to now we have something that is looking for patterns at an unprecedented capability, that at the end of the day, it still goes back to what is coming from what the individual has given consent to? What is being handed off by those machines? What are those data streams?

One of the things I learned when I was in Australia, as well as in Taiwan as an Eisenhower Fellow, is a question about, "What can we do to separate the setting of our privacy permissions, and what we want to be done with our data, from where the data is stored?" Because right now, we have this more simplistic model of, "We co-locate on the same platform," and then maybe you get an end-user agreement that's thirty or forty pages long, and you don't read it. Either you accept or you don't; if you don't accept, you won't get the service, and there's no opportunity to say, "I'm willing to have it used in this context, but not these contexts." And I think that means AI is going to raise questions about the context of when we need to start using these data streams.

Michelle Dennedy: We wrote a book a couple of years ago called "The Privacy Engineer's Manifesto," and in the manifesto, the techniques that we used are based on really foundational computer science.

Before we called it "computer science" we used to call it "statistics and math." But even thinking about geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn't build a bridge with just nails and not use hammers. You wouldn't think about putting something in the jungle that was built the same way as a structure that you would build in Arizona.

So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we're regulated primarily in the U.S., we'll leave the bankers off for a moment because they're different agencies, but the Federal Communications Commission, the Federal Trade Commission; so, we're thinking about commercial interests; we're thinking about communication. And communication is wildly imperfect why? Because it's humans doing all the communicating!

So, any time you talk about something that is as human and humane as processing information that impacts the lives and cultures and commerce of people, you're going to have to really over-rotate on context. That doesn't mean everyone gets a specialty thing, but it doesn't mean that everyone gets a car in any color that they want so long as it's black.

David Bray: And I want to amplify what Michelle is saying. When I arrived at the FCC in late 2013, we were paying for people to volunteer what their broadband speeds were in certain, select areas, because we wanted to see that they were getting the broadband speed that they were promised. And that cost the government money, and it took a lot of work, so we effectively wanted to roll out an app that could allow people to crowdsource and, if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, "Hi! I'm with the U.S. government! Would you like to have an app [...] for your broadband connection?" Maybe not that successful.

But using the principles that you said about privacy engineering and privacy by design: one, we made the app open source so people could look at the code. Two, we made it so that, when we designed the code, it didn't capture your IP address, and it didn't know who you were within a five-mile radius. So, it gave some fuzziness to your actual, specific location, but it was still good enough for informing whether or not broadband speed was as desired.

And once we did that; also, our terms and conditions were only two pages long; which, again, we dropped the gauntlet and said, "When was the last time you agreed to anything on the internet that was only two pages long?" Rolling that out, as a result, ended up being the fourth most-downloaded app behind Google Chrome because there were people that looked at the code and said, "Yea, verily, they have privacy by design."

And so, I think that this principle of privacy by design is making the recognition that one, it's not just encryption but then two, it's not just the legalese. Can you show something that gives people trust; that what you're doing with their data is explicitly what they have given consent to? That, to me, is what's needed for AI [which] is, can we do that same thing which shows you what's being done with your data, and gives you an opportunity to weigh in on whether you want it or not?
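The location "fuzziness" Bray describes can be sketched as snapping each coordinate to a coarse grid before it ever leaves the device. The grid size and code here are illustrative assumptions, not the FCC app's actual implementation.

```python
# Snap a GPS fix to a coarse grid so a report can't pinpoint the user.
GRID_DEG = 0.1  # roughly 7 miles of latitude per cell (illustrative choice)

def fuzz(lat: float, lon: float, grid: float = GRID_DEG) -> tuple:
    """Return the coordinate snapped to the nearest point on the grid."""
    def snap(x: float) -> float:
        return round(round(x / grid) * grid, 4)
    return (snap(lat), snap(lon))

# A speed-test report then carries only the coarse location:
report = {"download_mbps": 42.3, "location": fuzz(38.8977, -77.0365)}
# report["location"] == (38.9, -77.0): still useful for mapping broadband
# quality by area, useless for identifying a household.
```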

David Bray: So, I'll give the simple answer which is "Yes." And now I'll go beyond that.

So, shifting back to first what Michelle said, I think it is great to unpack that AI is many different things. It's not a monolithic thing, and it's worth deciding are we talking about simply machine learning at speed? Are we talking about neural networks? This matters because five years ago, ten years ago, fifteen years ago, the sheer amount of data that was available to you was nowhere near what it is right now, and let alone what it will be in five years.

If we're right now at about 20 billion networked devices on the face of the planet relative to 7.3 billion human beings, estimates are at between 75 and 300 billion devices in less than five years. And so, I think we're beginning to have these heightened concerns about ethics and the security of data. To Scott's question: because it's just simply we are instrumenting ourselves, we are instrumenting our cars, our bodies, our homes, and this raises huge amounts of questions about what the machines might make of this data stream. It's also just the sheer processing capability. I mean, the ability to do petaflops and now exaflops and beyond, I mean, that was just not present ten years ago.

So, with that said, the question of security. It's security, but also we may need a new word. I heard in Scandinavia, they talk about integrity and being integral. It's really about the integrity of that data: Have you given consent to having it used for a particular purpose? So, I think AI could play a role in making sense of whether data is processed securely.

Because the whole challenge is right now, for most of the processing we have to decrypt it at some point to start to make sense of it and re-encrypt it again. But also, is it being treated with integrity and integral to the individual? Has the individual given consent?

And so, one of the things raised when I was in conversations in Taiwan is the question, "Well, couldn't we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?" For example, it might say, "Okay, well I understand you have a data set stored with this platform, this other platform over here, and this platform over here. Are you willing to have that data be brought together to improve your housekeeping?" And you might say "no." It says, "Okay. But would you be willing to do it if your heart rate drops below a certain level and you're in a car accident?" And you might say "yes."

And so, the only way I think we could ever possibly do context is not going down a series of checklists and trying to check all possible scenarios. It is going to have to be a machine that can talk to us and have conversations about what we do and do not want to have done with our data.
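Bray's idea of per-context consent can be sketched as a small policy object: the user grants purpose-by-purpose permission for combining data sources, instead of one accept-all agreement. The names and structure here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    # Maps a purpose (e.g. "emergency") to the data sources allowed for it.
    allowed: dict = field(default_factory=dict)

    def grant(self, purpose: str, sources: set) -> None:
        self.allowed.setdefault(purpose, set()).update(sources)

    def permits(self, purpose: str, sources: set) -> bool:
        """True only if every requested source is allowed for this purpose."""
        return sources <= self.allowed.get(purpose, set())

policy = ConsentPolicy()
policy.grant("emergency", {"heart_rate", "car_telemetry", "location"})

# The same combination was declined for routine "housekeeping":
policy.permits("housekeeping", {"heart_rate", "location"})  # False
policy.permits("emergency", {"heart_rate", "location"})     # True
```

The design choice is that consent attaches to a (purpose, sources) pair rather than to a platform, which is exactly the separation from "where the data is stored" that Bray calls for.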

Michelle Dennedy: Madeleine Clare Elish wrote a paper called "Moral Crumple Zones," and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we've gotten better and lowered fatalities of known car crashes is using physics and geometry to design a cavity in various parts of the car where there's nothing there that's going to explode or catch fire, etc. as an impact crumple zone. So all the force and the energy goes away from the passenger and into the physical crumple zone of the car.

Madeleine is working on exactly what we're talking about. We don't know when it's unconscious or unintentional bias because it's unconscious or unintentional bias. But, we can design-in ethical crumple zones, where we're having things like testing for feeding, just like we do with sandboxing or we do with dummy data before we go live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.

I'll give you Watson as an example. Watson isn't a thing; Watson is a brand. The way the Watson computer beat Jeopardy contestants was by learning Wikipedia: processing mass quantities of stated data, at whatever level of authenticity, and finding the patterns in it.

What Watson cannot do is selectively forget. Your brain and your neural network are better at forgetting and ignoring data than they are at processing it. We're trying to make our computers simulate a brain, except that brains are good at forgetting. AI is not good at that, yet. So, take the tax code, which would fill three ballrooms if you printed it out on paper. You can feed it into an AI type of dataset, and you can train it on the known amounts of money someone should pay in a given context.

What you can't do, and what I think would be fascinating if we did do, is if we could wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? And we know the ones that get caught, but more importantly, how do [...] get caught? That's the stuff where I think you need to design in a moral and ethical crumple zone and say, "How do people actively use systems?"

The concept of the ghost in the machine: how do machines that are well-trained with data over time experience degradation? Either they're not pulling from datasets because the equipment is simply ... You know, they're not reading tape drives anymore, or it's not being fed from fresh data, or we're not deleting old data. There are a lot of different techniques here that I think have yet to be deployed at scale that I think we need to consider before we're overly relying [on AI], without human checks and balances, and processed checks and balances.

David Bray: I think it's going to have to be a staged approach. As a starting point, you almost need to have the equivalent of a human ombudsman - a series of people looking at what the machine is doing relative to the data that was fed in.

And you can do this in multiple contexts. It could just be internal to the company, and it's just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous.

Or, if you want to gain public trust, share some of the data and some of the outcomes, but abstract anything that's associated with any one individual and just say, "These types of people applied for loans. These types of loans were awarded," so people can make sure that the machine is not hinging on some bias that we don't know about.

Longer term, though, you've got to automate that ombudsman. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.

So really, what I'd see is not AI as one monolithic system: it may be one system that's making the decisions, and then another that's serving as the Jiminy Cricket, saying, "This doesn't make sense; these people are cheating," and pointing out those flaws in the system as well. We need the equivalent of a Jiminy Cricket for AI.


Read the original post:

Artificial intelligence and privacy engineering: Why it matters NOW - ZDNet

Posted in Artificial Intelligence | Comments Off on Artificial intelligence and privacy engineering: Why it matters NOW – ZDNet

How Google is powering its next-generation AI – T3

Posted: at 7:17 pm

If you paid any attention to Google's big developer conference earlier this year then you'll know artificial intelligence is about to get big - really big. It's already powering most of Google's apps, one way or another, and the other giants in tech are scrambling to keep up.

So what's all the fuss about? Here we're going to dig deeper into some of the AI announcements Google shared at I/O 2017, and explain how they're going to change the way you interact with your gadgets - from your smartphone to your music speakers.

In broad terms, artificial intelligence usually refers to a piece of software or a machine that simulates smart, human-like intelligence. Even if it's just a hollow robot being operated by a person behind a curtain, pretending to respond to your commands, that's still a kind of AI.

Within that you've got all kinds of branches, categories and approaches. As you may have noticed, different types of AI are better at different tasks: the AI responsible for beating humans at board games isn't necessarily going to be any good at holding up a conversation across an instant messenger app, for instance.

The type of AI Google is most interested in is known as machine learning, where computers learn for themselves based on huge banks of sample data. That could be learning what a picture of a dog looks like or learning how to drive a car, but whatever the end goal, there are two steps: training and inference.

During training, the system is fed with as much sample information as possible - so maybe millions of photos of dogs. The smart algorithms inside the AI then try and spot patterns in the images that suggest a dog, knowledge that's then applied in the inference stage. The end result is an app that recognises your pets in pictures.
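The two-phase train-then-infer loop described above can be sketched in a few lines. The toy nearest-centroid "model" below, and its two numeric features, are invented for illustration only; Google's real image models are far larger, but the shape of the workflow is the same: fit on labelled samples, then classify new inputs.

```python
# Minimal sketch of the training/inference split: train() learns one
# centroid (average feature vector) per label; infer() assigns a new
# input to the label with the nearest centroid. Illustrative only.

def train(samples):
    """Training phase: average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def infer(model, features):
    """Inference phase: pick the label whose centroid is closest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Toy "photos" reduced to two made-up numeric features.
training_set = [
    ([0.9, 0.8], "dog"), ([0.8, 0.9], "dog"),
    ([0.1, 0.2], "cat"), ([0.2, 0.1], "cat"),
]
model = train(training_set)
print(infer(model, [0.85, 0.75]))  # prints "dog"
```

A production system differs mainly in scale: millions of samples, learned features, and a neural network instead of centroids, but the training step and the inference step remain distinct.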

Artificial intelligence is already all over Google's apps, whether it's in spotting which email messages are likely to be spam in Gmail, or making recommendations about what you'd like to listen to next in Google Play Music. Any decision not made by a human could be construed as AI of some kind.

Another example is voice commands in the Google Assistant. When you ask it to do something, the sound waves created by your voice are compared to the knowledge Google's systems have gained from analysing huge numbers of other audio snippets, and the app then (hopefully) understands what you're saying.

Translating text from one language into another, working out which ads best match which sets of search results, all of these jobs that apps and computers do can be enhanced by AI. It's even popped up in the Smart Reply feature recently added to Gmail - short snippets of text you might want to use in response, based on an (anonymous) analysis of countless other emails.

And Google isn't slowing down, either. The company is busy working hard to improve its efforts in AI, as we saw at I/O earlier in the year - that means more efficient algorithms, a better end experience for users, and even AI that can teach itself to be better.

We've talked about machine learning but there's a branch of machine learning that Google engineers are specifically interested in called deep learning - that's where AI systems try and mimic the human brain to deal with vast amounts of information.

It's a machine learning technique made possible by the massive amounts of computational power now available to us. In the case of the dog pictures example we mentioned above, it means more layers of analysis, more subtasks making up the main task, and the system itself taking on more of the burden of working out the right answer (so figuring out what makes a dog picture a dog picture, rather than being told by programmers, in our earlier example).

Deep learning means machine learning that relies less on code and instructions written by humans, and deep learning systems are known as neural networks, named after the neurons in the human brain. On stage at Google I/O 2017 we saw a new system called AutoML, which is essentially AI teaching itself - whereas in the past small teams of scientists have had to choose the best coding route to produce the most effective neural nets, now computers can start to do it for themselves.
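The "layers of analysis" idea can be made concrete with a tiny hand-built network. The weights below are chosen by hand to compute XOR, purely to show the layered structure; a real deep learning system learns its weights from data rather than having them written in, and uses many more layers and neurons.

```python
# A two-layer "neural network" with hand-picked weights that computes XOR.
# Each layer transforms its input and passes the result on to the next.

def step(x):
    """A crude activation function: fire (1) if the input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One fully connected layer with a step activation."""
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: the first neuron fires on OR, the second on AND.
    hidden = layer([x1, x2], weights=[[1, 1], [1, 1]], biases=[-0.5, -1.5])
    # Output layer: OR and not-AND together give exclusive-or.
    (out,) = layer(hidden, weights=[[1, -1]], biases=[-0.5])
    return out

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The point of the example is structural: stacking simple layers lets the network compute a function (XOR) that no single neuron of this kind can, which is the intuition behind adding depth.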

On its servers, Google has an army of processing units called Cloud TPUs (Tensor Processing Units) designed to handle all this deep thinking. In fact, Google makes some of its AI available to all via the TensorFlow portal - developers can plug the smart algorithms and machine learning power into their own apps, if they know how to harness it. In return, Google gets the best AI minds and apps in the business using its own services.

There was no doubt during the I/O 2017 keynote that Google thinks AI will be the most important area of technology for the foreseeable future - more important, even, than how many megapixels it's going to pack into the camera of the Pixel 2 smartphone.

You can therefore expect to hear a lot more about Google and artificial intelligence in the future, from smart, automatic features in Gmail to map directions that know where you're going before you do. The good news is that it seems keen to bring everyone else along for the ride too, making its platforms and services available for others to make use of, and improving the level of AI across the board.

One of the biggest advances you'll see on your phone is the quality of the digital assistant apps, which are set to take on a more important role in the future: choosing the apps you see, the info you need, and much more. We've also been treated to a glimpse of an app called Google Lens, a smart camera add-on that means your phone will know what it's looking at and be able to make decisions at all times.

The AI systems being developed by Google go way beyond our own consumer gadgets and services too - they're being used in the medical profession as well, where deep learning systems can spot the spread of certain diseases much earlier than doctors can, because they've got so much more data to refer to.

More here:

How Google is powering its next-generation AI - T3

Posted in Artificial Intelligence | Comments Off on How Google is powering its next-generation AI – T3

AI and machine learning will make everyone a musician – Wired.co.uk

Posted: June 18, 2017 at 11:11 am

Music has always been at the cutting edge of technology, so it's no surprise that artificial intelligence and machine learning are pushing its boundaries.

As AI that can carry out elements of the creative process continues to evolve, should artists be worried about the machines taking over? Probably not, says Douglas Eck, research scientist at Google's Magenta.

"Musicians and artists are going to grab what works for them, and I predict that the music that will be made will be misunderstood by many people," Eck told WIRED at Sónar+D, a showcase of music, creativity and technology held this week in Barcelona.

At the event, which is twinned with the Sónar dance music festival, Google held an AI demonstration where Eck showed a series of basic yet impressive musical clips, produced using a machine learning model able to predict what note should come next.

The Magenta project has been running for just over a year and aims to discover whether machine learning can create "compelling" creative works. "Our research is focused on sequence generation," Eck says. "We're always looking to build models that can listen to what musicians are doing. From that we can extend a piece of music that a musician's created, or maybe add a voice."
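Magenta's actual models are neural networks trained on large corpora, but the core "predict the next note" idea can be illustrated with something far simpler: a first-order Markov chain that learns which notes tend to follow which. The melody and note names below are invented for the example.

```python
# A toy next-note predictor: count the transitions in a melody, then
# extend a seed phrase by sampling a plausible follower at each step.
# This is a stand-in for Magenta's sequence models, not their method.

import random
from collections import defaultdict

def train_transitions(melody):
    """For each note, record the notes observed to follow it."""
    followers = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        followers[current].append(nxt)
    return followers

def extend(phrase, followers, length, seed=0):
    """Extend a phrase by repeatedly sampling a likely next note."""
    rng = random.Random(seed)
    notes = list(phrase)
    for _ in range(length):
        options = followers.get(notes[-1])
        if not options:  # dead end: no observed follower
            break
        notes.append(rng.choice(options))
    return notes

melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
followers = train_transitions(melody)
print(extend(["C", "E"], followers, length=4, seed=42))
```

Every generated note is one that actually followed the previous note somewhere in the training melody, which is the same statistical intuition (learned from vastly more data) behind the models Eck describes.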

Just as the drum machine was loathed and feared by many when it first hit the mainstream in the 1970s, AI's role in the creation of art has sparked similar fears among critics. Eck, who admits that he was initially among the drum machine haters, explains that it took an entire generation of musicians to take the technology and figure out how to take it forward without putting good drummers out of work. He envisages a similar process of misunderstanding and eventual acceptance for AI-based music tools.

Given its flexible nature, it's likely that musicians and other artists of the future will all use AI differently, according to Freya Murray, program manager at Google Arts & Culture Lab.

"Some will collaborate with machine learning, others will use it as a tool, and for others it will be their creative process, and that's the case throughout the history of art," she told WIRED.

"In the creative process, it can provide that stimulus to take you in a direction you might not have gone before". AI will also have an important role in art education, says Murray.

Also at Sónar+D was Abbey Road Red, the legendary studio's tech incubator. Jon Eades, who heads up the scheme, agrees that the dawn of AI in music is a good thing.

"In the same way that Instagram has democratised the process of taking and editing photos, we'll see a similar progression towards making more people musical creators, using assertive AI to help people make good music," he told WIRED at a recent talk on AI at the London studio. "I don't think we'll see a complete replacement of composers with computers, but I do think there are going to be big shifts. We've already seen passable results in a lot of areas."


The move to AI-based music creation tools will be "as big a technological shift as the digitisation of music," he predicted, albeit cautiously.

Abbey Road Red recently announced its latest intake of startups for its mentoring scheme, including AI Music, a company that plans to use artificial intelligence to transform music "from a static process of a one-directional interaction, to one of a universal dynamic co-creation". Applications for the next wave of hopefuls are now open (until 7 July).

While machines may not replace composers anytime soon, they're certainly catching up. This week, a marimba-playing robot called Shimon composed its own music for the first time. Developed by the Georgia Institute of Technology, the musical bot was fed more than 5,000 complete songs and two million motifs, riffs and short passages of music, and then asked to produce its own composition.

However, Freya Murray says robo-composers simply can't compete with the human touch, explaining: "Our ability to imagine and create is at the core of what makes us human, and artists will continue to express the world we live in, and imagined worlds."

Read more here:

AI and machine learning will make everyone a musician - Wired.co.uk

Posted in Artificial Intelligence | Comments Off on AI and machine learning will make everyone a musician – Wired.co.uk

Artificial Intelligence can predict whether someone will attempt suicide two years later: Study – Hindustan Times

Posted: at 11:11 am

Your next doctor could very well be a bot. And bots, or automated programs, are likely to play a key role in finding cures for some of the most difficult-to-treat diseases and conditions.

Consider these examples:

- California researchers detected cardiac arrhythmia with 97 percent accuracy in wearers of an Apple Watch using the AI-based Cardiogram application, opening up early treatment options to avert strokes.

- Scientists from Harvard and the University of Vermont developed a machine learning tool - a type of AI that enables computers to learn without being explicitly programmed - to better identify depression by studying Instagram posts, suggesting "new avenues for early screening and detection of mental illness."

- Researchers from Britain's University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.

While technology has always played a role in medical care, a wave of investment from Silicon Valley and a flood of data from connected devices appear to be spurring innovation. "I think a tipping point was when Apple released its ResearchKit," said Forrester Research analyst Kate McCarthy, referring to a program letting Apple users enable data from their daily activities to be used in medical studies. McCarthy said advances in artificial intelligence have opened up new possibilities for personalized medicine adapted to individual genetics. "We now have an environment where people can weave through clinical research at a speed you could never do before," she said.


- Predictive analytics -

AI is better known in the tech field for uses such as autonomous driving, but it can also be used to glean new insights from existing data such as electronic health records and lab tests, says Narges Razavian, a professor at New York University's Langone School of Medicine who led a research project on predictive analytics for more than 100 medical conditions. "Our work is looking at trends and trying to predict (disease) six months into the future, to be able to act before things get worse," Razavian said.
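The kind of model such predictive-analytics work typically builds can be illustrated with a bare-bones logistic regression over two synthetic, normalised "lab values" predicting a binary outcome. Every feature, record and number below is invented for the example; nothing here reflects NYU's actual models or real patient data.

```python
# Illustrative logistic regression trained by plain stochastic gradient
# descent on six made-up records, each [feature_a, feature_b] -> 0/1.

import math

def sigmoid(z):
    if z < -60.0:  # guard against math.exp overflow for extreme inputs
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def fit(rows, labels, lr=0.5, epochs=2000):
    """Learn weights w and bias b by minimising logistic loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Predicted probability of the outcome for a new record."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Synthetic, pre-normalised records and whether the outcome occurred.
rows = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]
labels = [1, 1, 1, 0, 0, 0]
w, b = fit(rows, labels)
print(round(risk(w, b, [0.85, 0.85]), 2))  # probability for a high-risk profile
```

Real clinical models add many more features, regularisation, and careful validation, but the underlying idea is the same: turn historical records into a function that scores the risk of a future event.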

- NYU researchers analysed medical and lab records to accurately predict the onset of dozens of diseases and conditions including type 2 diabetes, heart or kidney failure and stroke. The project developed software now used at NYU which may be deployed at other medical facilities.

- Google's DeepMind division is using artificial intelligence to help doctors analyse tissue samples to determine the likelihood that breast and other cancers will spread, and to develop the best radiotherapy treatments.

- Microsoft, Intel and other tech giants are also working with researchers to sort through data with AI to better understand and treat lung, breast and other types of cancer.

- Google parent Alphabet's life sciences unit Verily has joined Apple in releasing a smartwatch for studies, including one to identify patterns in the progression of Parkinson's disease. Amazon, meanwhile, offers medical advice through applications on its voice-activated artificial assistant Alexa.

- Finding depression -

Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals. A research paper by Florida State University's Jessica Ribeiro found it can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future. Facebook uses AI as part of a test project to prevent suicides by analysing social network posts. And San Francisco's Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering cognitive behavioural therapy online, partly as a way to reach people wary of the social stigma of seeking mental health care.

New technologies are also offering hope for rare diseases. Boston-based startup FDNA uses facial recognition technology matched against a database associated with over 8,000 rare diseases and genetic disorders, sharing data and insights with medical centers in 129 countries via its Face2Gene application.

- Cautious optimism -

Lynda Chin, vice chancellor and chief innovation officer at the University of Texas System, said she sees a lot of excitement around these tools, but that technology alone is unlikely to translate into wide-scale health benefits. One problem, Chin said, is that data from sources as disparate as medical records and Fitbits is difficult to access due to privacy and other regulations. More important, she said, is integrating data in health care delivery, where doctors may be unaware of what's available or how to use new tools. "Just having the analytics and data gets you to step one," said Chin. "It's not just about putting an app on the app store."

Follow @htlifeandstyle for more

See the original post:

Artificial Intelligence can predict whether someone will attempt suicide two years later: Study - Hindustan Times

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence can predict whether someone will attempt suicide two years later: Study – Hindustan Times

Amazon just acquired a training ground for retail artificial intelligence research – GeekWire

Posted: June 17, 2017 at 2:09 pm


Amazon didn't acquire an iconic grocery store brand just for the quinoa: Whole Foods operates hundreds of retail data mines, and Amazon just married a world-class artificial intelligence team with one of the best sources of in-store consumer shopping data in the U.S.

There are lots of reasons, to be sure, why Amazon would want to spend $13.7 billion on Whole Foods. But the quintessential online retailer has been trying to establish a physical store presence for a few years now, and with one big check, it will now control more than 400 sources of prime data on consumer behavior.

Big-box grocery stores are easy sources of data on human purchasing behavior. Any modern retail outlet monitors activity such as customer flow through the aisles, brand affinity, and, of course, the customer loyalty cards that do as good a job of profiling a person as anything. After all, you are what you eat.

Obviously, Amazon already collects a ton of data on consumer purchasing behavior, but it's relatively new to groceries and brick-and-mortar retail in general. Whole Foods instantly gives Amazon a reliable source of the purchasing habits of well-off Americans, and that data can be used to train artificial intelligence models that will allow retailers to better predict demand and someday automate much of the labor involved in grocery retailing, no matter what the company said Friday about layoffs.

As Amazon's Swami Sivasubramanian explained at our GeekWire Cloud Tech Summit last week, Amazon has thousands of engineers focused on AI, and a lot of that work goes toward making Amazon's fulfilment centers more efficient and toward giving Amazon Web Services customers access to cutting-edge artificial intelligence models they'd never be able to build on their own.

Amazon just acquired a company that can improve its AI models on both of those counts. The logistics of shipping fresh food around the country are not easy, and that generates a ton of specialized data that Amazon can use to improve its own distribution strategies as well as build a cloud retail AI product for AWS customers.

Investing in big data products just isn't enough any more for retailers. Artificial intelligence models are going to dictate how products are sold over the next decade, and there are only a few companies with the expertise and data sets necessary to build those models at scale.

A few years down the road, if you're an established but aging grocery brand, say Safeway or Albertsons or Publix (try the subs), you'll either watch Amazon and Whole Foods eat your lunch with improved efficiency and incredible reach, or you'll become an AWS customer, because you'll need the retail AI products that could emerge from this deal to compete.

Visit link:

Amazon just acquired a training ground for retail artificial intelligence research - GeekWire

Posted in Artificial Intelligence | Comments Off on Amazon just acquired a training ground for retail artificial intelligence research – GeekWire

Three barriers to artificial intelligence adoption – ModernMedicine

Posted: at 2:09 pm

Artificial intelligence (AI) will play a major role in healthcare digital transformation, according to new research.

The study, "Human Amplification in the Enterprise," surveyed more than 1,000 business leaders from U.S. organizations with more than 1,000 employees and $500 million or more in annual revenue, across a range of sectors.

Survey respondents from the healthcare sector indicated that the following AI-supported activities will play a significant role in their transformations: machine learning (77%), robotic automation (61%), institutionalization of enterprise knowledge using AI (59%), cognitive AI-led processes or tasks (50%) and automated predictive analytics (47%).

The research also found that almost half of the respondents in healthcare indicate their organizations' priority for automation initiatives is to automate processes to:


"This suggests that many processes in the healthcare sector are still manual-driven and produce a high volume of errors as a result," says Sanjay Dalwani, vice president and head of hospital and healthcare at Infosys.

The survey found that 73% of respondents want AI to process complete structured and unstructured data and to automate insights-led decisions. It also found that 72% want AI to provide human-like recommendations for automated customer support/advice.

More widely, healthcare sector respondents shared that the top three digital transformation goals of their organizations are to build an innovation culture (65%), build a mobile enterprise (63%) and become more agile and customer-centric (58%).

"The findings underscore that healthcare organizations are well on their way with starting to work alongside AI to selectively use it to inform and improve patient care," Dalwani says. "However, in this process, it's pertinent that the industry establishes ethical standards as well as metrics to assess the performance of AI systems."

The study also indicates that as automation becomes more widely adopted in healthcare, employees will be retrained for higher-value work, according to Dalwani. "Healthcare organizations can benefit from redirecting a section of this talent to managing and ensuring ethical use of AI," he says.

Even though the majority of enterprises in the healthcare and life sciences sector are undergoing digital transformation, few have fully accomplished their goals. This is due to three primary reasons, according to Dalwani:

Lack of time (64%)

Lack of collaboration amongst teams (63%)

Lack of data-led insights on demand (61%)

Furthermore, when healthcare IT professionals were asked about the challenges of adopting more AI-supported activities as a component of their digital transformation initiatives, 78% of respondents indicated a lack of financial resources, 78% stated a lack of in-house knowledge and skills around the technology, and 66% said there is a lack of clarity regarding the value proposition of AI, according to the study.

"This suggests that the healthcare IT sector still has a long way to go in terms of AI buy-in," Dalwani says. "Until more senior-level IT decision-makers are bought into the benefits of bringing AI to healthcare, teams won't have access to the proper resources to support full-scale implementations."

Visit link:

Three barriers to artificial intelligence adoption - ModernMedicine

Posted in Artificial Intelligence | Comments Off on Three barriers to artificial intelligence adoption – ModernMedicine
