
Category Archives: Ai

True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet

Posted: August 9, 2017 at 5:13 am

Marc Vontobel, CTO & Pascal Kaufmann, CEO, Starmind

Artificial intelligence is stuck today because companies are likening the human brain to a computer, according to Swiss neuroscientist and co-founder of Starmind Pascal Kaufmann. However, the brain does not process information, retrieve knowledge, or store memories like a computer does.

When companies claim to be using AI to power "the next generation" of their products, what they are unknowingly referring to is the intersection of big data, analytics, and automation, Kaufmann told ZDNet.

"Today, so called AI is often just the human intelligence of programmers condensed into source code," said Kaufmann, who worked on cyborgs previously at DARPA.

"We shouldn't need 300 million pictures of cats to be able to say whether something is a cat, cow, or dog. Intelligence is not related to big data; it's related to small data. If you can look at a cat, extract the principles of a cat like children do, then forever understand what a cat is, that's intelligence."

He even said that it's not "true AI" that led to AlphaGo -- a creation of Google subsidiary DeepMind -- mastering what is revered as the world's most demanding strategy game, Go.

The technology behind AlphaGo was able to look at 10 to 20 potential future moves and identify those with the best statistical chance of success, Kaufmann said, and so the test was one of rule-based strategy rather than artificial intelligence.

The ability for a machine to strategise outside the context of a rule-based game would reflect true AI, according to Kaufmann, who believes that AI will cheat without being programmed not to do so.

Additionally, the ability to automate human behaviour or labour is not necessarily a reflection of machines getting smarter, Kaufmann insisted.

"Take a pump, for example. Instead of collecting water from the river, you can just use a pump. But that is not artificial intelligence; it is the automation of manual work ... Human-level AI would be able to apply insights to new situations," Kaufmann added.

While Facebook's plans to build a brain-computer interface and Elon Musk's plans to merge the human brain with AI have left people wondering how close we are to developing true AI, Kaufmann believes the "brain code" needs to be cracked before we can really advance the field. He said this can only be achieved through neuroscientific research.

Earlier this year, founder of DeepMind Demis Hassabis communicated a similar sentiment in a paper, saying the fields of AI and neuroscience need to be reconnected, and that it's only by understanding natural intelligence that we can develop the artificial kind.

"Many companies are investing their resources in building faster computers ... we need to focus more on [figuring out] the principles of the brain, understand how it works ... rather than just copy/paste information," Kaufmann said.

Kaufmann admitted he doesn't have all the answers, but finds it "interesting" that high-profile entrepreneurs such as Musk and Mark Zuckerberg, neither of whom has an AI or neuroscience background, have such strong and opposing views on AI.

Musk and Zuckerberg slung mud at each other in July, with the former warning of "evil AI" destroying humankind if not properly monitored and regulated, while the latter spoke optimistically about AI contributing to the greater good, such as diagnosing diseases before they become fatal.

"One is an AI alarmist and the other makes AI look charming ... AI, like any other technology, can be used for good or used for bad," said Kaufmann, who believes AI needs to be assessed objectively.

In the interim, Kaufmann believes systems need to be designed so that humans and machines can work together, not against each other. For example, Kaufmann envisions a future where humans wear smart lenses -- comparable to the Google Glass -- that act as "the third half of the brain" and pull up relevant information based on conversations they are having.

"Humans don't need to learn stuff like which Roman killed the other Roman ... humans just need to be able to ask the right questions," he said.

"The key difference between human and machine is the ability to ask questions. Machines are more for solutions."

Kaufmann admitted, however, that humans don't know how to ask the right questions a lot of the time, because we are taught to remember facts in school, and those who remember the most facts are the ones who receive the best grades.

He believes humans need to be educated to ask the right questions, adding that the question is 50 percent of the solution. The right questions will not only allow humans to understand the principles of the brain and develop true AI, but will also keep us relevant even when AI systems proliferate, according to Kaufmann.

If we want to slow down job loss, AI systems need to be designed so that humans are at their centre, Kaufmann said.

"While many companies want to fully automate human work, we at Starmind want to build a symbiosis between humans and machines. We want to enhance human intelligence. If humans don't embrace the latest technology, they will become irrelevant," he added.

The company claims its self-learning system autonomously connects and maps the internal know-how of large groups of people, allowing employees to tap into their organisation's knowledge base or "corporate brain" when they have queries.

Starmind platform

Starmind is integrated into existing communication channels -- such as Skype for Business or a corporate browser -- eliminating the need to change employee behaviour, Kaufmann said.

Questions typed in the question window are answered instantly if an expert's answer is already stored in Starmind, and new questions are automatically routed to the right expert within the organisation, based on skills, availability patterns, and willingness to share know-how. All answers enhance the corporate knowledge base.
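
Starmind has not published how this routing works, but a toy sketch helps make the idea concrete: score each candidate expert on topic match, availability, and willingness to share, then send the question to the highest scorer. Everything below -- the Expert fields, the weights, and the names -- is an invented illustration, not Starmind's algorithm.

```python
# Hypothetical illustration only -- Starmind has not published its routing algorithm.
# Score each expert on topic match, availability, and willingness to share, then
# route the question to the highest scorer. All names and weights are invented.
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    skills: set          # topics the expert is known for
    availability: float  # 0.0 (never responds) .. 1.0 (always responds)
    willingness: float   # 0.0 .. 1.0, learned from past sharing behaviour

def route_question(topics, experts):
    """Pick the expert most likely to give a useful, timely answer."""
    def score(e):
        skill_match = len(topics & e.skills) / max(len(topics), 1)
        return 0.6 * skill_match + 0.25 * e.availability + 0.15 * e.willingness
    return max(experts, key=score)

experts = [
    Expert("Ana", {"sap", "invoicing"}, availability=0.9, willingness=0.7),
    Expert("Ben", {"invoicing", "tax"}, availability=0.4, willingness=0.9),
]
print(route_question({"invoicing", "tax"}, experts).name)  # -> Ben
```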

"Our vision is if you connect thousands of human brains in a smart way, you can outsmart any machine," Kaufmann said.

On how this is different to asking a search engine a question, Kaufmann said Google is basically "a big data machine" and mines answers to questions that have been already asked, but is not able to answer brand new questions.

"The future of Starmind is we actually anticipate questions before they're even asked because we know so much about the employee. For example, we can say if you are a new hire and you consume a certain piece of content, there will be a 90 percent probability that you will ask the following three questions within the next three minutes and so here are the solutions."

Starmind is currently used across more than 40 countries by organisations such as Accenture, Bayer, Nestlé, and Telefonica Deutschland.

While Kaufmann thinks it is important at this point in time to enhance human intelligence rather than replicate it artificially, he does believe AI will eventually substitute humans in the workplace. But unlike the grim picture painted by critics, he doesn't think it's a bad thing.

"Why do humans need to work at all? I look forward to all my leisure time. I do not need to work in order to feel like a human," Kaufmann said.

When asked how people would make money and sustain themselves, Kaufmann said society does not need to be ruled by money.

"In many science fiction scenarios, they do not have money. When you look at the ant colonies or other animals, they do not have cash," Kaufmann said.

Additionally, if humans had continuous access to intelligent machines, Kaufmann said "the acceleration of human development will pick up" and "it will give rise to new species".

"AI is the ultimate tool for human advancement," he firmly stated.

Link:

True AI cannot be developed until the 'brain code' has been cracked: Starmind - ZDNet

Posted in Ai | Comments Off on True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet

Salesforce AI helps brands track images on social media | TechCrunch – TechCrunch

Posted: at 5:13 am

Brands have long been able to search for company mentions on social media, but they've lacked the ability to search for pictures of their logos or products in an easy way. That's where Salesforce's latest Einstein artificial intelligence feature comes into play.

Today the company introduced Einstein Vision for Social Studio, which provides a way for marketers to search for pictures related to their brands on social media in the same way they search for other mentions. The product takes advantage of a couple of Einstein artificial intelligence algorithms including Einstein Image Classification for image recognition. It uses visual search, brand detection and product identification. It also makes use of Einstein Object Detection to recognize objects within images including the type and quantity of object.

AI has gotten quite good at perception and cognition tasks in recent years. One result of this has been the ability to train an algorithm to recognize a picture. With cheap compute power widely available and loads of pictures being uploaded online, it provides a perfect technology combination for better image recognition.
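
Salesforce has not published Einstein's internals, but the general recipe -- run a photo through a network pretrained on a large labelled image set and read off the most likely classes -- can be sketched with an off-the-shelf model. The snippet below uses a stock Keras ResNet50 against generic ImageNet classes purely as an illustration; "post_photo.jpg" is a placeholder, and real logo or product detection would need a custom-trained model.

```python
# Generic illustration, not the Einstein API: classify a social media photo with a
# stock ImageNet model. "post_photo.jpg" is a placeholder; real logo/product
# detection would require a model trained on brand-specific images.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")                      # pretrained on ~1.3M labelled photos
img = image.load_img("post_photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(label, round(float(prob), 3))                   # top-3 guesses for the photo's content
```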

Rob Begg, VP of product marketing for social and advertising products at Salesforce, says it's about letting the machine loose on tasks for which it's better suited. "If you think of it from a company point of view, there is a huge volume of tweets and [social] posts. What AI does best is help surface and source the ones that are relevant," he says.

As an example, he says there could be thousands of posts about cars, but only a handful of those would be relevant to your campaign. AI can help find those much more easily.

Begg sees three possible use cases for this tool. First, it could provide better insight into how people are using your products. Second, it could provide a way to track brand displays hidden within pictures online. And finally, it could let you find out when influencers such as actors or athletes are using your products.

The product comes trained to recognize two million logos, 60 scenes (such as an airport), 200 foods, and 1,000 objects. That should be enough to get many companies started. Customizing isn't available in the first release, so if you have a logo or object not included out of the box, you will need to wait for a later version to be able to customize the content.

Begg says it should be fairly easy for marketers used to using Social Studio to figure out how to incorporate the visual recognition tools into their repertoire. The new functionality should be available immediately to Salesforce Social Studio users.

Follow this link:

Salesforce AI helps brands track images on social media | TechCrunch - TechCrunch

Posted in Ai | Comments Off on Salesforce AI helps brands track images on social media | TechCrunch – TechCrunch

REVEALED: AI is turning RACIST as it learns from humans – Express.co.uk

Posted: at 5:13 am

In parts of the US, when a suspect is taken in for questioning they are given a computerised risk assessment which works out the likelihood of the person reoffending.

A judge can then use this data when giving his or her verdict.

However, an investigation has revealed that the artificial intelligence behind the software exhibits racist tendencies.

Reporters from ProPublica obtained more than 7,000 test results from Florida in 2013 and 2014 and analysed the reoffending rate among the individuals.

The suspects are asked a total of 137 questions by the AI system Correctional Offender Management Profiling for Alternative Sanctions (Compas), including questions such as "Was one of your parents ever sent to jail or prison?" or "How many of your friends/acquaintances are taking drugs illegally?", with the computer generating its results at the end.

Overall, the AI system claimed black people (45 per cent) were almost twice as likely as white people (24 per cent) to reoffend.
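
A rough sketch of the kind of audit ProPublica ran: compare, for each group, how often people were labelled high risk yet did not go on to reoffend. The records below are invented placeholders, not the Florida data.

```python
# Made-up placeholder records, not the Florida data: compare, per group, how often
# people were labelled "high risk" yet did not go on to reoffend (false positives).
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, True), ("white", False, False), ("white", True, True),
]

stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                  # only people who did NOT reoffend
        stats[group]["negatives"] += 1
        if high_risk:
            stats[group]["fp"] += 1     # flagged high risk despite not reoffending

for group, s in stats.items():
    rate = s["fp"] / s["negatives"] if s["negatives"] else 0.0
    print(f"{group}: false positive rate = {rate:.0%}")
```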

In one example outlined by ProPublica, risk scores were provided for a black suspect and a white suspect, both of whom were facing drug possession charges.

The white suspect had prior offences of attempted burglary, and the black suspect had a prior offence of resisting arrest.

Seemingly giving no indication as to why, the black suspect was given a higher chance of reoffending and the white suspect was considered low risk.

But, over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.

However, researchers warn the problem does not lie with robots, but with the human race as AI uses machine learning algorithms to pick up on human traits.

Joanna Bryson, a researcher at the University of Bath, told the Guardian: "People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things."

This is not an isolated incident either.

Microsoft's TayTweets AI chatbot, which was designed to learn from users, was unleashed on Twitter last year.

However, it almost instantly turned to anti-semitism and racism, tweeting: "Hitler did nothing wrong" and "Hitler was right I hate the Jews."

See the original post:

REVEALED: AI is turning RACIST as it learns from humans - Express.co.uk

Posted in Ai | Comments Off on REVEALED: AI is turning RACIST as it learns from humans – Express.co.uk

DOE Backs AI for Clean Tech Investors – IEEE Spectrum

Posted: at 5:13 am

The U.S. Department of Energy wants to make investing in energy technology easier, less risky, and less expensive (for the government, at least).

A new initiative by the DOE's Office of Energy Efficiency & Renewable Energy (EERE) is looking for ideas on how to reduce barriers to private investment in energy technologies. Rho AI, one of 11 companies awarded a grant through the EERE's US $7.8-million program called Innovative Pathways, plans to use artificial intelligence and data science to efficiently connect investors to startups. By using natural language processing tools to sift through publicly available information, Rho AI will build an online network of potential investors and energy technology companies, sort of like a LinkedIn for the energy sector. The Rho AI team wants to develop a more extensive network than any individual is capable of having on their own, and they're relying on artificial intelligence to make smarter connections faster than a human could.

"You're limited by the human networking capability when it comes to trying to connect technology and investment," says Josh Browne, co-founder and vice president of operations at Rho AI. "There's only so many hours in a day and there's only so many people in your network."

Using the US $750,000 it received from the DOE, Rho AI has just two years to build, test, and prove the efficacy of its system. The two-year timeline for demonstrating proof of concept is a stipulation of the grant. With this approach, the DOE hopes to streamline the underlying process for getting new energy technologies to the market, instead of investing in particular companies.

"It's a fairly small grant, relative to some of the larger grants where they invest in the actual hard technology," Browne says. "In this case, they're investing in ways to unlock money to invest in hard technology."

Rho AI's database will not only contain information about energy technology companies and investor interests, it will also track where money is coming from and who it's going to in the industry. Browne imagines the interface will look something like a Bloomberg terminal.

To build the database, Rho AI will use Google's TensorFlow and the Natural Language Toolkit -- tools that can read and analyze human language -- to scan public documents such as Securities and Exchange Commission filings and news articles on energy companies. The system will then use software tools that help analyze and visualize patterns in data, such as MUXviz and NetMiner, to understand how people and companies are connected.
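
As a rough illustration of the document-mining step (not Rho AI's actual pipeline), one could pull organisation and person names out of filing text with NLTK and link entities that appear in the same document into a simple graph. The documents, entity names, and weighting below are placeholders.

```python
# Rough sketch of the document-mining step, not Rho AI's actual pipeline.
# Pull organisation/person names out of filing text with NLTK and link entities
# that co-occur in the same document into a simple graph.
import itertools
import networkx as nx
import nltk

# Cover both old and new NLTK resource names; missing ones are skipped quietly.
for pkg in ("punkt", "punkt_tab", "averaged_perceptron_tagger",
            "averaged_perceptron_tagger_eng", "maxent_ne_chunker",
            "maxent_ne_chunker_tab", "words"):
    nltk.download(pkg, quiet=True)

def extract_entities(text):
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))
    return {
        " ".join(word for word, _ in subtree.leaves())
        for subtree in tree
        if hasattr(subtree, "label") and subtree.label() in ("ORGANIZATION", "PERSON")
    }

# Placeholder snippets standing in for SEC filings and news articles.
documents = [
    "Acme Energy raised Series B funding led by Carmichael Roberts.",
    "Carmichael Roberts joined the board of GridStore Batteries.",
]

graph = nx.Graph()
for doc in documents:
    for a, b in itertools.combinations(sorted(extract_entities(doc)), 2):
        graph.add_edge(a, b)   # co-mention in one document = a candidate connection

print(list(graph.edges()))
```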

In order to measure how well it's helping investors and emerging clean technology companies find better business partners faster, Rho AI will compare the machine-built network with the real professional network of Carmichael Roberts, a leading venture capitalist in clean technology.

"This tool is intended to emulate and perhaps surpass the networking capability of a leading clean tech venture capitalist," Browne says. "It should be able to match their network, and it should be able to very rapidly be ten times their network."

Rho AI's program should create a longer, more comprehensive list of possible investments than Roberts can -- within seconds. The intention is for the final product to be robust enough that members of the private sector could and would adopt it after one year.

"If Rho AI is able to be successful in what they're building, that will be in some sense self-scaling," says Johanna Wolfson, director of the tech-to-market program at the DOE.

In other words, Rho AI could grow on its own and the industry could start seeing the effects of these connections. Investors and clean energy technology companies could find each other directly, while reducing the burden on the government to invest so much in energy innovation.

"Improving the underlying pathway for getting new energy technology to market actually can be done for relatively small dollar amounts, relative to what the government sometimes supports, in ways that can be catalytic, but sustained by the private sector," said Wolfson.

Editor's note: This post was corrected on August 8 to reflect the specifications of the DOE's grant.

Go here to read the rest:

DOE Backs AI for Clean Tech Investors - IEEE Spectrum

Posted in Ai | Comments Off on DOE Backs AI for Clean Tech Investors – IEEE Spectrum

An artificial intelligence researcher reveals his greatest fears about the future of AI – Quartz

Posted: August 8, 2017 at 4:12 am

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems -- the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant -- engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster -- sinking a ship, blowing up two shuttles, and spreading radioactive contamination across Europe and Asia -- a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm, and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master; these are not world-changing consequences. Indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
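
The loop described above -- evaluate, select the best, mutate, repeat -- can be shown in a few lines. This toy sketch evolves plain weight vectors against a made-up fitness function; it illustrates the general neuroevolution recipe, not the author's research code.

```python
# Toy illustration of the evolutionary loop described above, not the author's
# research code. Each "creature" is just a weight vector; the task is a stand-in
# fitness function rather than a real virtual environment.
import random

TARGET = [0.2, -0.5, 0.9, 0.1]          # pretend "ideal brain" for a toy task

def fitness(weights):
    # Higher is better: negative squared distance from the target behaviour.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.1):
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # selection: keep the best
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print(round(fitness(best), 4), [round(w, 2) for w in best])     # close to TARGET after 200 generations
```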

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution -- and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty, and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected -- and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self -- together with the rest of humanity -- may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator, and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge, or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time -- somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things -- as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article.

Read more:

An artificial intelligence researcher reveals his greatest fears about the future of AI - Quartz

Posted in Ai | Comments Off on An artificial intelligence researcher reveals his greatest fears about the future of AI – Quartz

AI can now detect anthrax which could help the fight against … – The Verge

Posted: at 4:12 am

In an effort to combat bioterrorism, scientists in South Korea have trained artificial intelligence to speedily spot anthrax. The new technique is not 100 percent accurate yet, but it's orders of magnitude faster than our current testing methods. And it could revolutionize how we screen mysterious white powders for the deadly bioweapon.

Researchers at the Korea Advanced Institute of Science and Technology combined a detailed imaging technique called holographic microscopy with artificial intelligence. The algorithm they created can analyze images of bacterial spores to identify whether they're anthrax in less than a second. It's accurate about 96 percent of the time, according to a paper published last week in the journal Science Advances.

Anthrax can kill quickly, if left untreated

Anthrax is an infection caused by the bacteria Bacillus anthracis, which lives in soil. (Both the infection and the bacteria are often referred to as anthrax.) People can accidentally get anthrax infections when they handle the skin or meat of infected animals. But anthrax can also be a dangerous bioweapon: in 2001, anthrax spores sent in the mail infected 22 people and killed five of them.

Once the spores enter the body, they germinate and multiply, causing a flu-like illness that poisons the blood. At least 85 percent of people infected by inhaling the spores die if left untreated, sometimes within just one to two days after symptoms appear. (Anthrax infections of the skin, by contrast, tend to be less fatal.) For people especially at risk of contracting anthrax, like lab workers or people who work with animal hair, there's a vaccine. For the rest of us, there are antibiotics -- but these work best when they're started as soon as possible after exposure.

It's important to detect anthrax fast

So it's important to detect anthrax fast. Right now, one of the most common methods is to analyze the genetic material of the spores or, once someone is infected, of the bacteria found in infected tissue. But that typically requires giving the spores a little time to multiply in order to yield enough genetic material to analyze. "It's still going to take the better part of a day with the most rapid approaches to get a result," says bacteriologist George Stewart at the University of Missouri, who has also developed an anthrax detector and was not involved in this study.

In search of a quicker screening technique, the study's lead author, physicist YongKeun Park, teamed up with South Korea's Agency for Defense Development. "The goal is to be prepared in case North Korea is developing anthrax as a bioweapon," he says.

Park turned to an imaging technique called holographic microscopy: unlike conventional microscopes, which can only capture the intensity of the light scattering off an object, a holographic microscope can also capture the direction that light is traveling. Since the structure and makeup of a cell can change how light bounces off of it, the researchers suspected that the holographic microscope might capture key, but subtle, differences between spores produced by anthrax and those produced by closely related, but less toxic species.

The AI could ID the anthrax spores within seconds

Park and his team then trained a deep learning algorithm to spot these key differences in more than 400 individual spores from five different species of bacteria. One species was Bacillus anthracis, which causes anthrax, and four were closely related doppelgängers. The researchers didn't tell the neural network exactly how to spot the different species -- the AI figured that out on its own. After some training, it could distinguish the anthrax spores from the non-anthrax doppelgänger species about 96 percent of the time.
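
The paper's exact architecture is not reproduced in the article, but a minimal five-class image classifier of the same general shape can be sketched in Keras. The image size, layer sizes, and random placeholder data below are assumptions for illustration only.

```python
# Illustrative five-class spore classifier in Keras. The paper's real architecture,
# image format, and data are not reproduced here; image size and the random
# placeholder arrays below are assumptions for demonstration only.
import numpy as np
from tensorflow.keras import layers, models

NUM_SPECIES = 5   # B. anthracis plus four look-alike Bacillus species

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),            # single-channel microscopy image
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# ~400 labelled spore images, matching the study's training set size -- random here.
x = np.random.rand(400, 64, 64, 1).astype("float32")
y = np.random.randint(0, NUM_SPECIES, size=400)
model.fit(x, y, epochs=3, validation_split=0.2, verbose=0)
```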

The technique isn't perfect, and as a tool intended to detect bioweapons, it has to be. "The drawback is that the accuracy is lower than conventional methods," Park says. There are also multiple strains of each of the bacteria species analyzed -- but the machine was trained on only one strain per species. Subtle differences between the strains might be able to throw off the algorithm, Stewart says. Still, the new technique is so rapid that it could come in handy. "It doesn't require culturing organisms, it doesn't require extracting DNA, it doesn't require much of anything other than being able to visualize the spores themselves," Stewart says.

It could enhance our preparation for this kind of biological threat.

Next, Park wants to feed the neural network more spore images, in order to boost accuracy. In the meantime, the method could be used as a pre-screening tool to rapidly determine whether a white powder that people have been exposed to is anthrax, and if they should start antibiotics. A slower, more accurate method could then confirm the results.

"This paper will not change everything," Park says, but it's one step toward a method that can quickly detect anthrax. "It could enhance our preparation for this kind of biological threat."

Go here to see the original:

AI can now detect anthrax which could help the fight against ... - The Verge

Posted in Ai | Comments Off on AI can now detect anthrax which could help the fight against … – The Verge

SK Telecom launches portable version of AI speaker – ZDNet

Posted: at 4:12 am

SK Telecom has launched an outdoor version of its AI speaker in South Korea.

The NUGU Mini is a portable version of NUGU, the mobile carrier's AI speaker for the home that has sold over 150,000 units since launching last year.

NUGU, the name of both the speaker and the AI platform, uses a cloud-based deep-learning framework that increases its speech recognition accuracy over time.

The mobile carrier said over 130 million conversations have been stored in its cloud server. The voice recognition rate for both children and regional Korean dialects has significantly improved since launch, it said.

NUGU Mini weighs 219 grams -- almost a fifth of NUGU's 1,030 grams -- and can connect to external speakers and a battery that lasts over four hours.

Users can turn on Melon, Korea's number one music streamer, use smart home features offered by SK Telecom, and control their set-top boxes.

They can also order food, manage schedules, check for weather and traffic information, enquire about currency exchange rates, and register to visit local bank branches.

NUGU Mini will cost 99,000 won ($87) but SK Telecom will offer it for 49,900 won for three months when sales begin August 11.

AI speakers and speech recognition have been a big trend in the Korean tech scene this year. Kakao, which owns the country's biggest chat app KakaoTalk, launched the Kakao Mini, its own AI speaker, and is also cooperating with compatriot car maker Hyundai for in-car speech recognition.

South Korea's largest search giant Naver and its chat app subsidiary Line also partnered with Qualcomm to make AI-based Internet of Things devices.

Read more:

SK Telecom launches portable version of AI speaker - ZDNet

Posted in Ai | Comments Off on SK Telecom launches portable version of AI speaker – ZDNet

Legal Services Innovator Axiom Unveils Its First AI Offering – Above the Law

Posted: at 4:12 am

Since its founding in 2000 as an alternative provider of legal services to large corporations, Axiom has grown to have more than 1,500 lawyers and 2,000 employees across three continents, serving half the Fortune 100. With a reputation for innovation, it describes itself as a provider of tech-enabled legal services.

Given that description, it would seem inevitable that Axiom would bring artificial intelligence into the mix of the services it offers. Now it has. This week, it announced the launch of AxiomAI, a program that aims to leverage AI to improve the efficiency and quality of Axiom's contract work.

AxiomAI has two components, Paul Carr, senior executive overseeing Axiom's AI efforts, told me during an interview Friday. One is research, development and testing of AI tools for legal services. The other is deploying AI tools within Axiom's standard workflows as testing proves them ready.

The first such deployment will come later this month, as Axiom embeds Kira Systems' machine-learning contract analysis technology into its M&A diligence and integration offering. In an M&A deal, which can require review of thousands of corporate contracts, Kira automates the process of identifying key provisions such as change of control and assignment.

"In the context of M&As, the AI will be invisible to our clients," Carr said. "They know they have to understand the risks that may be in those agreements. They need someone to sort that out -- which agreements apply, what's in them -- in a very accurate way. And they need actionable recommendations, very specific recommendations. That's what we deliver today, but we'll deliver it better and faster using AI behind the scenes."

Beyond this immediate deployment, AxiomAI will encompass a program of ongoing research and testing of AI's applicability to the delivery of legal services. In fact, it turns out that Axiom has quietly been performing this research for four years, including partnering with leading experts and vendors in the field of machine learning.

"We've been watching this space for a while," Carr said. "We've been testing really actively, running proofs of concept of various AI tools over the last four years. At a fundamental level, we do believe that for a lot of legal work, AI will have really important applications and will change legal workflows into the future."

The focus of Axiom's AI research is, as Carr put it, "all things contracting," from creating single contracts to applying analytics to collections of contracts. And the type of AI on which it is focused is machine learning. "We think the area that is most interesting is machine learning and, specifically, the whole area of deep learning within machine learning."

In the case of Kira, Axiom's testing had demonstrated that the product was ready for deployment. "We felt that the maturity of the technology -- which is really code for the ability of the technology to perform at a level that makes economic sense -- was such that it makes sense to move it, in a sense, from the lab to production, in a business-as-usual context."

Going forward, Axiom plans to keep testing other AI tools in partnership with leading practitioners in the field. A key benefit Axiom brings to the equation is an enormous collection of contractual data that can be used to train the AI technology.

"We analyze over 10 million pieces of contractual information every year," Carr said. "We have a very powerful data set that we plan to use to train AI technology. What we will certainly do is train and improve that technology with our training data."

The training that is performed using Axiom's data will remain proprietary, and Carr believes that will add greater value for Axiom's customers in the use of these AI tools.

The roadmap for Axiom's research has two tracks, Carr said. One is to explore how to go deeper and further into the M&A offering it's launching this month, in order to train AI tools to do even more of the work. The second is to consider which other use cases to focus on next.

One use case under consideration involves regulatory remediation for banks. Another would assist pharmaceutical companies in the negotiation and execution of clinical trial agreements.

Carr came to Axiom in 2008 from American Express, where he had run its International Insurance Services division and was its global head of strategy. He started his career working on systems integration design. He believes that technological integration takes much longer to achieve than technological innovation.

"You need to put in place the surrounding capabilities that allow you to take advantage of that technology and, not immaterially, you need to go through the process of change management and behavioral change," he said. "In the legal industry, that's a big deal. There's a lot that has to happen for technical innovations to be consumed."

Driving that adoption curve is the heart of Axiom's business, Carr suggests. The best way to do that, the company believes, is to combine people, process and technology in ways that allow the value of the technology to be realized. That is what Axiom now plans to do for AI.

"AI today is like the internet in the late '90s," Carr said. "I have no doubt that in a couple of decades, AI will be embedded in everything that impacts corporate America. But how it unfolds and takes shape is the stage we're in now."

Robert Ambrogi is a Massachusetts lawyer and journalist who has been covering legal technology and the web for more than 20 years, primarily through his blog LawSites.com. Former editor-in-chief of several legal newspapers, he is a fellow of the College of Law Practice Management and an inaugural Fastcase 50 honoree. He can be reached by email at ambrogi@gmail.com, and you can follow him on Twitter (@BobAmbrogi).

Link:

Legal Services Innovator Axiom Unveils Its First AI Offering - Above the Law

Posted in Ai | Comments Off on Legal Services Innovator Axiom Unveils Its First AI Offering – Above the Law

Demystifying AI: Understanding the human-machine relationship – MarTech Today

Posted: at 4:12 am

The artificial intelligence of today has almost nothing in common with the AI of science fiction. In Star Wars, Star Trek and Battlestar Galactica, we're introduced to robots who behave like we do -- they are aware of their surroundings, understand the context of their surroundings, and can move around and interact with people just as I can with you. These characters and scenarios are postulated by writers and filmmakers as entertainment, and while one day humanity will inevitably develop an AI like this, it won't happen in the lifetime of anyone reading this article.

Because we can rapidly feed vast amounts of data to them, machines appear to be learning and mimicking us, but in fact they are still at the mercy of the algorithms we provide. The way for us to think of modern artificial intelligence is to understand two concepts: the data a system ingests, and the rules it is given for acting on that data.

To illustrate this in grossly simplified terms, imagine a computer system in an autonomous car. Data comes from cameras placed around the vehicle, from road signs, from pictures that can be identified as hazards and so on. Rules are then written for the computer system to learn about all the data points and make calculations based on the rules of the road. The successful result is the vehicle driving from point A to B without making mistakes (hopefully).

The important thing to understand is that these systems don't think like you and me. People are ridiculously good at pattern recognition, even to the point where we prefer forcing ourselves to see patterns when there are none. We use this skill to ingest less information and make quick decisions about what to do.

Computers have no such luxury; they have to ingest everything, and if you'll forgive the pun, they can't think outside the box. If a modern AI were to be programmed to understand a room (or any other volume), it would have to measure all of it.

Think of the little Roomba robot that can automatically vacuum your house. It runs randomly around until it hits every part of your room. An AI would do this (very fast) and then would be able to know how big the room is. A person could just open the door, glance at the room and say (based on prior experience), "Oh, it's about 20 ft. long and 12 ft. wide." They'd be wrong, but it would be close enough.

Over the past two decades, we've delved into data science and developed vast analytical capabilities. Data is put into systems; people look at it, manipulate it, identify trends and make decisions based on it.

Broadly speaking, any job like this can be automated. Computer systems are programmed with machine learning algorithms and continuously learn to look at more data, more quickly, than any human would be able to. Any rule or pattern that a person is looking for, a computer can be programmed to understand -- and it will be more effective than a person at executing it.

We see examples of this while running digital advertising campaigns. Before, a person would log into a system, choose which data provider to use, choose which segments to run (auto intenders, fashionistas, moms and so on), run the campaign, and then check in on it periodically to optimize.

Now, all the data is available to an AI -- the computer system decides how to run the campaign based on given goals (CTR, CPA, site visits and so on) and tells you during and after the campaign about the decisions it made and why. Put this AI up against the best human opponent, and the computer should win unless a new and hitherto unknown variable is introduced or required data is unavailable.
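
A toy sketch of what goal-driven optimisation can look like under the hood (not any vendor's product): an epsilon-greedy loop that gradually shifts impressions toward the audience segment with the best observed click-through rate. The segment names and their hidden CTRs are invented.

```python
# Toy sketch of goal-driven campaign optimisation, not any vendor's product:
# an epsilon-greedy loop that shifts impressions toward the audience segment
# with the best observed click-through rate. Segment names and CTRs are invented.
import random

segments = {"auto intenders": 0.021, "fashionistas": 0.008, "moms": 0.015}  # hidden "true" CTRs
stats = {name: {"impressions": 0, "clicks": 0} for name in segments}

def observed_ctr(name):
    s = stats[name]
    return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

for _ in range(10_000):
    if random.random() < 0.1:                        # explore 10% of the time
        choice = random.choice(list(segments))
    else:                                            # otherwise exploit the best segment so far
        choice = max(segments, key=observed_ctr)
    stats[choice]["impressions"] += 1
    if random.random() < segments[choice]:           # simulated click
        stats[choice]["clicks"] += 1

for name in segments:
    print(name, stats[name]["impressions"], f"{observed_ctr(name):.3%}")
```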

There are still lots of things computers cannot do for us. For example, look at the United Airlines fiasco last April, when a man was dragged out of his seat after the flight was overbooked. United's tagline is "Fly the friendly skies." The incident was anything but friendly, and any current ad campaign touting so would be balked at.

To a human, the negative sentiment is obvious. The ad campaign would be pulled and a different strategy would be attempted -- in this case, a major PR push. But a computer would just notice that the ads aren't performing as they once were and would continue to look for ways to optimize the campaign. It might even notice lots of interactions when "Fly the Friendly Skies" ads are placed next to images of a person being brutally pulled off the plane and place more ads there!

The way that artificial intelligence will affect us as consumers is more subtle than we think. We're unlikely to have a relationship with Siri or Alexa (see the movie "Her"), and although self-driving cars will become real in our lifetime, it's unlikely that traffic will improve dramatically, since not everyone will use them, and ride-sharing or service-oriented vehicles will still infiltrate our roads, contributing to traffic.

The difference will be that cars, roads and signals may all be connected with an AI running the system based on our rules. We could expect the same amount of traffic, but the flow of traffic will be much better because AI will follow the rules, meaning no slow drivers in the fast lane! And we can do whatever we want while stuck in traffic rather than being wedded to the steering wheel.

Artificial intelligence, machine learning and self-aware systems are real. They will affect us and the way we do our jobs. All of us have opportunities in our current work to embrace these new tools and effect change in our lives that will make us more efficient.

While these systems may not be R2-D2, they are still revolutionary. If you invest in and take advantage of what AI can do for your business, good things are likely to happen to you. And if you don't, you'll still discover that the revolution is real -- but you might not be on the right side of history.

Go here to read the rest:

Demystifying AI: Understanding the human-machine relationship - MarTech Today

Posted in Ai | Comments Off on Demystifying AI: Understanding the human-machine relationship – MarTech Today

AI is Here To Stay and No, It Won’t Take Away Your Job – Entrepreneur

Posted: August 6, 2017 at 3:10 am

There are many examples of artificial intelligence technology that are used in our daily lives. Each example shows us how this technology is becoming important to solve our problems. But what concerns many tech leaders is how humans and robots working together will radically change the way we react to some of our greatest problems.

At RISE 2017 in Hong Kong, Ritu Marya, editor-in-chief of Franchise India, moderated a panel discussion featuring Michael Kaiser, Executive Director, National Cyber Security Alliance; Elisabeth Hendrickson, VP Engineering, Big Data at Pivotal Software; and Adam Burden, Group Technology Office, Accenture.

The discussion addressed how to think about a world in which robots and humans will probably be working together.

AI Will Make Humans Super Rather Than Being a Super Human

"We spend a lot of time thinking about the role of AI in the future because we do business advisory services for clients and strategic thinking about where businesses are heading. I think there is one fundamental guiding principle that we have: the impact of automation and artificial intelligence is more about making humans super rather than being the super human," said Burden, adding that AI enabling people by amplifying their experience is the right way to look at it.

He feels that viewing artificial intelligence and automation merely as a means of labour savings, as a lot of companies do, is a short-term view.

Elaborating on the role of AI, Burden shared an example from his work in the insurance industry, where he is implementing AI to save time.

"We have trained the AI systems so that one can add the site of the accident and add the pictures of the vehicle to automatically get the claim against the damage. Your time gets saved in this process, and overall the experience and profitability also get better," he said.

Talking about countries quickly adopting robotic automation in their daily lives, Burden said that the United States and China will use AI technology to the fullest to compensate for slowing growth in their labour populations. India, with its growing population, presents a different set of challenges, but AI technology will help solve those challenges too.

The Integrity Of That Data Becomes Credible

With too much data floating around, cybersecurity is an area where AI can truly show its capability. Kaiser believes AI technology is going to transform cyber security.

"The new concept that's been most talked about nowadays is the data that's flowing everywhere. Very few of our systems are self-contained. Take a smart city as an example, where you have cars moving in the city that must get information from the municipality about traffic flows, accidents or other kinds of things. That data is collected somewhere and needs to go to the car. When you start looking at the interdependence of that data, the integrity of that data becomes credible," explained Kaiser.
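
One simple way such a platform could let a car check that a traffic update really came from the municipality and was not altered in transit is a signed message. The sketch below uses a shared-key HMAC purely for illustration; the key and message are placeholders, and a real deployment would more likely rely on public-key certificates.

```python
# Minimal sketch of one way a car could verify that a traffic update really came
# from the municipality and was not altered in transit: a shared-key HMAC signature.
# The key and message are placeholders; real deployments would more likely use
# public-key certificates rather than a shared secret.
import hashlib
import hmac
import json

MUNICIPALITY_KEY = b"demo-shared-secret"

def sign(message):
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(MUNICIPALITY_KEY, payload, hashlib.sha256).hexdigest()

def verify(message, signature):
    return hmac.compare_digest(sign(message), signature)

update = {"road": "5th Ave", "status": "accident", "lanes_closed": 2}
sig = sign(update)

print(verify(update, sig))       # True: data is exactly what the municipality signed
update["lanes_closed"] = 0
print(verify(update, sig))       # False: the data was modified after signing
```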

He further suggested that every smart city should have a safe platform so the car knows that the information it's getting is true and real.

Robot Will Only Make Human Jobs Better

Robots are doing a growing number of jobs that were once done by humans. Hendrickson, however, thinks that robots will only make human jobs better and easier by automating the pieces that are time-consuming.

"We don't talk about how a large number of people don't need help in scheduling because Google Calendar helps us to do that. So when you think about your job, you are not going to get replaced, but your job will get easier, which is going to free you up to focus on more creative aspects of it," she said.

The rest is here:

AI is Here To Stay and No, It Won't Take Away Your Job - Entrepreneur

Posted in Ai | Comments Off on AI is Here To Stay and No, It Won’t Take Away Your Job – Entrepreneur
