
Category Archives: Ai

Humans engage AI in translation competition – The Stack

Posted: February 15, 2017 at 12:17 am

Human translators will face off against artificially intelligent (AI) machine translators next week in Seoul, South Korea.

The competition, sponsored by Sejong Cyber University and the International Interpretation and Translation Association (IITA), will pit human translators against Google Translate and Naver Papago.

Google Translate and Naver Papago are two of the most popular English-Korean AI translation services. Both use Neural Machine Translation (NMT), which replaces traditional word-for-word systems with full-sentence translation, helping to improve syntax and flow in machine translations.

Neural Machine Translation systems are also capable of self-improvement, refining their output through deep learning and large-scale data analysis.
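To make the full-sentence idea concrete, here is a minimal sketch in PyTorch of the encoder-decoder principle behind NMT: the whole source sentence is encoded before any target word is produced, rather than being translated word by word. This illustrates only the general principle; the model sizes, vocabulary, and class names are invented for the example, and the production systems at Google and Naver are far larger and use attention mechanisms not shown here.

```python
# A minimal encoder-decoder sketch of the NMT idea. All sizes and names
# are illustrative assumptions, not details of Google Translate or Papago.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, HIDDEN = 1000, 1000, 256

class TinyNMT(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(SRC_VOCAB, HIDDEN)
        self.tgt_embed = nn.Embedding(TGT_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the *entire* source sentence into a single context state.
        _, context = self.encoder(self.src_embed(src_ids))
        # Decode target tokens conditioned on that whole-sentence context,
        # which is what lets NMT handle syntax and word order globally.
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), context)
        return self.out(dec_out)  # per-position scores over target vocab

model = TinyNMT()
src = torch.randint(0, SRC_VOCAB, (1, 7))   # a 7-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 5))   # a 5-token target prefix
print(model(src, tgt).shape)                # torch.Size([1, 5, 1000])
```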

The challenge for competitors will be to translate two randomly selected news articles from English to Korean, and two from Korean to English. Each competitor will have 30 minutes per article to complete the translation, which will be judged for accuracy. Judges have been selected from professors at Sejong University and professional translators with the IITA.

Sejong University will live stream the event. The expectation is that the artificial intelligence programs will win for speed, but that the human translators will win for accuracy. However, in the research paper published by Google with the initial announcement of Neural Machine Translation, human-versus-AI tests showed that the AI program was able to translate random sentences with near-human accuracy.

IITA Secretary-General Kang Dae-young said, "We hope to confirm that humans and machines have different strengths and weaknesses, and highlight that human professionals will still have their roles in translation and interpretation of the future."

In one study comparing Google Translate to Naver Papago in English-Korean translations, it was found that Naver was better at translating slang, while Google was better at long sentences. Both performed well in translating short sentences accurately.

With advances in deep learning, human versus AI competitions are growing in popularity. Just last month, a poker competition at Carnegie Mellon University pitted humans against an AI program called Libratus. The 20-day competition ended with the machine winning by a resounding $1.76 million in chips over human competitors. A different program, DeepStack, recently became the first to beat humans at no-limit poker.

Read the rest here:

Humans engage AI in translation competition - The Stack


Watch Drive.ai’s self-driving car handle California city streets on a … – TechCrunch

Posted: at 12:17 am



See the article here:

Watch Drive.ai's self-driving car handle California city streets on a ... - TechCrunch


How to Keep Your AI From Turning Into a Racist Monster – WIRED

Posted: February 13, 2017 at 9:20 am


Working on a new product launch? Debuting a new mobile site? Announcing a new feature? If you're not sure whether algorithmic bias could derail your plan, you should be.

About

Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.

Algorithmic bias (when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed) causes everything from warped Google searches to barring qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat "hellllooooo world!!" (the "o" in "world" was a planet-earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists should "all die and burn in hell." Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay's embrace of humanity's worst attributes is an example of algorithmic bias at work.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn't know how to respond to a host of health questions that affect women, including "I was raped. What do I do?" Apple took action to handle that as well, after a nationwide petition from the American Civil Liberties Union and a host of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don't have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning."

As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there's an even greater need to root out algorithmic bias. There are four things that tech companies can do to keep their developers from unintentionally writing biased code or using biased data.

The first is lifted from gaming. League of Legends used to be besieged by claims of harassment until a few small changes caused complaints to drop sharply. The game's creator empowered players to vote on reported cases of harassment and decide whether a player should be suspended. Players who are banned for bad behavior are also now told why they were banned. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others. Now, instead of coming back and saying the same horrible things again and again, their behavior improves. The lesson is that tech companies can use these community policing models to attack discrimination: build creative ways to have users find it and root it out.

Second, hire the people who can spot the problem before launching a new product, site, or feature. Put women, people of color, and others who tend to be affected by bias and are generally underrepresented in tech on companies' development teams. They'll be more likely to feed algorithms a wider variety of data and spot code that is unintentionally biased. Plus, there is a trove of research showing that diverse teams create better products and generate more profit.

Third, allow algorithmic auditing. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads. When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women. The Carnegie Mellon team has said it believes internal auditing to beef up companies' ability to reduce bias would help. (A minimal sketch of what such an audit might look like appears after the fourth point below.)

Fourth, support the development of tools and standards that could get all companies on the same page. In the next few years, there may be a certification for companies actively and thoughtfully working to reduce algorithmic discrimination. Now we know that water is safe to drink because the EPA monitors how well utilities keep it contaminant-free. One day we may know which tech companies are working to keep bias at bay. Tech companies should support the development of such a certification and work to get it when it exists. Having one standard will both ensure sectors sustain their attention to the issue and give credit to the companies using commonsense practices to reduce unintended algorithmic bias.
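As referenced in the third point, here is a minimal sketch of the statistical core of an ad audit: show the same browsing context to two otherwise-identical simulated user groups, count how often each is served a high-income job ad, and test whether the gap is larger than chance. The impression counts and function name below are invented for illustration; the CMU team's actual tooling and methodology were considerably more involved.

```python
# A toy two-proportion z-test for an ad-serving disparity audit.
# All counts here are fabricated for illustration; a real audit would
# collect them from live ad impressions, as the CMU researchers did.
from math import sqrt

def two_proportion_z(shown_a, total_a, shown_b, total_b):
    """z-statistic for the difference between two impression rates."""
    p_a, p_b = shown_a / total_a, shown_b / total_b
    pooled = (shown_a + shown_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical audit: 1,000 simulated profiles per group, identical
# except for gender, browsing the same pages.
z = two_proportion_z(shown_a=180, total_a=1000,   # ads shown to group A
                     shown_b=30,  total_b=1000)   # ads shown to group B
print(f"z = {z:.1f}")  # |z| well above 2 suggests the gap is not chance
```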

Companies shouldn't wait for algorithmic bias to derail their projects. Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don't accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.

See the rest here:

How to Keep Your AI From Turning Into a Racist Monster - WIRED


How Chinese Internet Giant Baidu Uses AI And Machine Learning – Forbes

Posted: at 9:20 am


Baidu, the Chinese internet giant and its counterpart to Google and Amazon, is using artificial intelligence, machine learning and deep learning effectively to ...


See the original post here:

How Chinese Internet Giant Baidu Uses AI And Machine Learning - Forbes


Ford pledges $1bn for AI start-up – BBC News

Posted: at 9:20 am


Car giant Ford has announced that it is investing $1bn (£800m) over the next five years in artificial intelligence (AI) company Argo. The firms will collaborate on developing a virtual driver system for driverless cars. Ford intends to have an ...

Visit link:

Ford pledges $1bn for AI start-up - BBC News


Dyson opens new Singapore tech center with focus on R&D in AI and software – TechCrunch

Posted: at 9:20 am

Dyson is expanding its footprint in Singapore, with a new Technology Centre opened today by the maker of vacuums and other smart home electronics. The UK company will be investing $561 million as part of its commitment to the new facility, which hosts working labs where research and development teams can pool their cumulative hardware and software know-how to help advance the company's growing ambitions.

If you're only passingly familiar with Dyson's work, you might be wondering what a company that makes vacuums needs with a half-billion-dollar tech facility whose focus, the company says, is on artificial intelligence, machine learning and software development. But Dyson has always emphasized its tech edge in the domestic cleaning hardware market, and it's only doing more to push that advantage lately, including more work in robotics, computer vision systems and machine learning with products like its Dyson 360 Eye robot vacuum.

As you can see from the photos of the facility, the company also put a lot of engineering work into one of its most recent products, the Supersonic hair dryer. There has also been some speculation that Dyson could extend some of its expertise around electric motors and battery tech into the automotive space, though the company isn't saying much one way or another about those reports just yet.

[Image gallery: Dyson R&D facility, photos by Gareth Phillips]

Dyson's new facility also includes what the company calls "The Control Tower," which displays real-time supply chain and logistics data used to keep global production and shipping running smoothly. The new tech centre is also very close to Dyson's West Park production facility, where the company says one of its digital motors leaves the line every 2.6 seconds, thanks to highly automated production lines.

Dyson has already said that it will do much more in robotics and machine learning, according to the engineer leading its robotics program, Mike Aldred, and it seems like this new tech center will help with those pursuits. The company has already admitted it's working on next-generation robot vacuums, even as it launched the first, and it also says that computer vision and other tech it created for the 360 Eye will apply more broadly across its offerings.

Go here to see the original:

Dyson opens new Singapore tech center with focus on R&D in AI and software - TechCrunch


Google’s New AI Has Learned to Become "Highly Aggressive" in Stressful Situations – ScienceAlert

Posted: at 9:20 am

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be "the best, or the worst thing, ever to happen to humanity".

We've all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity. Now, results from recent behaviour tests of Google's new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future.

In tests late last year, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game.

It's since been figuring out how to seamlessly mimic a human voice.

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple 'fruit gathering' computer game that asks two DeepMind 'agents' to compete against each other to gather as many virtual apples as they could.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.

You can watch the Gathering game in the video below, with the DeepMind agents in blue and red, the virtual apples in green, and the laser beams in yellow:

Now those are some trigger-happy fruit-gatherers.

Interestingly, if an agent successfully 'tags' its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.

If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the 'less intelligent' iterations of DeepMind opted to do.

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents - two of them played as wolves, and one as the prey.

Unlike Gathering, this game actively encouraged co-operation, because if both wolves were near the prey when it was captured, they both received a reward - regardless of which one actually took it down:

"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers," the team explains in their paper.

"However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward."

So just as the DeepMind agents learned from Gathering that aggression and selfishness netted them the most favourable result in that particular environment, they learned from Wolfpack that co-operation can also be the key to greater individual success in certain situations.

And while these are just simple little computer games, the message is clear - put different AI systems in charge of competing interests in real-life situations, and it could be an all-out war if their objectives are not balanced against the overall goal of benefitting us humans above all else.

Think traffic lights trying to slow things down, and driverless cars trying to find the fastest route - both need to take each other's objectives into account to achieve the safest and most efficient result for society.

It's still early days for DeepMind, and the team at Google has yet to publish their study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn't mean robots and AI systems will automatically have our interests at heart.

Instead, we need to build that helpful nature into our machines, and anticipate any 'loopholes' that could see them reach for the laser beams.

As the founders of OpenAI, Elon Musk's new research initiative dedicated to the ethics of artificial intelligence, said back in 2015:

"AI systems today have impressive but narrow capabilities.It seems that we'll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."

Tread carefully, humans...

More:

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations - ScienceAlert


An artificially intelligent pathologist bags India’s biggest funding in healthcare AI – Tech in Asia

Posted: at 9:20 am


The World Health Organization requires a pathologist to spend 20 minutes examining a blood smear on a slide under a microscope before ruling out malaria if no parasites are seen. You can imagine how fatigue, an urge to get home, or some other ...

View post:

An artificially intelligent pathologist bags India's biggest funding in healthcare AI - Tech in Asia


Google Test Of AI’s Killer Instinct Shows We Should Be Very Careful – Gizmodo

Posted: February 12, 2017 at 7:17 am

If climate change, nuclear weapons or Donald Trump don't kill us first, there's always artificial intelligence just waiting in the wings. It's been a long-time worry that when AI gains a certain level of autonomy it will see no use for humans or even perceive them as a threat. A new study by Google's DeepMind lab may or may not ease those fears.

The researchers at DeepMind have been working with two games to test whether neural networks are more likely to understand motivations to compete or cooperate. They hope that this research could lead to AI being better at working with other AI in situations that contain imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of tagging the other with a laser blast that would temporarily remove them from the game.

The game was run thousands of times, and the researchers found that red and blue were willing to just gather apples when they were abundant. But as the little green dots became more scarce, the dueling agents were more likely to light each other up with some ray gun blasts to get ahead. This video doesn't really teach us much, but it's cool to look at:

Using a smaller network, the researchers found a greater likelihood for co-existence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.

In the second, more optimistic game, called Wolfpack, the agents were tasked to play wolves attempting to capture prey. Greater rewards were offered when the wolves were in close proximity during a successful capture. This incentivised the agents to work together rather than heading off to the other side of the screen to pull a lone-wolf attack against the prey. The larger network was much quicker to understand that in this situation cooperation was the optimal way to complete the task.

While all of that might seem obvious, this is vital research for the future of AI. More and more complex scenarios will be needed to understand how neural networks learn based on incentives as well as how they react when they're missing information.

The most practical short-term application of the research is to be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.

For now, DeepMind's research is focused on games with strict rules like the ones above and Go, a strategy game at which it famously beat the world's top champion. But it has recently partnered up with Blizzard in order to start learning Starcraft II, a more complex game in which reading an opponent's motivations can be quite tricky. Joel Leibo, the lead author of the paper, tells Bloomberg, "Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals."

Let's just be glad the DeepMind team is taking things very slowly, methodically learning what does and does not motivate AI to start blasting everyone around it.

[DeepMind Blog via Bloomberg]

Here is the original post:

Google Test Of AI's Killer Instinct Shows We Should Be Very Careful - Gizmodo


Ford bets $1B on Argo AI: Why Silicon Valley and Detroit are teaming up – Christian Science Monitor

Posted: at 7:17 am

February 11, 2017: In the race to make self-driving cars an everyday reality, Ford may have just pulled into the lead.

The automaker will invest $1 billion in artificial intelligence startup Argo AI, Ford announced Friday. "The next decade will be defined by the automation of the automobile," said Ford president and chief executive officer Mark Fields, "and autonomous vehicles will have as significant an impact on society as Ford's moving assembly line did 100 years ago."

The investment, which will be used to develop a virtual driver system for Ford's autonomous vehicle (due to hit the assembly line in 2021), makes the Dearborn-based manufacturer the majority stakeholder in Argo AI, which was started last fall by former Google self-driving car project director Bryan Salesky and Uber engineering lead Peter Rander.

Argo was looking to extend "the incredible advancements in machine learning, artificial intelligence and computer vision to the general public," said Mr. Salesky, "and Ford is the perfect partner to do that."

"We are energized by Ford's commitment and vision for the future of mobility, and we believe this partnership will enable self-driving cars to be commercialized and deployed at scale to extend affordable mobility to all," Salesky said in a statement.

As competitors, including Uber, Tesla, and Aurora, snatch up top robotics talent, Ford's five-year investment will help both companies attract and retain top engineers and software developers in this increasingly competitive battleground.

Argo, which is headquartered in Pittsburgh but has offices in both Michigan and California, hopes to expand its team to have more than 200 employees by the end of the year.

The announcement came as a growing number of traditional auto industry giants join forces with tech companies. In March, General Motors acquired Cruise Automation, a startup developing self-driving technology, for more than $1 billion, after investing $500 million in Lyft at the beginning of the year. Toyota and Volkswagen followed suit, investing in Uber and Israel-based Gett, respectively.

Ford's business has rapidly expanded into several emerging fields in recent years, including ride sharing and bicycle rentals. In its partnership with Argo AI, the century-old automaker said it hopes to further transform itself into a broad-based mobility company.

"Working together with Argo AI gives Ford a distinct competitive advantage at the intersection of the automotive and technology industries," said Raj Nair, Ford's executive vice president. "This open collaboration is unlike any other partnership allowing us to benefit from combining the speed of a startup with Fords strengths in scaling technology, systems integration, and vehicle design.

View original post here:

Ford bets $1B on Argo AI: Why Silicon Valley and Detroit are teaming up - Christian Science Monitor

