I sat down with Kevin Systrom, the CEO of Instagram, in June to interview him for my feature story, "Instagram's CEO Wants to Clean Up the Internet," and for "Is Instagram Going Too Far to Protect Our Feelings?," a special that ran on CBS this week.
It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. He also discusses free speech, the possibility of Instagram becoming too bland, and whether the platform can be considered addictive. Our conversation occurred shortly before Instagram introduced the AI to the public.
A transcript of the conversation follows.
Nicholas Thompson, Editor-in-Chief: Morning, Kevin.
Kevin Systrom, CEO of Instagram: Morning! How are you?
NT: Doing great. So what I want to do in this story is I want to get into the specifics of the new product launch and the new things you're doing and the stuff that's coming out right now and the machine learning. But I also want to tie it to a broader story about Instagram, and how you decided to prioritize niceness and how it became such a big thing for you and how you reoriented the whole company. So I'm gonna ask you some questions about the specific products and then some bigger questions.
KS: I'm down.
NT: All right so let's start at the beginning. I know that from the very beginning you cared a lot about comments. You cared a lot about niceness and, in fact, you and your co-founder Mike Krieger would go in early on and delete comments yourself. Tell me about that.
KS: Yeah. Not only would we delete comments but we did the unthinkable: We actually removed accounts that were being not so nice to people.
NT: So for example, whom?
KS: Yeah well I don't remember exactly whom, but the back story is my wife is one of the nicest people you'll ever meet. And that bleeds over to me and I try to model it. So, when we were starting the app, we watched this video, basically how to start a company. And it was by this guy who started the LOLCats meme and he basically said, to form a community you need to do something, and he called it "prune the trolls." And Nicole would always joke with me, she's like, "Hey listen, when your community is getting rough, you gotta prune the trolls." And that's something she still says to me today to remind me of the importance of community, but also how important it is to be nice. So back in the day we would go in and if people were mistreating people, we'd just remove their accounts. I think that set an early tone for the community to be nice and be welcoming.
NT: But what's interesting is that this is 2010, and 2010 is a moment where a lot of people are talking about free speech and the internet, and Twitter's role in the Iranian revolution. So it was a moment where free speech was actually valued on the internet, probably more than it is now. How did you end up being more in the "prune the trolls" camp?
KS: Well there's an age-old debate about free speech: what is the limit of free speech, and is it free speech to just be mean to someone? And I think if you look at the history of the law around free speech, you'll find that generally there's a line you don't want to cross, because you're starting to be aggressive or be mean or racist. And you get to a point where you wanna make sure that in a closed community that's trying to grow and thrive, you actually optimize for overall free speech. So if I don't feel like I can be myself, if I don't feel like I can express myself because if I do that I will get attacked, that's not a community we want to create. So we just decided to be on the side of making sure that we optimized for speech that was expressive and felt like you had the freedom to be yourself.
NT: So one of the foundational decisions at Instagram that helped make it nicer than some of your peers was the decision to not allow re-sharing, and to not allow something that I put out there to be kind of appropriated by someone else and sent out into the world by someone else. How was that decision made, and were there other foundational design and product decisions that were made because of niceness?
KS: We debate the re-share thing a lot. Because obviously people love the idea of re-sharing content that they find. Instagram is full of awesome stuff. In fact, one of the main ways people communicate over Instagram Direct now is actually they share content that they find on Instagram. So that's been a debate over and over again. But really that decision is about keeping your feed focused on the people you know rather than the people you know finding other stuff for you to see. And I think that is more of a testament of our focus on authenticity and on the connections you actually have than about anything else.
NT: So after you went to VidCon, you posted an image on your Instagram feed of you and a bunch of celebrities...
KS: Totally, in fact it was a Boomerang.
NT: It was a Boomerang, right! So I'm going to read some of the comments on @kevin's post.
KS: Sure.
NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have auto-scroll feature? That would be awesome and expand Instagram as a app that could grow even more," "#memelivesmatter," "you succ," "you can delete memes but not cancer patients," "I love #memelivesmatter," "#allmemesmatter," "succ," "#MLM," "#memerevolution," "cuck," "mem," "#stopthememegenocide," "#makeinstagramgreatagain," "#memelivesmatter," "#memelivesmatter," "mmm," "gang," "melon gang." I'm not quite sure what all this means. Is this typical?
KS: It was typical, but I'd encourage you to go to my last post, which I posted for Father's Day...
NT: Your last post is all nice!
KS: It's all nice.
NT: They're all about how handsome your father is.
KS: Right? Listen, he is taken. My mom is wonderful. But there are a lot of really wonderful comments there.
NT: So why is this post from a year ago full of "cuck" and #memelivesmatter, and the most recent post is full of how handsome Kevin Systrom's dad is?
KS: Well that's a good question. I would love to be able to explain it, but the first thing I think is back then there were a bunch of people who I think were unhappy about the way Instagram was managing accounts. And there are groups of people that like to get together and band up and bully people, but it's a good example of how someone can get bullied, right. The good news is I run the company and I have a thick skin and I can deal with it. But imagine you're someone who's trying to express yourself about depression or anxiety or body image issues and you get that. Does that make you want to come back and post on the platform? And if you're seeing that, does that make you want to be open about those issues as well? No. So a year ago I think we had much more of a problem, but the focus over that year has been partly on comment filtering: now you can go in and enter your own words that basically filter out comments that include that word. We have spam filtering that works pretty well, so probably a bunch of those would have been caught up in the spam filter that we have because they were repeated comments. And also just a general awareness of kind comments. We have this awesome campaign that we started called #kindcomments. I don't know if you know the late-night segment where they read off mean comments on another social platform; we started kind comments to basically set a standard in the community that it was better and cooler to actually leave kind comments. And now there is this amazing meme that has spread throughout Instagram about leaving kind comments. But you can see the marked difference between the post about Father's Day and that post a year ago, and what technology can do to create a kinder community. And I think we're making progress, which is the important part.
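The user-defined keyword filter Systrom mentions is simple enough to sketch. Here is a minimal illustration, assuming the rule is just "hide the comment if it contains any word on the account owner's blocked list"; the function and data are hypothetical, not Instagram's implementation.

```python
# Minimal sketch of a per-account keyword filter (assumed rule: hide a comment
# if any of its words appears on the owner's blocked list).
def is_filtered(comment, blocked_words):
    words = (w.strip(".,!?#") for w in comment.lower().split())
    return any(w in blocked_words for w in words)

print(is_filtered("you succ", {"succ", "cuck"}))   # True: hidden
print(is_filtered("love this photo!", {"succ"}))   # False: shown
```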
NT: Tell me about sort of steps one, two, three, four, five. How do you... you don't automatically decide to launch the seventeen things you've launched since then. Tell me about the early conversations.
KS: The early conversations were really about what problem we were solving, and we looked to the community for stories. We talked to community members. We have a giant community team here at Instagram, which I think is pretty unique for technology companies. Literally, their job is to interface with the community and get feedback and highlight members who are doing amazing things on the platform. So getting that type of feedback from the community about what types of problems they were experiencing in their comments then led us to brainstorm about all the different things we could build. And what we realized was there was this giant wave of machine learning and artificial intelligence, and Facebook had developed this thing that, basically... it's called DeepText.
NT: Which launches in June of 2016, so it's right there.
KS: Yup, so they have this technology and we put two and two together and we said: You know what? I think if we get a bunch of people to look at comments and rate them good or bad, like you go on Pandora and you listen to a song, is it good or is it bad, get a bunch of people to do that. That's your training set. And then what you do is you feed it to the machine learning system and you let it go through 80 percent of it and then you hold out the other 20 percent of the comments. And then you say, "Okay, machine, go and rate these comments for us based on the training set," and then we see how well it does and we tweak it over time, and now we're at a point where basically this machine learning can detect a bad comment or a mean comment with amazing accuracy, basically a 1 percent false positive rate. So throughout that process of brainstorming, looking at the technology available and then training this filter over time with real humans who are deciding this stuff, gathering feedback from our community and gathering feedback from our team about how it works, we're able to create something we're really proud of.
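To make the workflow Systrom describes concrete, here is a rough Python sketch of the same loop: train a text classifier on rater-labeled comments, hold out 20 percent, and score the held-out set. The comments, labels, and scikit-learn model below are illustrative stand-ins, not Instagram's actual DeepText-based system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical rater-labeled comments: 1 = bad/mean, 0 = fine.
comments = ["you are the worst", "love this photo", "nobody likes you",
            "great shot", "so ugly, delete this", "happy birthday!"]
labels = [1, 0, 1, 0, 1, 0]

# Hold out 20 percent of the labeled comments for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.2, random_state=0)

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(X_train), y_train)

# "Okay, machine, go and rate these comments" on the held-out 20 percent.
print("holdout accuracy:", model.score(vectorizer.transform(X_test), y_test))
```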
NT: So when you launch it you make a very important decision: Do you want it to be aggressive, in which case it'll probably knock out some stuff it shouldn't? Or do you want it to be a little less aggressive, in which case a lot of bad stuff will get through?
KS: Yeah, this is the classic problem. If you go for accuracy, you will misclassify a bunch of stuff that actually was pretty good. So you know if you're my friend and I go on your photo and I'm just joking around with you and giving you a hard time, Instagram should let that through, because we're friends and I'm just giving you a hard time and that's funny banter back and forth. Whereas if you don't know me and I come on and I make fun of your photo, that feels very different. Understanding the nuance between those two is super important, and the thing we don't want to do is have any instance where we block something that shouldn't be blocked. The reality is it's going to happen. So the question is, is that margin of error worth it for all the really bad stuff that gets blocked? And that's a fine balance to figure out. That's something we're working on. We trained the filter basically to have a one-percent false positive rate. So that means one percent of things that get marked as bad are actually good. And that was a top priority for us, because we're not here to curb free speech, we're not here to curb fun conversations between friends, but we want to make sure we are largely attacking the problem of bad comments on Instagram.
NT: And so you go, and every comment that goes in gets sort of run through an algorithm, and the algorithm gives it a score from 0 to 1 on whether it's likely a comment that should be filtered or a comment that should not be filtered, right? And then that score is combined with the relationship of the two people?
KS: No, the score actually is influenced by the relationship of the people...
NT: So the original score is influenced by... and Instagram, I believe, if I have this correct, has something like a karma score for every user, where the number of times they've been flagged or the number of critiques made of them is added into something on the back end. Does that go into this too?
KS: So without getting into the magic sauce, you're asking, like, Coca-Cola to give up its recipe, I'm going to tell you that there's a lot of complicated stuff that goes into it. But basically it looks at the words, it looks at our relationship, and it looks at a bunch of other signals including account age, account history, and that kind of stuff. And it combines all those signals and then it spits out a score from 0 to 1 about how bad this comment is likely to be. And then basically you set a threshold that optimizes for a one-percent false-positive rate.
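The thresholding step Systrom describes, picking a cutoff on the 0-to-1 score so that roughly one percent of good comments get misflagged, can be sketched like this. The scores, labels, and selection rule below are assumptions for illustration; the real system presumably tunes this on far more data.

```python
import numpy as np

def pick_threshold(scores, is_bad, target_fpr=0.01):
    """Lowest cutoff whose false-positive rate stays at or under target_fpr."""
    scores = np.asarray(scores, dtype=float)
    good = ~np.asarray(is_bad, dtype=bool)
    threshold = 1.0  # blocking nothing always satisfies the target
    for t in sorted(set(scores), reverse=True):
        fpr = np.sum(good & (scores >= t)) / max(good.sum(), 1)
        if fpr <= target_fpr:
            threshold = t   # safe to lower the cutoff this far
        else:
            break           # any lower and too many good comments get blocked
    return threshold

# Hypothetical validation scores (0 = clearly fine, 1 = clearly bad) and rater labels.
scores = [0.97, 0.92, 0.90, 0.40, 0.30, 0.10, 0.05]
is_bad = [1, 1, 0, 1, 0, 0, 0]
print(pick_threshold(scores, is_bad))  # 0.92 on this toy data
```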
NT: When do you decide it's ready to go?
KS: I think at the point where the accuracy gets to a level that internally we're happy with. So one of the things we do here at Instagram is we do this thing called dogfooding, and not a lot of people know this term, but in the tech industry it means, you know, eat your own dog food. So what we do is we take the products and we always apply them to ourselves before we go out to the community. And there are these amazing groups on Instagram, and I would love to take you through them but they're actually all confidential, but it's employees giving feedback about how they feel about specific features.
NT: So this is live on the phone to a bunch of Instagram employees right now?
KS: There are always features that are not launched that are live on Instagram employees' phones, including things like this.
NT: So there's a critique of a lot of the advances in machine learning that the corpus on which it is based has biases built into it. So DeepText analyzed all Facebook comments, analyzed some massive corpus of words that people have typed into the internet. When you analyze those, you get certain biases built into them. So for example, I was reading a paper and someone had taken a corpus of text and created a machine learning algorithm to rank restaurants, and to look at the comments people had written under restaurants and then to try and guess the quality of the restaurants. He went through and he ran it, and he was like, "Interesting," because all of the Mexican restaurants were ranked badly. So why is that? Well it turns out, as he dug deeper into the algorithm, it's because in a massive corpus of text the word "Mexican" is associated with "illegal," "illegal Mexican immigrant," because that is used so frequently. And so there are lots of slurs attached to the word "Mexican," so the word "Mexican" has negative connotations in the machine learning-based corpus, which then affects the restaurant rankings of Mexican restaurants.
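A toy version of the effect Thompson is describing: if word-level scores learned from a skewed corpus treat a neutral word like "Mexican" as negative, any rating that averages those scores inherits the bias. The lexicon values below are invented purely for illustration.

```python
# Invented word scores standing in for associations learned from a large corpus.
corpus_word_scores = {
    "great": 0.8, "tacos": 0.1, "slow": -0.4, "service": 0.0,
    "mexican": -0.5,   # biased association absorbed from how the word appears online
    "italian": 0.1,
}

def review_score(review):
    words = review.lower().split()
    return sum(corpus_word_scores.get(w, 0.0) for w in words) / len(words)

print(review_score("great mexican tacos"))   # ~0.13, dragged down by the biased word
print(review_score("great italian tacos"))   # ~0.33, same review otherwise
```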
KS: That sounds awful.
NT: So how do you deal with that?
KS: Well the good news is we're not in the business of ranking restaurants...
NT: But you are ranking sentences based on this huge corpus of text that Facebook has analyzed as part of DeepText.
KS: It's a little bit more complicated than that. So all of our training comes from Instagram comments. So we have hundreds of raters, and it's actually pretty interesting what we've done with this set of raters: basically, human beings that sit there and, by the way, human beings are not unbiased, that's not what I'm claiming, but you have human beings. Each of those raters is bilingual. So they speak two languages, they have a diverse perspective, they're from all over the world. And they rank those comments basically thumbs up or thumbs down. Basically the Instagram corpus, right?
So you feed it a thumbs up, thumbs down based on an individual. And you might say, "But wait, isn't a single individual biased in some way?" Which is why we make sure every comment is actually seen twice and given a rating twice by at least two people, to make sure that there is as minimal an amount of bias in the system as possible. And then on top of that, we also gain feedback from not only our team but also the community, and then we're able to tweak things on the margin to make sure things like that don't happen. I'm not claiming that it won't happen, that's of course a risk, but the biggest risk of all is doing nothing because we're afraid of these things happening. And I think it's more important that we are A) aware of them, and B) monitoring them actively, and C) making sure we have a diverse group of raters that not only speak two languages but are from all over the world and represent different perspectives, to make sure we have an unbiased classifier.
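One straightforward reading of the "rated twice by at least two people" rule is to keep only labels that the raters agree on. The sketch below shows that aggregation under that assumption; the data and the agreement rule are illustrative, not Instagram's actual process.

```python
from collections import defaultdict

# (comment_id, rater_id, label): 1 = thumbs down (bad comment), 0 = thumbs up (fine).
ratings = [
    ("c1", "rater_a", 1), ("c1", "rater_b", 1),
    ("c2", "rater_a", 0), ("c2", "rater_c", 1),   # raters disagree: dropped
    ("c3", "rater_b", 0), ("c3", "rater_c", 0),
]

by_comment = defaultdict(list)
for comment_id, _rater, label in ratings:
    by_comment[comment_id].append(label)

# Keep only comments rated by at least two people who agree.
training_labels = {
    cid: votes[0]
    for cid, votes in by_comment.items()
    if len(votes) >= 2 and len(set(votes)) == 1
}
print(training_labels)  # {'c1': 1, 'c3': 0}
```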
NT: So let's take a sentence like "These hoes ain't loyal," which is a phrase that I believe a previous study on Twitter had a lot of trouble with. Your theory is that some people will say, "Oh, that's a lyric, therefore it's okay," some people won't know it and it will get through, but enough raters looking at enough comments over time will allow lyrics to get through, and "These hoes ain't loyal," I can post that on your Instagram feed if you post a picture which deserves that comment.
KS: Well I think what I would counter is, if you post that sentence to any person watching this, not a single one of them would say that's a mean-spirited comment to any of us, right?
NT: Right.
KS: So I think that's pretty easy to get to. I think there are more nuanced examples, and I think that's the spirit of your question, which is that there are grey areas. The whole idea of machine learning is that it's far better at understanding those nuances than any algorithm has been in the past, or than any single human being could be. And I think what we have to do over time is figure out how to get into that grey area, and judge the performance of this algorithm over time to see if it actually improves things. Because by the way, if it causes trouble and it doesn't work, we'll scrap it and start over with something new. But the whole idea here is that we're trying something. And I think a lot of the fears that you're bringing up are warranted, but that is exactly what keeps most companies from even trying in the first place.
NT: And so first you're going to launch this filtering of bad comments, and then the second thing you're going to do is the elevation of positive comments. Tell me about how that is going to work and why that's a priority.
KS: The elevation of positive comments is more about modeling in the system. We've seen a bunch of times in the system where we have this thing called the mimicry effect. So if you raise kind comments, you actually see more kind comments, or you see more people giving kind comments. It's not that we ever ran this test, but I'm sure if you raised a bunch of mean comments you would see more mean comments. Part of this is the piling-on effect, and I think what we can do is, by modeling what great conversations are, more people will see Instagram as a place for that, and less for the bad stuff. And it's got this interesting psychological effect that people want to fit in and people want to do what they're seeing, and that means that people are more positive over time.
NT: And are you at all worried that you're going to turn Instagram into the equivalent of an East Coast liberal arts college?
KS: I think those of us who grew up on the East Coast might take offense to that. *laughs* I'm not sure what you mean exactly.
NT: I mean a place where there are trigger warnings everywhere, where people feel like they can't have certain opinions, where people feel like they can't say things. Where you put this sheen over all your conversations, as though everything in the world is rosy and the bad stuff, we're just going to sweep it under the rug.
KS: Yeah, that would be bad. That's not something we want. I think in the range of bad, we're talking about the lower five percent. Like the really, really bad stuff. I don't think we're trying to play anywhere in the area of grey. Although I realize there's no black or white, and we're going to have to play at some level. But the idea here is to take out, I don't know, the bottom five percent of nasty stuff. And I don't think anyone would argue that that makes Instagram a rosy place; it just doesn't make it a hateful place.
NT: And you wouldn't want all of the comments on your... You know, on your VidCon post, it's a mix of sort of jokes, and nastiness, and vapidity, and useful product feedback. And you're getting rid of the nasty stuff, but wouldn't it be better if you raised, like, the best product feedback and the funny jokes to the top?
KS: Maybe. And maybe that's a problem we'll decide to solve at some point. But right now we're just focused on making sure that people don't feel hate, you know? And I think that's a valid thing to go after, and I'm excited to do it.
NT: So the thing that interests me the most is that it's like Instagram is a world with 700 million people, and you're writing the constitution for the world. When you get up in the morning and you think about that power, that responsibility, how does it affect you?
KS: Doing nothing felt like the worst option in the world. So starting to tackle it means that we can improve the world; we can improve the lives of the many young people in the world who live on social media. I don't have kids yet; I will someday, and I hope that kid, boy or girl, grows up in a world where they feel safe online, where I as a parent feel like they're safe online. And you know the cheesy saying, with great power comes great responsibility. We take on that responsibility. And we're going to go after it. But that doesn't mean that not acting is the correct option. There are all sorts of issues that come with acting, you've highlighted a number of them today, but that doesn't mean we shouldn't act. That just means we should be aware of them and we should be monitoring them over time.
NT: One of the critiques is that Instagram, particularly for young people, is very addictive. And in fact there's a critique being made by Tristan Harris, who was a classmate of yours, and a classmate of Mike's, and a student in the same class as Mike's. And he says that the design of Instagram deliberately addicts you. For example, when you open it up it just...
KS: Sorry, I'm laughing just because I think the idea that anyone inside here tries to design something that is maliciously addictive is just so far-fetched. We try to solve problems for people, and if by solving those problems for people they like to use the product, I think we've done our job well. This is not a casino, we are not trying to eke money out of people in a malicious way. The idea of Instagram is that we create something that allows them to connect with their friends, and their family, and their interests, positive experiences, and I think any criticism of building that system is unfounded.
NT: So all of this is aimed at making Instagram better. And it sounds like changes so far have made Instagram better. Is any of it aimed at making people better, or is there any chance that the changes that happen on Instagram will seep into the real world and maybe, just a little bit, the conversations in this country will be more positive than they've been?
KS: I sure hope we can stem any negativity in the world. I'm not sure we would sign up for that from day one. Um, but I actually want to challenge the initial premise, which is that this is about making Instagram better. I actually think it's about making the internet better. I hope someday the technology that we develop and the training sets we develop and the things we learn we can pass on to startups, we can pass on to our peers in technology, and we actually together build a kinder, safer, more inclusive community online.
NT: Will you open source the software you've built for this?
KS: I'm not sure. I'm not sure. I think a lot of it comes back to how well it performs, and the willingness of our partners to adopt it.
NT: But what if this fails? What if people actually get kind of turned off by Instagram, they say, "Instagram's becoming like Disneyland, I don't want to be there," and they share less?
KS: The thing I love about Silicon Valley is we've bear-hugged failure. Failure is what we all start with, what we go through, and hopefully what we don't end on, on our way to success. I mean Instagram wasn't Instagram initially. It was a failed startup before. I turned down a bunch of job offers that would have been really awesome along the way. That was failure. I've had numerous product ideas at Instagram that were total failures. And that's okay. We bear hug it because when you fail at least you're trying. And I think that's actually what makes Silicon Valley different from traditional business: our tolerance for failure here is so much higher. And that's why you see bigger risks and also bigger payoffs.