Quora Question: Which Company is Leading the Field in AI Research? – Newsweek

Quora Questions are part of a partnership between Newsweek and Quora, through which we'll be posting relevant and interesting answers from Quora contributors throughout the week. Read more about the partnership here.

Answer from Eric Jang, Research engineer at Google Brain:

Who is leading in AI research among big players like IBM, Google, Facebook, Apple and Microsoft?

First, my response contains some bias, because I work at Google Brain, and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole.

I rank the leaders in AI research among IBM, Google, Facebook, Apple, Baidu, and Microsoft as follows:

I would say DeepMind is probably #1 right now in terms of AI research.

Their publications are highly respected within the research community, and span a myriad of topics such as deep reinforcement learning, Bayesian neural nets, robotics, transfer learning, and others. Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe. They hire an intellectually diverse team to focus on general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and intelligence.

They are second to none when it comes to PR and capturing the imagination of the public at large, such as with DQN-Atari and the history-making AlphaGo. Whenever a DeepMind paper drops, it shoots up to the top of Reddit's Machine Learning page and often Hacker News, which is a testament to how well-respected they are within the tech community.

Before you roll your eyes at me putting two Alphabet companies at the top of this list, note that I also rank Facebook and OpenAI on equal terms at #2. Scroll down if you don't want to hear me gush about Google Brain.

With all due respect to Yann LeCun (he has a pretty good answer), I think he is mistaken about Google Brain's prominence in the research community.

"But much of it is focused on applications and product development rather than long-term AI research."

This is categorically false, to the max.

TensorFlow (the Brain team's primary product) is built by just one of many Brain subteams, and that is to my knowledge the only subteam that builds an externally facing product. When Brain first started, the first research projects were indeed engineering-heavy, but today Brain has many employees who focus on long-term AI research in every AI subfield imaginable, similar to FAIR and DeepMind.

FAIR has 16 accepted publications in the ICLR 2017 conference track (announcement by Yann: "FAIR has co-authors on 16 papers accepted at..."), with 3 selected for orals (i.e., very distinguished publications).

Google Brain actually slightly edged out FB this year at ICLR 2017, with 20 accepted papers and four selected for orals. I'm excited that the Google Brain team will have a decent presence at ICLR 2017.

This doesn't count publications from DeepMind or other teams doing research within Google (Search, VR, Photos). Comparing the number of accepted papers is hardly a good metric, but I want to dispel any insinuations by Yann that Brain is not a legitimate place to do deep learning research.

Google Brain is also the industry research org with the most collaborative flexibility. I don't think any other research institution in the world, industrial or otherwise, has ongoing collaborations with Berkeley, Stanford, CMU, OpenAI, DeepMind, Google X, and a myriad of product teams within Google.

I believe that Brain will soon be regarded as a top-tier research institution. I had offers from both Brain and DeepMind, and chose the former because I felt that Brain gave me more flexibility to design my own research projects, collaborate more closely with internal Google teams, and join some really interesting robotics initiatives that I can't disclose yet.


FAIR's papers are good, and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, and Turing-test-type stuff. Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer vision work as well. I wish I could say more, but I don't know much about FAIR beyond the fact that their reputation is very good.

They almost lost the deep learning framework wars with the widespread adoption of TensorFlow, but we'll see if PyTorch is able to win back market share.

One weakness of FAIR, in my opinion, is that it's very difficult to get a research role at FAIR without a PhD; a FAIR recruiter told me this last year. Indeed, PhDs tend to be smarter, but I don't think having a PhD is necessary to bring fresh perspectives and make great contributions to science.

OpenAI has an all-star list of employees: Ilya Sutskever (all-around deep learning master), John Schulman (inventor of TRPO, master of policy gradients), Pieter Abbeel (robot sent from the future to crank out a river of robotics research papers), Andrej Karpathy (Char-RNN, CNNs), Durk Kingma (co-inventor of VAEs), Ian Goodfellow (inventor of GANs), to name a few.

Despite being a small group of around 50 people (so I guess not a Big Player by headcount or financial resources), they also have a top-notch engineering team and publish top-notch, really thoughtful research tools like Gym and Universe. They're adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has added a lot of pressure on other groups to start open-sourcing their code and tools as well.
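For readers who haven't tried it, much of Gym's value comes from the tiny, uniform interface it puts in front of very different environments. Below is a minimal sketch of the classic Gym interaction loop (the circa-2017 API), using the standard CartPole environment and a random policy purely as placeholders for a real agent:

```python
# Minimal sketch of the classic OpenAI Gym loop (circa-2017 API).
# CartPole-v0 and the random policy below are placeholders, not a real agent.
import gym

env = gym.make("CartPole-v0")

for episode in range(5):
    observation = env.reset()               # start a new episode
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random action in place of a learned policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode %d: return = %.1f" % (episode, total_reward))

env.close()
```

The same reset/step loop works across Atari, board games, and robotics tasks, which is why it became a common benchmark harness so quickly.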

I almost ranked them as #1, on par with DeepMind in terms of top research talent, but they haven't really been around long enough for me to confidently assert this. They also haven't pulled off an achievement comparable to AlphaGo yet, though I can't overstate how important Gym/Universe are to the research community.

As a small nonprofit research group building all of their infrastructure from scratch, they don't have nearly as many GPUs, robots, or as much software infrastructure as the big tech companies. Having lots of compute makes a big difference in research ability and even in the ideas one is able to come up with.

Startups are hard, and we'll see whether they are able to continue attracting top talent in the coming years.

Baidu SVAIL and Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars.

Baidu does have some reputation issues, such as the recent scandal over violating ImageNet competition rules, low-quality sponsored search results implicated in the death of a Chinese student from cancer, and being stereotyped by Americans as a somewhat sketchy Chinese copycat tech company complicit in authoritarian censorship.

They are definitely the strongest player in AI in China, though.

Before the deep learning revolution, Microsoft Research used to be the most prestigious place to go. They hire senior faculty with many years of experience, which might explain why they sort of missed out on deep learning (the revolution in deep learning has largely been driven by PhD students).

Unfortunately, almost all deep learning research is done on Linux platforms these days, and their CNTK deep learning framework hasn't gotten as much attention as TensorFlow, Torch, Chainer, etc.

Apple is really struggling to hire deep learning talent, as researchers tend to want to publish and do research, which goes against Apple's culture as a product company. This typically doesn't attract those who want to solve general AI or have their work published and acknowledged by the research community. I think Apple's design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an insanely great product can be a hindrance to long-term basic science.

I know a former IBM employee who worked on Watson and describes IBM's cognitive computing efforts as a total disaster, driven by management that has no idea what ML can or cannot do but sells the buzzword anyway. Watson uses deep learning for image understanding, but as I understand it, the rest of the information retrieval system doesn't really leverage modern advances in deep learning. Basically, there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball.

No offense to IBM researchers; you're far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research.

To be honest, all of the above companies (with the possible exception of IBM) are great places to do deep learning research, and given open source software and how prolific the entire field is nowadays, I don't think any one tech firm leads AI research by a substantial margin.

There are some places, like Salesforce/MetaMind and Amazon, that I hear are quite good but don't know enough about to rank.

My advice for a prospective deep learning researcher is to find a team/project that you're interested in, ignore what others say regarding reputation, and focus on doing your best work so that your organization becomes regarded as a leader in AI research.

"Who is leading in AI research among big players like IBM, Google, Facebook, Apple, and Microsoft?" originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.

