
This question originally appeared on Quora. Answer by Eric Jang.

Firstly, my response contains some bias, because I work at Google Brain and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole.

I rank the “leaders in AI research” among IBM, Google, Facebook, Apple, Baidu, and Microsoft as follows:

1. Deepmind

I would say Deepmind is probably #1 right now, in terms of AI research.

Their publications are highly respected within the research community, and span a myriad of topics such as deep reinforcement learning, Bayesian neural nets, robotics, transfer learning, and others. Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe. They hire an intellectually diverse team to focus on general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and intelligence.

They are second to none when it comes to PR and capturing the imagination of the public at large, such as with DQN-Atari and the history-making AlphaGo. Whenever a Deepmind paper drops, it shoots up to the top of Reddit’s Machine Learning page and often Hacker News, which is a testament to how well-respected they are within the tech community.

2. Google

Before you roll your eyes at me putting two Alphabet companies at the top of this list, note that I hedge this by also ranking Facebook and OpenAI on equal terms at #2. Scroll down if you don’t want to hear me gush about Google Brain :)

With all due respect to Yann LeCun (he has a pretty good answer), I think he is mistaken about Google Brain’s prominence in the research community.

He wrote: “But much of it is focused on applications and product development rather than long-term AI research.”

This is categorically false, to the max.

The TensorFlow team (which builds the Brain team’s primary product) is just one of many Brain subteams, and is to my knowledge the only one that builds an externally-facing product. When Brain first started, its first research projects were indeed engineering-heavy, but today Brain has many employees who focus on long-term AI research in every AI subfield imaginable, similar to FAIR and Deepmind.

FAIR has 16 papers accepted to the ICLR 2017 conference track (per Yann’s announcement), with 3 selected for orals (i.e., especially distinguished papers).

Google Brain actually slightly edged out FB this year at ICLR 2017, with 20 accepted papers and 4 selected for orals.

This doesn’t count publications from Deepmind or other teams doing research within Google (Search, VR, Photos). Comparing the number of accepted papers is hardly a good metric, but I want to dispel any insinuations by Yann that Brain is not a legitimate place to do Deep Learning research.

Google Brain is also the industry research org with the most collaborative flexibility. I don’t think any other research institution in the world, industrial or otherwise, has ongoing collaborations with Berkeley, Stanford, CMU, OpenAI, Deepmind, Google X, and a myriad of product teams within Google.

I believe that Brain will soon be regarded as a top-tier research institution. I had offers from both Brain and Deepmind, and chose the former because I felt that Brain gave me more flexibility to design my own research projects, collaborate more closely with internal Google teams, and join some really interesting robotics initiatives that I can’t disclose… yet.

2. Facebook

FAIR’s papers are good, and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, and Turing-test-type tasks. Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer-vision work as well. I wish I could say more, but I don’t know much about FAIR beyond the fact that their reputation is very good.

They almost lost the deep learning framework wars to TensorFlow’s widespread adoption, but we’ll see whether PyTorch can win back market share.

One weakness of FAIR, in my opinion, is that it’s very difficult to get a research role there without a PhD; a FAIR recruiter told me this last year. Indeed, PhDs tend to be smarter, but I don’t think having a PhD is necessary to bring fresh perspectives or to make great contributions to science.

2. OpenAI

OpenAI has an all-star list of employees: Ilya Sutskever (all-around Deep Learning master), John Schulman (inventor of TRPO, master of policy gradients), Pieter Abbeel (robot sent from the future to crank out a river of robotics research papers), Andrej Karpathy (Char-RNN, CNNs), Durk Kingma (co-inventor of VAEs) to name a few.

Despite being a small group of ~50 people (so I guess not a “Big Player” by headcount or financial resources), they have a top-notch engineering team and release really thoughtful research tools like Gym and Universe. They’re adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has put a lot of pressure on other groups to start open-sourcing their code and tools as well.
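To give a sense of why Gym matters: it standardizes the agent-environment interface across benchmarks, so any algorithm can be tested on any environment. Here is a minimal sketch of the classic (circa-2017) Gym interaction loop, using the CartPole benchmark and a random placeholder policy; the API has evolved since, so treat this as illustrative only:

import gym

# Build a standard benchmark environment by name.
env = gym.make("CartPole-v0")

obs = env.reset()          # initial observation
total_reward = 0.0
done = False
while not done:
    # A real agent would choose an action from obs; here we just sample randomly.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward

print("episode return:", total_reward)

Any agent that speaks this reset/step interface can be swapped across hundreds of environments, which is a big part of why Gym became a de facto benchmark suite.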

I almost ranked them at #1, on par with Deepmind in terms of top research talent, but they haven’t been around long enough for me to confidently assert this. They also haven’t pulled off an achievement comparable to AlphaGo yet, though I can’t overstate how important Gym / Universe are to the research community.

As a small non-profit research group building all their infrastructure from scratch, they have nowhere near the GPU resources, robots, or software infrastructure of the big tech companies. Having lots of compute makes a big difference in research ability, and even in the ideas one is able to come up with.

Startups are hard and we’ll see whether they are able to continue attracting top talent in the coming years.

3. Baidu

Baidu SVAIL and Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars.

Baidu does have some reputation issues, such as the recent scandal over violating ImageNet competition rules, low-quality search results linked to the death of a Chinese student from cancer, and being stereotyped by Americans as a somewhat-sketchy Chinese copycat tech company complicit in authoritarian censorship.

They are definitely the strongest player in AI in China though.

3. Microsoft Research

Before the Deep Learning revolution, Microsoft Research was the most prestigious place to go. They hire very senior researchers with many years of experience, which might explain why they sort of missed out on Deep Learning (the revolution has largely been driven by PhD students).

Unfortunately, almost all deep learning research is done on Linux these days, and their CNTK framework hasn’t gotten as much attention as TensorFlow, Torch, Chainer, etc.

4. Apple

Apple is really struggling to hire deep learning talent: researchers tend to want to publish and do open research, which goes against Apple’s culture as a product company. A culture like that typically doesn’t attract people who want to work toward general AI or have their work published and acknowledged by the research community. I think Apple’s design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an “insanely great” product can be a hindrance to long-term basic science.

10. IBM

I know a former IBM employee who worked on Watson and describes IBM’s “cognitive computing efforts” as a total disaster, driven by management that has no idea what ML can or cannot do but sells the buzzword anyway. Watson uses Deep Learning for image understanding, but as I understand it, the rest of the information retrieval system doesn’t really leverage modern advances in Deep Learning. Basically, there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball.

No offense to IBM researchers; you’re far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research.

Remark

To be honest, all of the above companies (maybe with the exception of IBM) are great places to do Deep Learning research, and given open-source software and how prolific the entire field is nowadays, I don’t think any one tech firm “leads AI research” by a substantial margin.

There are some other places, like Salesforce/MetaMind and Amazon, that I’ve heard are quite good but don’t know enough about to rank.

My advice for a prospective Deep Learning researcher is to find a team / project that you’re interested in, ignore what others say regarding reputation, and focus on doing your best work so that your organization becomes regarded as a leader in AI research :)