As you might have heard, Google has acquired DeepMind, a London-based artificial intelligence startup, for an undisclosed sum, although rumor has it that the sum was somewhere close to $500M. Now that is a lot of money for a company which hasn't released a product or service yet and has been practically in stealth mode since its beginning. I had heard about them before, but only because someone asked me whether I knew them and what they were up to, since they seemed to have a lot of money.
So what is going on? Is this the next bubble, the AI bubble? One cannot deny that there is a lot of interest in certain kinds of learning algorithms, in particular deep learning algorithms. I don’t want to argue whether this is AI or not, but such algorithms have proven to work well when dealing with data like images or sound, and “understanding” this kind of data to get better search and discovery is quite important to companies like Google or Facebook.
Companies have already invested quite heavily in that area (although not in this price range). In March 2013, Google bought DNNresearch, a company where neural network veteran Geoff Hinton (who co-invented the backprop training algorithm, among other things) was involved. Hinton later joined Google part time. In December 2013, Facebook announced at the annual NIPS conference that Yann LeCun, another neural networks veteran, would join Facebook to head its research center. Amazon has set up a new research lab headed by Ralf Herbrich, who formerly worked at Microsoft Research as well as Facebook, with offices in Seattle, Berlin, and Bangalore, attracting senior machine learning people as well.
Others on Twitter were also asking themselves what was going on, and over the past day and a half we have been putting together some pieces of the puzzle.
First of all, DeepMind really has a very strong talent pool. I haven't checked them all, but many are senior researchers with an excellent standing in the machine learning community. Probably not for being super-applied, but nonetheless very bright people. Shane Legg, one of the co-founders, has worked in an area of ML which employs certain complexity-based measures that allow one to construct "universal" learning machines with nice theoretical properties, but very little practical impact. In fact, most of these underlying measures aren't even computable. (Yes, that's right: you can prove that you cannot write a program which computes these measures.)
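For the curious, here is a rough sketch of why such measures cannot be computed, taking Kolmogorov complexity (the length of the shortest program printing a given string) as the prototypical example. It's the classic Berry-paradox-style argument:

```latex
% Sketch: K(x), the length of the shortest program printing x,
% is not computable.
\begin{proof}[Sketch]
Suppose some program $P$ computed $K(x)$ for every string $x$. Using $P$
as a subroutine, one could write a program $Q_n$ that enumerates strings
and prints the first $x^*$ with $K(x^*) > n$. But $Q_n$ itself is a
program printing $x^*$, of length $|Q_n| \le |P| + c + \log_2 n$ for
some constant $c$ (the $\log_2 n$ bits encode $n$). For large enough
$n$, this gives $K(x^*) \le |P| + c + \log_2 n < n$, contradicting
$K(x^*) > n$. Hence no such $P$ exists.
\end{proof}
```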
Others who are known to work for DeepMind include Alex Graves, who also worked with Geoff Hinton and has worked on recurrent neural networks, which are well suited to dealing with time-series data, in particular audio. Apparently, he holds the state of the art on the TIMIT corpus for speech recognition. Then there is Koray Kavukcuoglu, who also worked on deep learning, in particular for vision. He is also a co-author of the Torch7 machine learning library, and in the discussion we asked half-seriously whether Google wanted to make sure that Facebook didn't get the whole Torch team: the other two authors of Torch have close ties to Yann LeCun and might therefore prefer to go to Facebook if they decided to leave their current positions in academia.
Apparently, DeepMind had a pretty impressive demo at the deep learning workshop at last year's NIPS (yes, the one which Mark Zuckerberg attended) where they trained a computer to play Pong using reinforcement learning, which is a learning setting where there is only very little, indirect feedback for chosen actions. For input, the raw pixels on the screen were used, so the learning algorithm indeed had to do quite a lot of inference to somehow learn concepts such as what the ball is, what the bat is, and what the rules of the game are.
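To give a feel for the reinforcement learning setting, here is a minimal sketch of tabular Q-learning on a toy corridor world. To be clear, this is not DeepMind's system (which combined Q-learning with a deep network over the pixels); it just illustrates how an agent can learn good actions from nothing but delayed reward. The corridor environment and all constants are my own toy choices:

```python
import random

# A toy 5-state corridor: the agent starts at state 0 and gets reward +1
# for reaching state 4; all other steps give reward 0.
# Actions: 0 = move left, 1 = move right.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Deterministic transition; returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # one Q-value per (state, action)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The greedy policy in every non-terminal state should be "move right".
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES - 1)]
print(policy)
```

The only feedback the agent ever sees is the +1 at the far end, yet the Q-values propagate that signal backwards until the greedy policy moves right from every state. DeepMind's demo faced the much harder version of this problem, where "state" is a screenful of raw pixels rather than an index into a small table.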
Martin Riedmiller, who also appears in the author list with a deepmind.com email address, is a well-known professor from Freiburg, Germany, who has already some experience in applying reinforcement learning to real-world problems. In 2010, he gave a talk at Dagstuhl about controlling slot cars only using a video feed of the track.
So from all this, I think that DeepMind managed to attract a significant number of top-notch researchers from the field of machine learning. However, this demo, while technically impressive, can still be considered close to the published state of the art, IMHO. So saying that DeepMind got somehow closer to "solving AI" than the rest of the community seems like a long shot. (I should probably add that I don't think the current state of the art in machine learning is anywhere close to real AI, but that is another post's worth of thoughts.)
People attending the deep learning workshop also reported that there was quite a bit of interest from both Facebook and Google towards DeepMind, but the talks with Facebook apparently led nowhere, maybe because Facebook was more interested in hiring a few of the people than in buying the whole company.
Which still leaves the question of whether $500M was justified or not. According to recode.net, the company had raised $50M, so that set a lower bound from the investor side. I admit that initially my thoughts were "What The F…", but now I think it's probably justified if you consider that people who master the technology on this level are very rare, probably in the low hundreds worldwide, and if an acquisition of DeepMind secures you 50 of them, then this can be quite important strategically.
So what is going to happen? According to recode, DeepMind will join Google in the "Search" division, which already contains well-reputed people such as Samy Bengio, who joined Google a while ago and was principally responsible for improving their image search. So at least DeepMind won't die a horrible death of never connecting with Google's infrastructure, as there are already people there who understand very well how to turn academia-level research into something that works.
On the other hand, in the 15 years or so I've followed the machine learning community, there has been a recurring pattern of companies hiring many bright people, usually under the premise that you can just work on interesting stuff. The first such company was WhizBang! Labs in the late 90s; then came BIOwulf in the early 00s. One year at NIPS, I heard stories about a promotional video they showed to potential recruits, promising to build the next "mathtopia" inside BIOwulf.
In a way, that is the dream of every researcher: just do interesting and cool stuff, unencumbered by the traditions and bureaucracies of academia. So far, this has often led to companies closing down a few years later because people took the promise a bit too seriously and did just that: work on interesting and cool stuff while neglecting the business side of it.
DeepMind probably would have met the same fate. Or not. Having raised such an enormous amount of money and closed this deal suggests that they are quite good at selling their idea and their company. But the current interest in, and arms race around, deep learning technology certainly made it easier for them to pull off this fabulous exit.
Of course, now they have to deal with the bureaucracy and politics inside Google. I hope they will succeed. Because worse than the occasional mind-bogglingly expensive acquisition would be the revelation that it's not really worth it.
Thanks to beaucronin, johnmyleswhite, ogrisel, syhw, and quesada for their contributions and the interesting conversation!
Posted by Mikio L. Braun at 2014-01-28 21:35:00 +0000