MARGINALLY INTERESTING


MACHINE LEARNING, COMPUTER SCIENCE, JAZZ, AND ALL THAT

Head Over To margint.blog

Hello Fellow Readers,

I’ve set up a new blog at margint.blog and will continue posting there (hopefully more frequently than I did in the past two years). This blog here will stay around indefinitely, of course, but I’ve also started to repost best-ofs to the new blog.

I moved over to a WordPress-hosted blog instead of my hand-rolled Jekyll-plus-static-files setup. If you’re interested, here are my main reasons for making the switch:

  • The main reason was that editing files became more and more difficult. Recompiling took longer and longer, I couldn’t just drag and drop images, and I was starting to want a WYSIWYG-style editor (yeah, I’m getting old…).
  • Having full control is nice, but the last redesign, moving from my own CSS files to something more responsive, took all my mental capacity to make it through.
  • I considered rolling my own again, based on Ghost or something like it, but I would have needed to do some customization, for example to keep the old URLs working. Also, I never found the time.
  • Full Google Analytics integration was nice, but let’s be honest, I never needed the full feature set anyway; daily graphs of what people are reading were enough to satisfy my post-publishing are-people-reading-this urges.
  • I was becoming interested in scheduled posts and in having updates automatically propagated to social media.

I don’t know how I ended up with WordPress, but they’ve been around forever and they seem to know their business. There was also a pleasant surprise: they essentially give you one domain for free. No idea whether it’s for the first year only or not, but it was definitely nice.

In any case, if you want to continue reading, bookmark margint.blog, or start following me on Twitter or LinkedIn.

Thanks for reading!

AI's Road to the Mainstream

A totally subjective history of the past 20 years

First posted on July 30, 2016 on Medium. This version contains minor corrections and a few links.

When I enrolled in Computer Science in 1995, Data Science didn’t exist yet, but a lot of the algorithms we are still using already did. And this is not just because of the return of neural networks, but also because probably not that much has fundamentally changed since then. At least that is how it feels to me. Which is funny, considering that only this year or so AI seems to have finally gone mainstream.

1995 sounds like an awfully long time ago, before we had cloud computing, smartphones, or chatbots. But as I have learned these past years, it only feels like a long time ago if you weren’t there yourself. There is something about the continuity of the self that pastes everything together, and although a lot has changed, the world back then didn’t feel fundamentally different from the way it does today.

Even Computer Science was nowhere near as mainstream as it is today; that came later, with the first dot-com bubble around the year 2000. Some people even questioned my choice to study computer science at all, because apparently programming computers was supposed to become so easy that no specialists would be required anymore.

Actually, artificial intelligence was one of the main reasons for me to study computer science. The idea of using it as a constructive approach to understanding the human mind seemed intriguing to me. I went through the first two years of training, made sure I picked up enough math for whatever would lie ahead, and finally arrived in my first AI lecture, held by Joachim Buhmann, back then a professor at the University of Bonn (which Sebastian Thrun was just about to leave for the US).

I would have to look up where in his lecture cycle I joined, but he gave two lectures on computer vision, one on pattern recognition (mostly from the old editions of the Duda & Hart book), and one on information theory (closely following the book by Cover & Thomas). The material was interesting enough, but also somewhat disappointing. As I now know, people had stopped working on symbolic AI and instead stuck to more statistical approaches, where learning was essentially reduced to the problem of picking the right function based on a finite number of observations.
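To spell out that “picking the right function” view a bit, the statistical framing is essentially empirical risk minimization: choose, from some class of candidate functions, the one that fits the finitely many observations best. A minimal sketch of the standard formulation (the notation is mine, not taken from the lecture):

    \hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr)

Here \mathcal{F} is the class of candidate functions, (x_1, y_1), …, (x_n, y_n) are the n observations, and \ell measures how badly f predicts a single example.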

The computer vision lecture was even less about learning and relied more on explicit physical modelling to derive the right estimators, for example to reconstruct motion from a video. The approach back then was much more biologically and physically motivated than it is nowadays. Neural networks existed, but everybody was pretty clear that they were just “another kind of function approximator.”

Everyone, that is, except Rolf Eckmiller, another professor I worked for as a student. Eckmiller had built his whole lab around the premise that “neural computation” was somehow inherently better than “conventional computation”. This was back in the days when NIPS had full tracks devoted to studying the physiology and working mechanisms of neurons, and there were people who believed that something fundamentally different happens in our brains, maybe on a quantum level, that gives rise to the human mind, and that this difference is what stands in the way of truly intelligent machines.

While Eckmiller was really good at selling his vision, most of his staff was thankfully much more down to earth. Maybe it is a very German thing, but everybody was pretty matter-of-fact about what these computational models could or couldn’t do, and that attitude stuck with me throughout my studies.

I graduated in October 2000 with a pretty far-fetched master’s thesis trying to make a connection between learning and hard optimization problems, then started on my PhD and stuck around in this area of research until 2015.

While there had always been attempts to prove industry relevance, machine learning was a pretty academic endeavor for a long while, and the community was pretty closed up. There were individual success stories, for example around handwritten character recognition, but many of the companies built around machine learning failed. One I remember was called Biowulf Technologies; at one NIPS they went around recruiting people with a video that promised it would be the next “mathtopia”. In essence, it was the DeepMind story: recruit a bunch of excellent researchers and hope it will take off.

The whole community also lurched from one fashion to the next. One odd thing about machine learning as a whole is that there exist only a handful of fundamentally different problems, like classification, regression, clustering, and so on, but a whole zoo of approaches. It is not like physics (I assume) or mathematics, where there are generally agreed-upon unsolved hard problems whose solution would advance the state of the art. This means that progress is often made laterally, by replacing existing approaches with a new one that still solves the same problem, just in a different way. For example, first there were neural networks. Then came support vector machines, claiming to be better because the associated optimization problem is convex. Then there was boosting, random forests, and so on, until the return of neural networks. I remember that Chinese Restaurant Processes were “hot” for two years; I have no idea what their significance is now.

Big Data and Data Science

Then came Big Data and Data Science. Still being in academia at the time, I always felt this was definitely coming from the outside, possibly from companies like Google who actually had to deal with enormous amounts of data. Large-scale learning had always existed, for example for genomic data in bioinformatics, but one usually tried to solve problems by finding more efficient algorithms and approximations, not by parallelizing brute force.

Companies like Google finally proved that you can do something with massive amounts of data, and that changed the mainstream perception. Technologies like Hadoop and NoSQL also seemed very cool, skillfully marketing themselves as approaches so new that they wouldn’t suffer from the technological limitations of existing systems.

But where did this leave the machine learning researchers? My impression was always that they were happy to finally get some recognition, but not happy about the way it happened. To understand this, one has to be aware that most ML researchers aren’t computer scientists and are often neither very good at nor particularly interested in coding. Many come from physics, mathematics, or other sciences, where their rigorous mathematical training was an excellent fit for the algorithm- and modeling-heavy approach central to machine learning.

Hadoop, on the other hand, was extremely technical. Written in Java, a language perceived as excessively enterprise-y at the time, it felt awkward and clunky compared to the fluency and interactivity of first Matlab and then Python. Even those who did code usually did so in C++, and to them Java felt slow and heavy, especially for numerical calculations and simulations.

Still, there was no way around it, so they rebranded everything they did as Big Data, or began to stress that Big Data only provides the infrastructure for large-scale computation, and that you still need someone who “knows what he is doing” to make sense of the data.

Which is probably not entirely wrong, either. In a way, I think this divide is still there. Python is definitely one of the languages of choice for doing data analysis, and technologies like Spark try to tap into that by providing Python bindings, whether it makes sense from a performance point of view or not.
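For what it’s worth, this is roughly what those Python bindings look like in practice; a minimal sketch assuming a local PySpark installation, with made-up toy data:

    # Minimal PySpark sketch: Python on the outside, the JVM doing the heavy lifting.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("sketch").getOrCreate()

    # Distribute some toy data and run a map/reduce over it from Python.
    rdd = spark.sparkContext.parallelize(range(1000000))
    total = rdd.map(lambda x: x * x).reduce(lambda a, b: a + b)
    print(total)

    spark.stop()

The convenience is real, but every lambda crosses the Python/JVM boundary, which is exactly the performance question hinted at above.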

The Return of Deep Learning

Even before DeepDream, neural networks had begun making their return. Some people, like Yann LeCun, had always stuck with the approach, but maybe ten years ago there were a few works that showed how to use layer-wise pretraining and other tricks to train “deep” networks, that is, larger networks than one previously thought possible.

The thing is, in order to train a neural network, you evaluate it on your training examples and then adjust all of the weights to make the error a bit smaller. If you write down the gradient with respect to all the weights, it falls out naturally that you start at the last layer and then propagate the error back. The understanding back then was that the information about the error got smaller and smaller from layer to layer, and that this made it hard to train networks with many layers.

I’m not sure that is still true; as far as I know, many people just use plain backpropagation nowadays. What has definitely changed is the amount of available data, as well as the availability of tools and raw computing power.
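To make the mechanics concrete, here is a minimal backpropagation sketch for a tiny two-layer network, using only numpy; all sizes, names, and the learning rate are made up for illustration:

    # Minimal backprop sketch for a tiny two-layer network (numpy only).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))              # 64 toy examples with 10 features
    y = rng.normal(size=(64, 1))               # toy regression targets

    W1 = rng.normal(scale=0.1, size=(10, 32))  # first-layer weights
    W2 = rng.normal(scale=0.1, size=(32, 1))   # second-layer weights

    for step in range(100):
        # Forward pass: evaluate the network on the training examples.
        h = np.tanh(X @ W1)
        pred = h @ W2
        err = pred - y                         # gradient of the squared error w.r.t. the output

        # Backward pass: start at the last layer and propagate the error back.
        grad_W2 = h.T @ err
        err_h = (err @ W2.T) * (1 - h ** 2)    # the error signal can shrink here, layer by layer
        grad_W1 = X.T @ err_h

        # Adjust all weights to make the error a bit smaller.
        W1 -= 1e-3 * grad_W1
        W2 -= 1e-3 * grad_W2

The backward pass mirrors the forward pass in reverse, and the err_h line is exactly the place where, with many layers stacked on top of each other, the error signal was feared to fade away.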

So first there were a few papers that sparked renewed interest in neural networks, then people started using them again and successively achieved excellent results in a number of application areas: first in computer vision, then in speech processing, and so on.

I think the appeal here definitely is that you have one approach for everything. Why go through the hassle of understanding all those different approaches, coming from so many different backgrounds, when you can understand just one method and be good to go? Also, neural networks have a nice modular structure: you can pick different kinds of layers and put them together into architectures adapted to all kinds of problems.
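That modularity is easy to illustrate: layers are interchangeable building blocks, and an architecture is just a particular stack of them. A minimal, framework-free sketch (all class and variable names are made up):

    # Layers as building blocks; an architecture is just a stack of them.
    import numpy as np

    class Dense:
        """Fully connected layer: x -> x @ W."""
        def __init__(self, n_in, n_out):
            self.W = np.random.randn(n_in, n_out) * 0.1

        def forward(self, x):
            return x @ self.W

    class ReLU:
        """Elementwise nonlinearity; has no weights of its own."""
        def forward(self, x):
            return np.maximum(x, 0.0)

    # Swapping, adding, or removing blocks changes the architecture, not the code around it.
    model = [Dense(10, 32), ReLU(), Dense(32, 32), ReLU(), Dense(32, 1)]

    def predict(layers, x):
        for layer in layers:
            x = layer.forward(x)
        return x

    print(predict(model, np.random.randn(4, 10)).shape)  # (4, 1)

Frameworks dress this idea up with training machinery, but the pick-and-stack structure is the core of the appeal.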

Then Google published that ingenious DeepDream paper, where they let a trained network generate some data, and we humans, with our immediate readiness to read structure into things and attribute intelligence, picked up on it quickly.

I personally think they were surprised by how viral this went, but then decided the time was finally right to go all in on AI. So now Google is an “AI first” company and AI is gonna save the world, yes.

The Fundamental Problem Remains

Many academics I have talked to are unhappy about the current dominance of deep learning, because it is an approach that works well, maybe even too well, but doesn’t bring us much closer to really understanding how the human mind works.

I also think the fundamental problem remains unsolved. How do we understand the world? How do we create new concepts? Deep learning remains an imitation at the behavioral level, and while that may be enough for some, it isn’t for me.

Also, I think it is dangerous to attribute too much intelligence to these systems. In raw numbers, they might work well enough, but when they fail they do so in ways that clearly show they operate in an entirely different fashion.

While Google Translate lets you skim the content of a website in a foreign language, it is still abundantly clear that the system has no idea what it is doing.

Sometimes I feel like nobody cares, also because nobody gets hurt, right? But maybe it is my German cultural background that would rather we see things as they are, and take it from there.

Hey Ho, Thanks for Sticking Around!

So this was definitely a long radio silence! Since I blogged last time, a lot has happened.

I’ve quit my PostDoc job and joined Zalando, a big (the biggest?) European fashion retailer (revenue in 2015: about three billion euros). I’m a “delivery lead”, kind of a technical lead role, for two teams. One runs the recommendation service for all of Zalando, the other is building a new search backend service. It definitely is a management position, so I don’t code much (except sometimes), but I find leadership extremely interesting, and challenging, and Zalando with its agile culture seems like the perfect place for me to be right now.

One of my last projects at the university was a lecture series on Scalable Machine Learning. I had originally planned to spend about the same amount of time on Big Data technology as on the theoretical underpinnings, but after interventions from other professors who were concerned about students earning “double credits”, it became the most mathematical piece of teaching I have ever done. It was quite an experience. I had planned to prepare a few weeks’ worth of material in advance, but in the end I was often spending all of Tuesday and Wednesday churning out the 40-50 slides per week for the lecture on Thursday.

I attended StrataHadoop in London last year, where Ben Lorica talked me into doing a video on Scalable Machine Learning for O’Reilly (finally being allowed to talk about the technical side). We recorded the video in Amsterdam during OSCON in October, in a single day (also quite an experience). It is geared at people who already have a good understanding of Data Science and ML but are not yet familiar with large-scale learning or Big Data technology, and it is intended as a starting point into technologies like Spark.

So my interests have shifted a bit, away from pure machine learning towards topics like facilitating collaboration between data scientists and engineers. Yesterday I gave a talk titled Hardcore Data Science in Practice to present my current state of insight into the matter, and I put the slides on the Internet. Zalando has invested heavily in data scientists, and it’s very interesting to see how that works out day to day. A friend of mine remarked that I’m the only person he knows who uses the term “Data Scientist” unironically. ;)

I also greatly enjoy working with developers on a daily basis. After all, I’m a computer scientist by training, but having worked for so long in machine learning, where people come from many backgrounds like physics, mathematics, and so on, I had almost forgotten about this.

One blog post that just didn’t happen was a lengthy treatment of my reasons for leaving academia. Personally it makes total sense to me now, and I think there is a lot left to improve in academia, but I don’t see the point in talking about it right now.

There is obviously a lot happening right now in the hypespace: Internet of Things, chat bots, and suddenly AI is back, and there are a few things coming up that I just have to say about that, too ;)

So thanks for sticking around, hopefully the next post will not take another year to write.