When machine learning meets complexity: why Bayesian deep learning is unavoidable

Tags: ai, bayesian, deeplearning, news

Most of you have probably followed deep learning research for quite a while now. In 1998, LeCun et al. introduced the comparatively simple MNIST data set of handwritten digits and showed with their LeNet-5 that a neural network can achieve high validation accuracy on it. The data sets proposed since then have become more complex (e.g., ImageNet or Atari games), and the models performing well on them have consequently become more sophisticated, i.e. more complex, as well.

Simultaneously, the tasks these models can perform have also become more complex; consider, e.g., Goodfellow et al.'s GANs (2014) or Kingma & Welling's VAEs (2014).

One of my personal highlights is Eslami et al.'s Neural scene representation and rendering (2018), which clearly shows that today's neural networks can perform fairly complex tasks. I see a clear upward trend in the complexity of tasks neural networks can perform, and I do not believe this trend will stop or slow down any time soon.
