There are lots of awesome reading lists and posts that summarize materials related to Deep Learning. So why would I commit another one? Well, the primary objective is to develop a complete reading list that allows readers to build a solid academic and practical background in Deep Learning. This list was developed while I was preparing my Deep Learning workshop. My research is related to Deep Neural Networks (DNNs) in general, so this post tends to summarize contributions in DNNs rather than generative models.

For Novices

If you have no idea about Machine Learning and Scientific Computing, I suggest you learn the following materials while you are reading Machine Learning or Deep Learning books. You don’t have to master these materials, but a basic understanding is important. It’s hard to have a meaningful conversation if the other person has no idea about matrices or single-variable calculus.

Theory of Computation, Learning Theory, Neuroscience, etc.

Fundamentals of Deep Learning

Tutorials, Practical Guides, and Useful Software

Literature in Deep Learning and Feature Learning

Deep Learning is a fast-moving community, so the line between “Recent Advances” and “Literature that matters” is somewhat blurred. Here I have collected articles that either introduce fundamental algorithms and techniques or are highly cited by the community.

Recent Must-Read Advances in Deep Learning

Most of the papers here were published in 2014 or later. Survey and review papers are not included.

Datasets

  • Caltech 101 by L. Fei-Fei, R. Fergus and P. Perona.

  • Caltech 256 by G. Griffin, A. Holub, P. Perona.

  • CIFAR-10 by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

  • CIFAR-100 by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

  • The Comprehensive Cars (CompCars) dataset by Linjie Yang, Ping Luo, Chen Change Loy, Xiaoou Tang.

  • Flickr30k by Peter Young, Alice Lai, Micah Hodosh, Julia Hockenmaier.

  • ImageNet by Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei.

  • Microsoft COCO by Microsoft Research.

  • MNIST by Yann LeCun, Corinna Cortes, Christopher J.C. Burges.

  • Places by MIT Computer Science and Artificial Intelligence Laboratory.

  • STL-10 by Adam Coates, Honglak Lee, Andrew Y. Ng.

  • SVHN by Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng.

  • TGIF by Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, Jiebo Luo.

  • Visual Perception of Forest Trails by IDSIA, USI/SUPSI and Robotics and Perception Group, UZH.

  • WWW Crowd Dataset by Jing Shao, Kai Kang, Chen Change Loy, and Xiaogang Wang.
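
Most of these datasets ship in simple, well-documented formats. As a quick example, the Python version of CIFAR-10 is distributed as pickled batch files; below is a minimal loading sketch (the `cifar-10-batches-py` path assumes you extracted the official archive into the working directory):

```python
import pickle
import numpy as np

def load_cifar10_batch(path):
    """Load one batch file from the Python version of CIFAR-10.

    Each batch is a pickled dict holding 10000 images as a
    (10000, 3072) uint8 array under 'data' and their class
    indices under 'labels'.
    """
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')  # keys are bytes in Python 3
    data = batch[b'data'].reshape(-1, 3, 32, 32)  # N x C x H x W
    labels = np.asarray(batch[b'labels'])
    return data, labels

images, labels = load_cifar10_batch('cifar-10-batches-py/data_batch_1')
print(images.shape, labels.shape)  # (10000, 3, 32, 32) (10000,)
```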

Podcasts, Talks, etc.

Amazon Web Service Public AMI for Deep Learning

I configured two types of GPU instances available on AWS and installed the necessary software for Deep Learning practice. The first one is DGYDLGPUv4, a machine that provides an 8-core CPU, 15GB RAM, a 500GB SSD, and 1 NVIDIA GRID K520 GPU; you can use it to learn Deep Learning or to conduct normal-sized experiments. If you need more computing resources, you can choose DGYDLGPUXv1. This newly released GPU instance offers a 32-core CPU, 60GB RAM, a 500GB SSD, and 4 NVIDIA GRID K520 GPUs.

The NVIDIA driver, CUDA Toolkit 7.0, cuDNN v2, Anaconda, and Theano are preinstalled.

Currently they are only available in the Asia Pacific (Singapore) region. You can copy the AMI to a region closer to you.

If you are doing analysis or experiments, I suggest you request a spot instance instead of an on-demand instance; this can save you a lot of money (see the launch sketch after the list below).

  • DGYDLGPUv4 (ami-ba516ee8) [Based on g2.2xlarge]
  • DGYDLGPUXv1 (ami-52516e00) [Based on g2.8xlarge]
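
If you prefer scripting over the web console, you can launch these AMIs programmatically. Here is a minimal sketch using boto3 that places a spot request for the smaller instance; the key pair name and price cap are placeholders you should adjust to your own account and budget:

```python
import boto3

# The AMIs are registered in Asia Pacific (Singapore).
ec2 = boto3.client('ec2', region_name='ap-southeast-1')

# Spot request for the g2.2xlarge-based AMI. 'my-key-pair' and
# the price cap are placeholders -- replace with your own values.
response = ec2.request_spot_instances(
    SpotPrice='0.30',                 # max price in USD/hour
    InstanceCount=1,
    LaunchSpecification={
        'ImageId': 'ami-ba516ee8',    # DGYDLGPUv4
        'InstanceType': 'g2.2xlarge',
        'KeyName': 'my-key-pair',
    },
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])
```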

Currently, my build of Caffe on this instance fails, so you can use the AMI provided by the Caffe community instead. You can get more details here.

So far this instance is only available in US East (N. Virginia).

  • Caffe/CuDNN built 2015-05-04 (ami-763a331e) [For both g2.2xlarge and g2.8xlarge]

Practical Deep Neural Networks - GPU computing perspective

The following entries are materials I use in the workshop.

Slides

Practical tutorials

Code

  • Telauges
    • A new deep learning library for learning DL.
    • MLP Layers: Tanh Layer, Sigmoid Layer, Identity Layer, ReLU Layer
    • Softmax Regression
    • ConvNet layers: Tanh Layer, Sigmoid Layer, Identity Layer, ReLU Layer
    • Max-Pooling layer
    • Max-Pooling same size
    • Feedforward Model
    • Auto-Encoder Model
    • SGD, Adagrad, Adadelta, RMSprop, Adam (a vanilla SGD sketch follows this list)
    • Dropout
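
To make the optimizer entries concrete, here is a minimal NumPy sketch of the vanilla SGD update rule, theta <- theta - lr * grad. This is an illustration of the algorithm only, not the actual Telauges API:

```python
import numpy as np

def sgd_update(params, grads, lr=0.01):
    """Vanilla stochastic gradient descent: theta <- theta - lr * grad."""
    for p, g in zip(params, grads):
        p -= lr * g  # update each parameter array in place

# Toy example: one weight matrix and its gradient
W = np.random.randn(3, 3)
dW = np.ones((3, 3))
sgd_update([W], [dW], lr=0.1)
```

Adagrad, Adadelta, RMSprop, and Adam follow the same loop structure but keep running statistics of past gradients to scale the per-parameter step size.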