Insightful and effective: “Deep Learning Specialization” by Andrew Ng on Coursera

Posted by Jaidev Deshpande on Mar 8, 2018 in Insights, Thoughts

Not enough can be said about the contribution of Coursera and Andrew Ng to machine learning education. Coursera itself began with a machine learning MOOC in 2012, and that course was, as far as I can tell, the first time machine learning education was made freely and publicly available.

Such was the excitement surrounding this MOOC that Facebook groups, mailing lists, and meetups popped up all over the world during the course. I was briefly a member of such a group in Pune, consisting chiefly of data science enthusiasts from academia and industry alike. At the time, I had already gained significant exposure to data science: I had taken a couple of college courses on neural networks and soft computing, and I was using scikit-learn at work. This was only six years ago, but machine learning was not as popular then as it is now. It was still quite arcane, and largely the domain of overqualified academics. The mathematics that forms the foundation of statistical learning and analysis was not as widely accessible as it is today.

It was not easy to explain what the models that did work well had learned, and it was not easy to debug a model that did not. Deep learning was still just called neural networks then, and it was random forests that were all the rage. Of course, it was sufficiently established that neural networks would outperform most other models in computer vision, but not everyone was solving computer vision problems, and GPUs were still not cheap enough. Together, these factors had created an impostor syndrome among ML amateurs. It is still there among many of us, but today we are a lot more hopeful of getting rid of it.

This is exactly what Andrew Ng and Coursera have enabled us to do. It was perhaps the first time that a large number of novices gained the confidence to realize that they could train a logistic regression model with gradient descent, armed with little more than elementary differential calculus and linear algebra. And these elementary methods were the very stepping stones towards taming bigger monsters like backpropagation and kernel methods. If I were a romantic, I would say that this was a bit like Prometheus stealing fire from the gods and giving it to mortals. When the deep learning specialization was announced last year, I hoped they would do with deep learning what they had earlier so effectively done with machine learning. And boy, have they delivered!
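
To make that concrete, here is a minimal NumPy sketch of exactly that exercise: logistic regression trained with batch gradient descent. This is my own toy illustration, not course material, and the function names, data, and hyperparameters are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """Fit weights w and bias b by batch gradient descent.

    X: (m, n) feature matrix; y: (m,) labels in {0, 1}.
    """
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)       # forward pass: predicted probabilities
        dw = X.T @ (p - y) / m       # gradient of cross-entropy loss w.r.t. w
        db = np.mean(p - y)          # gradient w.r.t. b
        w -= lr * dw                 # gradient descent update
        b -= lr * db
    return w, b

# Toy usage: learn a linearly separable rule.
X = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 0.5], [3.0, 1.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logistic_regression(X, y)
print(sigmoid(X @ w + b).round(2))
```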

I finished the specialization, consisting of five courses, over a period of six months. Since this was the first run of the specialization, there was some delay in rolling out the material, especially before the fourth and fifth courses. But now that all the courses are out, new participants will be able to finish the specialization, or the individual courses, as quickly as they like. The courses consist of video lectures, multiple-choice quizzes, and programming assignments. The programming assignments are in Python and use TensorFlow and Keras for implementing neural network abstractions. (The earlier 2012 course was in MATLAB. This shows how far deep learning software has come — the state of the art in deep learning can now be implemented using fully free and open source software.) The first three courses also feature a special segment called Heroes of Deep Learning, in which Andrew interviews some of the stalwarts of deep learning (Geoffrey Hinton, Yoshua Bengio, Andrej Karpathy and Ian Goodfellow, to name a few).

The five courses offered in the specialization are as follows:

  1. Neural Networks and Deep Learning — While this course does not shove you into deep networks right away, having a basic understanding of shallow neural networks definitely helps speed things up (conventionally, any network with more than one hidden layer qualifies as a deep network). This course is mostly dedicated to revising shallow networks and generalizing the same techniques to deeper networks. The most novel part of the course, relatively speaking, is Andrew’s explanation of using computational graphs to obtain gradients. Automating these computations is what allows frameworks like TensorFlow and PyTorch to perform gradient descent on arbitrarily complex networks and loss functions. In the programming exercises, we learn how to implement the basic building blocks of a neural network (with NumPy and TensorFlow), and a simple but effective neural network that can distinguish cats from dogs with roughly 80% accuracy (a sketch of one such building block appears after this list).
  2. Improving Deep Neural Networks — This is easily the most important of the five courses. Building deep networks by stacking up layers is easy (thanks to wonderful tools like Keras), but effectively training networks (making sure they don’t overfit, that they converge within a reasonable time, and even guessing whether they will converge at all) is where the art of deep learning lies. This course deals with regularization, configuring optimizers or solvers, and tuning hyperparameters. These things end up being disproportionately more relevant in practice than the size of your network or the capacity of your GPU. The programming exercises show you the effects of regularization (and of the lack of it), the effects of different optimization algorithms and what they physically do to the loss function, and strategies for tuning hyperparameters (a Keras sketch combining these levers follows this list).
  3. Structuring Machine Learning Projects — This course deals with the overall design and execution of machine learning projects. You learn how to put together an end-to-end project: what to keep in mind when acquiring data, deciding on a single evaluation metric, running experiments, and drawing conclusions from them. This course has no programming assignments, as it is mostly a conceptual course about putting together everything you have learned in the previous courses.
  4. Convolutional Neural Networks — This is the course where you start learning about computer vision problems in earnest. The state of the art in computer vision rests more or less solely on convolutional neural networks (CNNs). Broadly, the key to solving many computer vision problems is finding a tradeoff between the context of an object (in an image or a video) and the detail of that object. The convolution operation captures this information effectively, and by piling up a bunch of these operations (convolution layers), you can build a network that learns to capture information at different levels of abstraction. Starting with motivating examples like edge detection, this course explores a number of popular convolutional models and builds up towards object detection (is there a cat in the image?) and localization (where is the cat in the image?). This is not an easy course if you have no prior experience with CNNs. There is a lot of content here about popular CNN architectures, from the seminal LeNet to more complex architectures like Inception Net (yes, it is named after the movie!). The programming assignments are more challenging than in the previous courses, but correspondingly more rewarding as well. I have been able to apply what I learned in this course to a few object detection problems with reasonable success (a toy CNN sketch follows this list).
  5. Sequence Models — The best starting place for RNNs (one kind of sequence model) is Karpathy’s blog post, The Unreasonable Effectiveness of Recurrent Neural Networks, in which he explains how versatile RNNs are in the kinds of information they can process. Because of this versatility, their underlying algorithms are naturally more complicated than those of other kinds of neural networks. Of all types of neural networks, I am most intimidated by sequence models. Perhaps a lot of other people are too, and that is why this course is offered last. However, the video lectures, along with the programming assignments, do soften the blow considerably. The programming assignments in this course are more important than in any other course in the specialization: it is very easy to get confused by the implementation details of sequence models, and the exercises help you understand things at a lower level. The course covers popular variations of RNNs like LSTMs and bidirectional RNNs (a toy sequence-model sketch follows this list). The second week is almost exclusively dedicated to word embeddings, and the third and last week deals with language models and machine translation. The course itself is great, but given the richness of content in sequence models, I feel they wrapped it up too quickly.
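
For course 1, here is a minimal sketch of the kind of building block the exercises have you implement: a single dense layer with a forward pass and a backward pass derived from the computational-graph view. This is my own NumPy illustration, not course code, and the one-example-per-column layout is an assumption.

```python
import numpy as np

# A dense-layer building block (my own illustration, not course code).
def dense_forward(A_prev, W, b):
    Z = W @ A_prev + b            # linear step
    A = np.maximum(0, Z)          # ReLU activation
    return A, (A_prev, W, Z)      # cache what the backward pass needs

def dense_backward(dA, cache):
    A_prev, W, Z = cache
    m = A_prev.shape[1]           # number of examples (one per column)
    dZ = dA * (Z > 0)             # chain rule through the ReLU node
    dW = dZ @ A_prev.T / m        # gradient w.r.t. the weights
    db = dZ.sum(axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ            # gradient flowing to the previous layer
    return dA_prev, dW, db
```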
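
For course 2, a minimal Keras sketch that combines the levers discussed there: L2 weight penalties, dropout, and the Adam optimizer. The layer sizes, input shape, and hyperparameters are placeholders of my own, not values from the assignments.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-3)),  # L2 penalty
    layers.Dropout(0.5),                                     # dropout regularization
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
```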
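
For course 4, a toy convolutional classifier in Keras: stacked convolution and pooling layers capture increasingly abstract features before a dense head makes the final call. Again, the shapes and sizes are illustrative placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(64, 64, 3)),       # low-level features, e.g. edges
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"), # higher-level features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # e.g. "is there a cat?"
])
```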
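
And for course 5, a toy sequence classifier: an embedding layer feeds a bidirectional LSTM, two of the ingredients the course covers. The vocabulary size and dimensions are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=50),  # word embeddings
    layers.Bidirectional(layers.LSTM(64)),             # bidirectional LSTM encoder
    layers.Dense(1, activation="sigmoid"),             # e.g. a sentiment label
])
```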

In summary, the specialization is well worth its cost in time and money (I thank my employers for generously sponsoring part of it). Andrew Ng is a fantastic teacher who whets your appetite and explains complex ideas very effectively. However, I do not believe the certification alone makes you more eligible to be hired as a deep learning specialist. There is a significant gap between finishing the specialization and being able to execute deep learning projects in practice. The certification does equip you with the tools required to bridge that gap, but by itself it simply implies that you know what you are talking about when it comes to deep learning. This is understandable, since a MOOC cannot guarantee that level of competence in its students.

A MOOC is simply not yet a mature enough medium to ensure that its students are unambiguously well versed in both the content and its application. By their very nature, MOOCs cannot test their students thoroughly enough. But given how far they have come (the evolution from the original ML class in 2012 to the deep learning specialization in 2017 being a case in point), I am very hopeful that before long MOOCs will transcend this barrier too and become a veritable substitute for university courses.

Finally, let us not forget that the best teacher is experience. There is nothing more attractive to employers than a demonstration of what you have learned and how you have applied it. Should you wish to ask me questions about the course, please mention them in the comments section below and I shall respond to them.

Jaidev Deshpande

Data Scientist and Software Developer. Jaidev specialises in building end-to-end machine learning applications to automate business processes.
