Book Review: Deep Learning for Coders with fastai and PyTorch

For a few years now, the team from fast.ai has been providing free deep learning education on their website. Their video course promises a hands-on approach that aims to de-mystify the technologies of modern deep learning. With the book “Deep Learning for Coders with fastai and PyTorch”, they bring these teaching principles to the written format, either as a printed book from O’Reilly or for free on GitHub.

Deep Learning for Coders with fastai & PyTorch: An excellent guide to deep learning for everyone who learns best when reading and writing actual source code.

Before I talk about the book, some context: fast.ai is the name of a website with a video course of the same name. The course is taught using a Python library (called fastai, no dot) which is built on top of PyTorch, the popular deep learning framework. Nomenclature can be confusing. I’ll try to be specific and reference “the fast.ai team” or “the fastai library” in this review of the book.

Teaching structure: Top-down, then bottom-up

The authors are very vocal about their teaching principles: The goal is to “teach the whole game” while skipping the often demotivating mathematical principles at the beginning.

Instead, the first example gives you all instructions needed to train a state-of-the-art image classification model from scratch.

Then, the book progresses deeper into the technical and mathematical foundations, which they use to build up a (simple) version of their fastai library from scratch.

I’m torn: While this structure lowers the barrier of entry, it also makes for a repetitive experience: You encounter the same example many times, just at different levels of abstraction.

What’s in the book

The book covers a wide range of deep learning topics at different levels of depth.

You see practical examples across the main application areas of deep learning: computer vision, natural language processing, tabular modeling, and collaborative filtering.

All examples are presented with full code listings and everything in the book invites you to go and try things out yourself.

The book presents a wide collection of deep learning techniques that help get training runs working properly in practice.

From core concepts like gradient descent to advanced architecture constructs: The book comes with many helpful and well-designed illustrations to digest the information.

Popular deep learning architectures are explained, including ResNet, LSTMs, and U-Nets. With a mix of code, visualization, and (some) maths, the authors do a good job of conveying the core ideas of important architectures.

The authors don’t stop at the technical explanations but stress that it’s important to think further. Deep learning is a powerful tool, and one that should be used responsibly. Yes, the technical implementer does in fact have a responsibility to consider fairness criteria and to ask the question “should we even do this at all?”

What I liked

The book is packed with code examples. I personally learn best when implementing something by hand and seeing how an abstract idea translates to actual source code, so this really matched my learning style.

Code examples everywhere. Great for everyone who learns best in this format, myself included.

The language of the text is also very easy to digest. You can tell that the fast.ai team wants to teach a little differently and is genuinely excited about the topic. The text is mixed with personal anecdotes and examples of Twitter conversations to create a sense of community around the otherwise technical topic.

The collection of the latest deep learning techniques and condensed experience is immensely valuable: You learn what a proper training process looks like, which techniques you can use to improve training, and how to investigate when a training run misbehaves.

What I didn’t like

My biggest gripe with the fast.ai material is their Python coding style: Everything has to be an abbreviation, apparently. I don’t know why you call a parameter ni when it could just as well be called num_inputs. If the goal is to “reduce jargon”, using explicit naming in the code would be part of that, if you ask me.

Secondly, the teaching principle of “top-down, then bottom-up” has its quirks: You repeat the same example over and over again, just on different levels of abstraction. When I want to look up “the chapter on convolutional neural networks”, it’s not one chapter I have to browse, but 4 or 5. This may make for a good didactic progression but feels quite repetitive at times.

Who should read the book

The name and subtitle of the book capture it quite well: The code-centric approach of learning (and trying out) deep learning lends itself to people who self-identify as “coders” and not so much to academic scholars who want the theory laid out first.

Still, it shows that this book originated in a course. The material will stick if you really follow along and try things for yourself. If you don’t, and you’re completely new to deep learning, it will be hard to keep track of which level of abstraction each chapter operates at.

I actually found the book very helpful for myself, because it helped me understand how to use the latest deep learning techniques such as the learning rate finder, 1-cycle training, label smoothing, and mixup augmentation. Having worked with deep learning for a while, I still learned quite a few new methods and gained a deeper understanding of concepts I already knew.
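To illustrate one of these techniques: 1-cycle training ramps the learning rate up and then anneals it back down over the course of training. A rough, self-contained sketch (my own simplification, not fastai's exact implementation):

```python
import math

def cos_anneal(start, end, pct):
    """Cosine interpolation from `start` (pct=0) to `end` (pct=1)."""
    return end + (start - end) / 2 * (1 + math.cos(math.pi * pct))

def one_cycle(step, total, max_lr=0.01, div=25.0, final_div=1e4, warm=0.25):
    """Sketch of a 1-cycle schedule: ramp up for the first `warm` fraction
    of training, then anneal down. Names and defaults are my simplification."""
    warm_steps = int(total * warm)
    if step < warm_steps:
        return cos_anneal(max_lr / div, max_lr, step / warm_steps)
    return cos_anneal(max_lr, max_lr / final_div,
                      (step - warm_steps) / (total - warm_steps))

# Learning rate at the start, at the peak, and near the end of training
print(one_cycle(0, 100), one_cycle(25, 100), one_cycle(99, 100))
```

The high peak learning rate acts as a regularizer, while the low start and end keep training stable.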

Summary

Overall, I really liked the book. The authors did a great job of covering a wide range of deep learning applications while showing both easy-to-use black-box examples and the deepest internals of that black box. This helps de-mystify the AI hype and teaches helpful hands-on skills.

They share a lot of expert advice on how to set up training procedures properly, and I actually agree with their claim: Those who really complete this material have a great starting point for working in the field of deep learning.

The didactic style may not be for everyone and I personally hope the fastai coding style doesn’t stick, but I am grateful for the fast.ai team’s contribution: Making deep learning accessible for anyone who is interested.

While you can read the book for free on GitHub, I personally really enjoyed the printed copy: The print is high-quality, and it forces you to type out examples yourself rather than copy-pasting everything.

Understanding your cat’s meows using a neural network

“Meow” — I’m sorry? “Meow!” — Oh, right! Here you go.

What if I could understand exactly what my cat is trying to tell me? We live in 2021, which is basically the future. How hard can it be?

What’s on your mind, little Loki? With the power of neural networks, maybe soon I’ll know.

A dataset of meows

A group of dedicated researchers from northern Italy has recently released a public dataset of cat vocalizations (let’s call them “meows”). 21 cats from two different breeds were exposed to three different situations while a microphone was listening:

  1. Brushing: The owner brushed the cat in a familiar environment.
  2. Isolation: The cat was placed in an unfamiliar environment for a few minutes.
  3. Food: The cat was waiting for food.

In total, the dataset comprises 440 audio files.

Dataset statistics

The dataset is not evenly split between those three situations.

Number of recordings per situation

Neither is it evenly split between cat breeds or the sex of the cat.

Number of recordings per breed
Number of recordings per sex of the cat

In fact, some cats occur way more often in the recordings than others. I don’t know why. Maybe “CAN01” is just very talkative whereas “NIG01” prefers to keep to himself?

Number of recordings per individual cat. “CAN01” appears most often and “NIG01” least often in the data.

Looking at these distributions is important. When we train a neural network to classify a given voice recording, we want to make sure it performs better than simply guessing the most frequent label.

For example, always guessing “female” when asked for the cat’s sex would be correct in 78% of cases because there are 345 recordings of female cats and only 95 recordings of male cats.

Any classifier that is supposed to be useful has to surpass this baseline of “informed” guessing.
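This baseline is simply the relative frequency of the most frequent label. A quick sketch of the computation, using the sex counts from above:

```python
from collections import Counter

# Label counts for the sex of the cat, taken from the dataset statistics above
labels = ["female"] * 345 + ["male"] * 95

most_frequent, count = Counter(labels).most_common(1)[0]
baseline_accuracy = count / len(labels)
print(most_frequent, round(baseline_accuracy, 3))  # female 0.784
```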

| Feature | Most frequent label | Absolute count | Relative count = baseline accuracy |
| --- | --- | --- | --- |
| Situation (3 classes) | isolation | 221 of 440 recordings | 50.2 % |
| Sex (2 classes) | female | 345 of 440 recordings | 78.4 % |
| Breed (2 classes) | european_shorthair | 225 of 440 recordings | 51.1 % |

Table that lists the most frequent label per feature. The numbers highlight which baseline accuracy a model has to achieve to be better than guessing.

Now we have an idea of what our data distributions look like. In total, there are three interesting tasks we can have a model learn from the data: (1) What situation was the cat in, (2) what is the sex of the cat, and (3) what is the breed of the cat. It will be interesting to see if these tasks can be learned from the data at all. Let’s start preparing our data to train a model.

Turning audio into images

There are many ways to encode an audio signal before passing it into a neural network. For my project, I am choosing a visual approach: We plot the spectrogram of the audio recordings as an image.

This allows us to use well-established neural networks from the field of computer vision. Also, spectrograms look nice.

A spectrogram is a plot in which the position in the image represents a given frequency at a given point in time in the audio file. The brightness of a pixel represents the intensity of the audio signal.

The following example shows one of the recordings as a spectrogram. The time axis runs vertically from the top (zero) to the bottom. The x-axis denotes the frequencies.

We turn our audio recordings into images by drawing their spectrogram
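A minimal sketch of how such a spectrogram can be computed with a sliding-window FFT (in practice a library like librosa or torchaudio does this, usually with a log or mel scale on top; the parameter values here are my own choice):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Minimal magnitude spectrogram via a sliding-window FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # One row per point in time, one column per frequency bin
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# A one-second 440 Hz test tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (61, 129): 61 time steps, 129 frequency bins
```

Rendering the resulting 2D array with a colormap gives an image like the one above.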

Image classification using a pretrained ResNet

Having turned our audio classification task into an image classification task, we can start with our model training. We are going to train three models for three different tasks:

  1. Given a spectrogram image, classify the situation the cat was in.
  2. Given a spectrogram image, classify the sex of the cat.
  3. Given a spectrogram image, classify the breed of the cat.

In the past few weeks, I have been playing around with the fastai library, which provides convenient wrappers around the PyTorch framework, so I decided to use it for this project.

As in most deep learning frameworks, it is easy to re-use popular computer vision architectures in fastai. With one(-ish) line of Python, you have a capable neural network for image classification at your fingertips. It comes pre-trained so that you need fewer images for your task at hand.

from fastai.vision.all import *

# n_classes is the number of target labels for the task at hand
model = create_cnn_model(
    models.resnet18,
    n_classes,
    pretrained=True)

ResNets are a popular neural network architecture from 2015 that introduced residual connections – a mechanism that improves training behavior and allows the training of (very) deep networks.
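The core idea of a residual connection can be sketched in a few lines: the block adds its input back onto the transformed signal. A toy dense version (real ResNet blocks use convolutions and batch norm):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two transformations plus a skip connection that adds the input back.
    The dense weights w1, w2 stand in for the conv layers of a real ResNet."""
    out = w2 @ relu(w1 @ x)
    return relu(out + x)  # the residual (skip) connection

x = np.array([1.0, 2.0, 3.0])
zeros = np.zeros((3, 3))
# With untrained all-zero weights the block reduces to the identity,
# which is what makes very deep stacks of such blocks easy to optimize.
print(residual_block(x, zeros, zeros))  # [1. 2. 3.]
```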

The CatMeows dataset is quite small, so I settled for the smallest ResNet flavor (called ResNet-18). It has “only” 18 layers, and even that is oversized for my 440 images.

The ResNet implementation wants square images as its input, so I took random square crops from the spectrograms during training. The crops were 81 x 81 pixels in size and could come from different points in time of the recording, while always covering the full frequency range.

The pre-processed images as they go into the neural network. Here we are comparing recordings of female cats with male cats. Do you see a clear difference? I admit that I don’t.
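The cropping step itself is simple. A sketch of a random crop along the time axis (assuming time on axis 0 and frequency on axis 1, as in the plots above):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_time_crop(spec, size=81):
    """Square crop at a random point in time; assumes time on axis 0
    and at least `size` frequency bins on axis 1."""
    start = rng.integers(0, spec.shape[0] - size + 1)
    return spec[start:start + size, :size]

spec = np.zeros((200, 81))  # 200 time steps, 81 frequency bins
crop = random_time_crop(spec)
print(crop.shape)  # (81, 81)
```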

Splitting the data for training and validation

When training a classifier it is important not to show all of your data to the model during training. You want to hold out some samples for validating the classifier during the training process. That way you get an idea if the model learns the training data by heart or if it actually learns something useful.

Sometimes it is fine to take a random percentage of the dataset as the validation set. In this case, I wanted to separate the cats across train and validation split so that the model can’t cheat by memorizing the characteristics of an individual cat.

I took 4 individual cats out of the training data. Their recordings combined made up 66 samples of the dataset, which means 15% of the data was reserved for validation and only the remaining 85% were used for training.
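A grouped split like this boils down to filtering by cat ID. A sketch with hypothetical file names (the real dataset encodes IDs in its own format):

```python
# Hypothetical file names; the real dataset encodes cat IDs in its own format
files = ["CAN01_brush_01.wav", "CAN01_food_02.wav",
         "NIG01_isolation_01.wav", "TOM02_food_01.wav"]

validation_cats = {"NIG01"}  # hold out whole cats, not random recordings

def cat_id(filename):
    return filename.split("_")[0]

train = [f for f in files if cat_id(f) not in validation_cats]
valid = [f for f in files if cat_id(f) in validation_cats]
print(len(train), len(valid))  # 3 1
```

Because every recording of a held-out cat lands in the validation set, the model can't score points by recognizing an individual voice.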

The results

For the three different tasks, the three models I trained achieved the following accuracy scores.

| Task | Classification accuracy | Guessing baseline (see above) |
| --- | --- | --- |
| Situation | 63.6 % | 50.2 % |
| Sex | 90.9 % | 78.4 % |
| Breed | 93.9 % | 51.1 % |

Results: The accuracy scores of the three task-specific models. For easy comparison, I also list the guessing baseline as described above.
Results plot: Achieved model accuracy (blue) versus guessing baseline (grey).

Across all three tasks, the models performed well above the guessing baseline we determined earlier.

Let’s also take a look at the confusion matrix for each task. A confusion matrix plots each sample of the validation set and indicates how many were classified correctly and which errors were made.

Confusion matrix that shows how well the classification of the situation worked. Some uncertainty shows: 10 samples are incorrectly classified as “waiting for food”, for example.
Confusion matrix of the task to classify the sex of the cat in the recording. 60 out of 66 were classified correctly. Not bad, I think.
Confusion matrix that shows how well the breed was classified. 62 out of 66 samples were classified correctly. I would not have expected this to work at all, to be honest.
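For reference, a confusion matrix is just a table of counts over (true label, predicted label) pairs. A minimal implementation:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows correspond to the true label, columns to the predicted label."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels = ["female", "male"]
y_true = ["female", "female", "male", "male", "female"]
y_pred = ["female", "male", "male", "male", "female"]
print(confusion_matrix(y_true, y_pred, labels))  # [[2, 1], [0, 2]]
```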

What to make of this

First of all, these are quick results. We haven’t built a super AI that understands every single cat in the world. (Yet.)

What these results mostly show are interesting aspects of the dataset: Most of all, I was surprised how well the sex and breed can be told apart by the model. As I made sure to separate individual cats across train and validation data, I do have some confidence that the model didn’t cheat. There may still be some information leakage that I’m not aware of, of course.

“Food”, “brush” and “isolation”. I’m afraid we’ll need a little more vocabulary so that Ginny can adequately explain to me the difficult situation of the Hamburg real estate market. “One room? Fine by me. But I think they tricked me on the square footage on this one”

What to improve

This is a small dataset. ResNet-18 is a big network. This mix can cause problems.

In my case, I am using a pre-trained version of ResNet, so the convolutional features don’t have to be learned from scratch. Still, I found myself re-running the training multiple times with varying success. I think with such little data it is still easy for the model to run into a local optimum and overfit on the training data.

Ideas for improvement:

Try freezing different layers and sets of layers of the network. With such a tiny amount of data, we wouldn’t want to destroy the pre-trained features by accident. At the same time, spectrograms are not natural images, so fine-tuning probably makes sense.
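Conceptually, freezing means marking earlier layers as not trainable so that only the head is updated. A toy sketch of the bookkeeping (real frameworks do this by setting requires_grad on parameters, e.g. learn.freeze() in fastai):

```python
# Toy bookkeeping sketch; frameworks implement freezing by setting
# requires_grad on parameters (e.g. learn.freeze() in fastai)
layers = [{"name": f"layer{i}", "trainable": True} for i in range(4)]

def freeze_to(layers, n):
    """Freeze all layers before index n; the head stays trainable."""
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= n

freeze_to(layers, 3)  # keep only the final layer trainable
print([layer["trainable"] for layer in layers])  # [False, False, False, True]
```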

Some additional data augmentation would surely help to enrich the training data. As these are not natural images but visualizations of an audio signal, I think some augmentation operations make sense (cropping at different points in time, jittering contrast and brightness to simulate volume fluctuations), while others are more questionable (perspective transformations, cropping different frequency bands). I haven’t tried them so far, but they could very well improve the results.
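As an example of the augmentations that do make sense, contrast and brightness jitter on a normalized spectrogram might look like this (the value ranges are my own guesses):

```python
import numpy as np

rng = np.random.default_rng(1)

def jitter(spec, max_gain=0.2, max_shift=0.1):
    """Random contrast (gain) and brightness (offset) jitter to simulate
    volume fluctuations; the value ranges are my own guesses."""
    gain = 1.0 + rng.uniform(-max_gain, max_gain)
    shift = rng.uniform(-max_shift, max_shift)
    return np.clip(spec * gain + shift, 0.0, 1.0)

spec = rng.random((81, 81))  # a normalized spectrogram crop
augmented = jitter(spec)
print(augmented.shape)  # (81, 81)
```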

To learn more about the data, it would be interesting to extract quantitative audio characteristics and train a logistic regression or random forest on the data. These models are easier to interpret and could help to understand if the models look at something meaningful in the data or if there is some data leakage that allows the models to cheat.
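Such quantitative characteristics could be as simple as duration, energy, and spectral centroid. A sketch of the feature extraction (the feature choice is my own; the resulting values would then go into scikit-learn's LogisticRegression or RandomForestClassifier):

```python
import numpy as np

def audio_features(signal, sr):
    """A few simple quantitative features; the selection is my own choice."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return {
        "duration_s": len(signal) / sr,
        "rms_energy": float(np.sqrt(np.mean(signal ** 2))),
        "spectral_centroid_hz": float((freqs * spectrum).sum() / spectrum.sum()),
    }

# A one-second 440 Hz test tone: the spectral centroid should sit near 440 Hz
sr = 8000
t = np.arange(sr) / sr
feats = audio_features(np.sin(2 * np.pi * 440 * t), sr)
print(round(feats["spectral_centroid_hz"]))  # 440
```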

Conclusion

Playing with public datasets is fun! You should try it.

I may continue with this pet project (pet! get it?) or start something fresh with the next dataset that looks interesting.

If you’ve found an issue in my data or training setup, please let me know.

You can find the complete project code in a messy Jupyter notebook on GitHub.


Turns out I don’t need a neural network to let me know: Ginny is waiting for food.