Dim Sum Classifier – from Data to App, Part 1

In a typical machine learning lifecycle, we need to acquire data, process it, train and validate/test models, and finally deploy the trained models in applications or services. In this first of two posts, inspired by fast.ai 2019 Lesson 2, we shall build a Dim Sum (a Cantonese bite-size style of cuisine with many yummy choices) classifier application by leveraging Google Images as a data source.

Due to the wide variety of choices, we shall focus on 5 common dim sum dishes: har gow (虾饺), siu mai (烧卖), char siu sou (叉烧酥), chee cheong fun (肠粉) and lo bak go (萝卜糕).

The image links are curated with gi2ds (Google Image Search to Dataset), a very convenient JavaScript snippet created by Christoffer Björkskog. Details can be found in his blog post here and on GitHub.

After using the tool, 5 text files, each containing 200 image links for one of the dishes, are prepared and saved in Google Drive for import into Google Colab, which we will use for processing and training.
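
As a quick sanity check before importing, we can peek at one of the exported files (a minimal sketch, assuming each file contains one image URL per line, and using urls_hargow200.txt as an example):

# Peek at one of the gi2ds-exported URL files (hypothetical local path)
from pathlib import Path

urls = Path('urls_hargow200.txt').read_text().strip().splitlines()
print(len(urls))   # expect roughly 200 links
print(urls[:3])    # one image URL per line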

Download and verify data from Google Images

We shall proceed to set up the Google Colab environment, import the files containing the image links from Google Drive, and download the images using the download_images function. The fastai library also provides a very handy verify_images function that checks for valid images and prunes files that cannot be used.

# Set up the fastai environment on Colab
!curl -s https://course.fast.ai/setup/colab | bash
Updating fastai...
Done.
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.vision import *
# Class folders and the matching URL files exported with gi2ds
dirs_dimsum = ['hargow', 'siumai', 'charsiusou', 'cheecheongfun', 'lobakgo']
files_dimsum = ['urls_hargow200.txt', 'urls_siumai200.txt',
                'urls_charsiusou200.txt', 'urls_cheecheongfun200.txt',
                'urls_lobakgo200.txt']
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = '/content/gdrive/My Drive/'
base_dir = root_dir + 'fastaiv3/'
from shutil import copy2
path = Path('data/dimsum')
# Create a folder per class and copy its URL file into it
for folder, file in zip(dirs_dimsum, files_dimsum):
    dest = path/folder
    dest.mkdir(parents=True, exist_ok=True)
    copy2(base_dir+'dimsum_class/'+file, dest)
path.ls()
[PosixPath('data/dimsum/charsiusou'),
 PosixPath('data/dimsum/siumai'),
 PosixPath('data/dimsum/hargow'),
 PosixPath('data/dimsum/lobakgo'),
 PosixPath('data/dimsum/cheecheongfun')]
path = Path('data/dimsum')
# Download up to 200 images per class into its folder
for folder, file in zip(dirs_dimsum, files_dimsum):
    download_images(path/folder/file, path/folder, max_pics=200)
classes = dirs_dimsum
# Delete unreadable files and resize images larger than max_size
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=200)
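
Optionally, we can check how many images per class survived the download and pruning (a small sketch; the suffix filter simply skips the URL text file sitting in each folder):

# Count the remaining images per class after verification
for c in classes:
    images = [f for f in (path/c).ls() if f.suffix.lower() in {'.jpg', '.jpeg', '.png'}]
    print(c, len(images))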

Training the model

Once we have our dataset, we use the ImageDataBunch.from_folder method to load the images from the class folders and preview a batch.

np.random.seed(42)
# Hold out 20% as validation, resize to 224px, normalize with ImageNet stats
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224,
                                  num_workers=4).normalize(imagenet_stats)
data.classes
['charsiusou', 'cheecheongfun', 'hargow', 'lobakgo', 'siumai']
data.show_batch(rows=3, figsize=(7,8))

From the batch, the images seem fine. There is one mislabeled char siu sou (叉烧酥), but we leave it for now, since a small amount of noisy data typically does not affect the model much. We will use transfer learning with a ResNet-34 model pre-trained on ImageNet.

data.classes, data.c, len(data.train_ds), len(data.valid_ds)
(['charsiusou', 'cheecheongfun', 'hargow', 'lobakgo', 'siumai'], 5, 738, 184)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
Downloading: "https://download.pytorch.org/models/resnet34-333f7ec4.pth" to /home/yoke/.cache/torch/checkpoints/resnet34-333f7ec4.pth
100%|██████████| 87306240/87306240 [00:03<00:00, 24127371.58it/s]
learn.fit_one_cycle(10)
epoch  train_loss  valid_loss  error_rate  time
0      1.892826    1.363298    0.516304    00:07
1      1.473497    0.831715    0.282609    00:07
2      1.158775    0.728580    0.244565    00:07
3      0.939784    0.627291    0.217391    00:07
4      0.785094    0.622544    0.211957    00:07
5      0.670714    0.568230    0.157609    00:07
6      0.598721    0.547424    0.163043    00:07
7      0.527969    0.552321    0.173913    00:07
8      0.476596    0.549852    0.179348    00:07
9      0.443554    0.550588    0.179348    00:07
learn.save('stage-1')

After 10 epochs, we have an error rate (1 − accuracy) of 17.9% and proceed to save the model.

Fine-tuning the model

We shall now fine-tune the model by unfreezing the pre-trained layers for training, and use the learning rate finder to pick a suitable learning rate range.

learn.unfreeze()   # make all pre-trained layers trainable
learn.lr_find()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn.recorder.plot()
# Discriminative learning rates: earlier layers at 5e-4, later layers at 1e-3
learn.fit_one_cycle(5, max_lr=slice(5e-4, 1e-3))
epoch  train_loss  valid_loss  error_rate  time
0      0.296187    0.695414    0.211957    00:07
1      0.390351    1.626430    0.309783    00:08
2      0.425384    2.619203    0.429348    00:08
3      0.367741    0.679681    0.168478    00:08
4      0.326681    0.530172    0.146739    00:08

After fine-tuning, the error rate drops to 14.6%. We shall save this model and use ClassificationInterpretation to examine the model's top losses and confusion matrix.

learn.save('stage-2')
interpret = ClassificationInterpretation.from_learner(learn)
interpret.plot_confusion_matrix()
interpret.plot_top_losses(9, figsize=(15,11))

From the top losses, it seems that several of the images are composites of multiple pictures, which confuse the model since we only predict one class per picture. We can use the ImageCleaner widget to view the dataset and remove unwanted pictures.

Unfortunately, Google Colab does not support ipywidgets, hence we need to run some portions of the notebook on a local runtime, which is described in the following section. We shall zip the images and saved models for download.

!zip -r download_colab.zip /content/data/dimsum
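
To pull the archive down to the local machine, one option is Colab's files helper (a minimal sketch, assuming the zip was created in the working directory; downloading through the Colab file browser works just as well):

# Trigger a browser download of the zipped dataset and saved models
from google.colab import files
files.download('download_colab.zip')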

Run in local runtime for ImageCleaner

This section, to be run only on a local runtime, aims to prune misleading images and labels.

Please note that DatasetFormatter does not differentiate between the training and validation sets, hence we need to load the images using the data block API and explicitly tell it not to split them into training and validation sets.

%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.vision import *
# Local runtime: point to the data folder extracted from the Colab zip
path = Path('content/data/dimsum')
np.random.seed(42)
db = (ImageList.from_folder(path)
                   .split_none()           # keep all images in one set for cleaning
                   .label_from_folder()    # labels come from the folder names
                   .transform(get_transforms(), size=224)
                   .databunch()
     )
db.show_batch(rows=3, figsize=(7,8))
learn_cleaning = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cleaning.load('stage-2')   # reload the weights trained on Colab
from fastai.widgets import *
# Surface the images with the highest losses for manual review
ds, idxs = DatasetFormatter().from_toplosses(learn_cleaning)
ImageCleaner(ds, idxs, path)

Below is a screenshot of the widget when running locally.

From the cleaning exercise, quite a few images have been cleaned up. Typical invalid images were:

  • Images expected to be char siu sou (叉烧酥) that actually showed char siu bao (叉烧包), another dim sum type, or the roasted char siu (叉烧) meat itself
  • Composite images containing irrelevant content
  • Animated images

Training the model – Round 2

After running the image cleaner, a cleaned.csv file will be generated. Upload this file to Google Colab, then reload the data and retrain the model. Please note that no images have been deleted; the widget only records which images to keep, hence we need to reload the cleaned data using cleaned.csv as the reference.
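
One way to get the file back onto Colab is the same files helper (a small sketch; files.upload() saves the selected file into /content, where the next cell moves it into place):

# Upload cleaned.csv from the local machine into Colab's /content directory
from google.colab import files
files.upload()   # pick cleaned.csv in the file dialog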

!mv /content/cleaned.csv /content/data/dimsum
np.random.seed(42)

# Rebuild the DataBunch from the images kept in cleaned.csv
data2 = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2,
                                csv_labels='cleaned.csv',
                                ds_tfms=get_transforms(),
                                size=224,
                                num_workers=4).normalize(imagenet_stats)
data2.classes, data2.c, len(data2.train_ds), len(data2.valid_ds)
(['charsiusou', 'cheecheongfun', 'hargow', 'lobakgo', 'siumai'], 5, 472, 118)
learn_cleaned = cnn_learner(data2, models.resnet34, metrics=error_rate)
learn_cleaned.fit_one_cycle(10)
epoch  train_loss  valid_loss  error_rate  time
0      1.958007    1.473040    0.644068    00:05
1      1.535733    0.663816    0.161017    00:05
2      1.177153    0.491802    0.152542    00:05
3      0.915417    0.515325    0.152542    00:05
4      0.747979    0.513231    0.169492    00:05
5      0.633824    0.520064    0.169492    00:05
6      0.544484    0.516060    0.169492    00:05
7      0.475155    0.514972    0.169492    00:05
8      0.413666    0.512496    0.169492    00:05
9      0.370536    0.515782    0.169492    00:05

After 10 epochs, the error rate (16.9%) is about 1 percentage point lower than at the corresponding stage of round 1 before fine-tuning (17.9%).

Fine-tuning the model – Round 2

We then proceed to fine-tune the model in round 2.

learn_cleaned.unfreeze()
learn_cleaned.lr_find()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn_cleaned.recorder.plot()
learn_cleaned.fit_one_cycle(5, max_lr=slice(1e-4, 1e-3))
epoch  train_loss  valid_loss  error_rate  time
0      0.150190    0.461228    0.169492    00:05
1      0.122265    0.665217    0.177966    00:05
2      0.118568    0.567386    0.152542    00:05
3      0.104063    0.560797    0.127119    00:05
4      0.088834    0.486471    0.127119    00:05

After fine-tuning, the error rate is now 12.7%, compared to 14.6% before cleaning. Note also that the validation loss is much higher than the training loss, which would likely improve with more data.

We can examine the model with ClassificationInterpretation again; most_confused lists the (actual, predicted, occurrences) pairs with the largest off-diagonal counts in the confusion matrix.

learn_cleaned.save('stage-3')
interpret_cleaned = ClassificationInterpretation.from_learner(learn_cleaned)
interpret_cleaned.plot_confusion_matrix()
interpret_cleaned.most_confused()
[('siumai', 'cheecheongfun', 4),
 ('lobakgo', 'cheecheongfun', 3),
 ('charsiusou', 'cheecheongfun', 2),
 ('cheecheongfun', 'hargow', 2),
 ('lobakgo', 'siumai', 2),
 ('hargow', 'cheecheongfun', 1),
 ('siumai', 'charsiusou', 1)]
interpret_cleaned.plot_top_losses(9, figsize=(15,11))

Trying ResNet-50

We can also try a larger model such as ResNet-50.

learn_cleaned50 = cnn_learner(data2, models.resnet50, metrics=error_rate)
learn_cleaned50.fit_one_cycle(10)
epoch  train_loss  valid_loss  error_rate  time
0      1.655776    1.074644    0.423729    00:07
1      1.060038    0.426286    0.144068    00:06
2      0.765334    0.462426    0.161017    00:06
3      0.602831    0.449330    0.144068    00:06
4      0.478973    0.428820    0.127119    00:06
5      0.397765    0.433457    0.118644    00:06
6      0.332555    0.437746    0.127119    00:06
7      0.288448    0.437288    0.135593    00:06
8      0.253740    0.432407    0.135593    00:06
9      0.220242    0.432829    0.135593    00:06
learn_cleaned50.unfreeze()
learn_cleaned50.lr_find()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn_cleaned50.recorder.plot()
learn_cleaned50.fit_one_cycle(5, max_lr=slice(1e-4, 5e-3))
epoch  train_loss  valid_loss  error_rate  time
0      0.109317    0.682296    0.169492    00:07
1      0.277890    2.939289    0.262712    00:07
2      0.331545    2.263224    0.245763    00:08
3      0.309646    0.597230    0.127119    00:08
4      0.264014    0.429430    0.127119    00:07

After training and fine-tuning, we end up with the same error rate, which isn't much of a surprise, but the validation loss is lower and the gap between training and validation loss is smaller.

learn_cleaned50.export('export.pkl')   # bundle the model and its transforms for inference
interpret_cleaned50 = ClassificationInterpretation.from_learner(learn_cleaned50)
interpret_cleaned50.plot_confusion_matrix()
interpret_cleaned50.most_confused()
[('siumai', 'hargow', 3),
 ('siumai', 'lobakgo', 3),
 ('cheecheongfun', 'hargow', 1),
 ('cheecheongfun', 'lobakgo', 1),
 ('cheecheongfun', 'siumai', 1),
 ('hargow', 'charsiusou', 1),
 ('hargow', 'cheecheongfun', 1),
 ('lobakgo', 'charsiusou', 1),
 ('lobakgo', 'cheecheongfun', 1),
 ('siumai', 'charsiusou', 1),
 ('siumai', 'cheecheongfun', 1)]
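
As a quick sanity check before part 2 (a minimal sketch; test_hargow.jpg is a hypothetical local image), the exported export.pkl can be reloaded with load_learner and used to classify a single image:

# Reload the exported model and classify one image
from fastai.vision import load_learner, open_image

learn_inference = load_learner(path)    # reads path/'export.pkl'
img = open_image('test_hargow.jpg')     # hypothetical test image
pred_class, pred_idx, probs = learn_inference.predict(img)
print(pred_class, probs[pred_idx])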

Takeaways

In this post, we created an image classifier for dim sum using Google Images as the data source. We also demonstrated transfer learning, the ImageCleaner widget and model export using the fast.ai library. In the next and final part – part 2 – we shall use the exported model for deployment in a web application.

The corresponding notebook can be found here for your review in Google Colab. One thing to note is that the acquired image data might not be fully reproducible, since some links might expire.