In the previous article, I reached mAP 0.739 on VOC2007. After about two weeks, I added more tricks to reach mAP 0.740.
The most important trick is increasing the expand scale of the augmentation, implemented in this patch. Increasing the scale range helps the model detect smaller objects. Moreover, to detect more hidden birds, I enhanced RandomBrightness() and added ToGray() so the model can also detect some black-and-white objects (I don't mean pandas). Using a confidence threshold of 0.4, I get these images, which look quite promising:
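The tricks above can be sketched roughly as follows. This is a simplified, numpy-only version of ssd.pytorch-style transforms; the class names and parameters here are illustrative, not the exact ones from the patch, and the real Expand transform also shifts the ground-truth boxes by (left, top):

```python
import numpy as np

class RandomBrightness:
    """Add a random delta to every pixel (assumes a float32 HWC image in [0, 255])."""
    def __init__(self, delta=32.0):
        self.delta = delta

    def __call__(self, image):
        if np.random.randint(2):
            image = image + np.random.uniform(-self.delta, self.delta)
        return np.clip(image, 0.0, 255.0)

class ToGray:
    """Randomly convert the image to grayscale (kept as 3 channels)."""
    def __call__(self, image):
        if np.random.randint(2):
            gray = image @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
            image = np.stack([gray, gray, gray], axis=-1)
        return image

class Expand:
    """Place the image on a larger mean-filled canvas. A bigger max_ratio
    makes objects relatively smaller, so the model sees more small objects."""
    def __init__(self, mean=(104, 117, 123), max_ratio=4.0):
        self.mean, self.max_ratio = mean, max_ratio

    def __call__(self, image):
        h, w, c = image.shape
        ratio = np.random.uniform(1.0, self.max_ratio)
        top = int(np.random.uniform(0, h * ratio - h))
        left = int(np.random.uniform(0, w * ratio - w))
        canvas = np.empty((int(h * ratio), int(w * ratio), c), dtype=image.dtype)
        canvas[:] = self.mean
        canvas[top:top + h, left:left + w] = image
        return canvas

img = np.random.uniform(0, 255, (300, 300, 3)).astype(np.float32)
out = Expand(max_ratio=4.0)(ToGray()(RandomBrightness()(img)))
print(out.shape)
```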
I also tried learning-rate warm-up, but it didn't improve performance. One possible explanation: warming up the learning rate may cause the model to overfit.
After training with the CUB-200-2011 dataset, and only that dataset, I still got very bad performance on bird detection, which seemed like a mystery. I will continue testing to find out why.
I tried reducing the learning rate, changing the optimizer from SGD to Adam, and using different types of parameter initializers. None of these solved the problem. Then I realized it would be a hard job to find the cause. So I began to print the value of 'loss', then the values of 'loss_location' and 'loss_confidence'. Finally, I noticed that 'loss_location' was the first to become 'nan', because a value in the equation below (from the paper) is 'nan':
'loss_location' from the paper 'SSD: Single Shot MultiBox Detector'
After checking the implementation in 'layers/box_utils.py':
I realized that (matched[:, 2:] - matched[:, :2]) produced a negative value, which had never happened when training on the VOC dataset.
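The mechanism is easy to reproduce. In SSD's encoding step, the width/height part of the location target is the log of the matched box's width and height; if "xmax" is smaller than "xmin", the log of a negative number is nan, and that nan then propagates into 'loss_location'. A minimal numpy reproduction (not the actual box_utils.py code):

```python
import numpy as np

# matched boxes in (xmin, ymin, xmax, ymax); the second one is corrupt:
# its "xmax" (0.3) is smaller than its "xmin" (0.5), as happens when an
# (x, y, w, h) annotation is misread as (xmin, ymin, xmax, ymax).
matched = np.array([[0.1, 0.1, 0.4, 0.4],
                    [0.5, 0.5, 0.3, 0.9]])

wh = matched[:, 2:] - matched[:, :2]   # widths/heights; second row has w = -0.2
with np.errstate(invalid='ignore'):
    g_wh = np.log(wh)                  # log of a negative number -> nan

print(wh[1, 0])               # -0.2
print(np.isnan(g_wh).any())   # True: this nan poisons loss_location
```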
Now it was time to carefully check the data pipeline for the CUB-200-2011 dataset. I reviewed the bounding-box file line by line and found that its format is not (Xmin, Ymin, Xmax, Ymax) but (Xmin, Ymin, Width, Height)! Here are the images for an incorrect bounding box and a correct one:
Bounding box parsed as (Xmin, Ymin, Xmax, Ymax), which is wrong
Bounding box parsed as (Xmin, Ymin, Width, Height), which is correct
After changing the parsing of the CUB-200-2011 bounding boxes, my training process finally ran successfully.
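The fix itself is tiny: convert CUB's (Xmin, Ymin, Width, Height) annotation to the (Xmin, Ymin, Xmax, Ymax) corner format before any further processing. A hedged sketch (the actual change lives in my dataset loader):

```python
def cub_to_corners(box):
    """Convert a CUB-200-2011 box (xmin, ymin, width, height)
    to corner format (xmin, ymin, xmax, ymax)."""
    x, y, w, h = box
    assert w > 0 and h > 0, "width/height must be positive"
    return (x, y, x + w, y + h)

# A CUB annotation line like "60.0 27.0 325.0 304.0" is x, y, w, h:
print(cub_to_corners((60.0, 27.0, 325.0, 304.0)))  # (60.0, 27.0, 385.0, 331.0)
```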
The lesson I learned from this problem: a dataset should be carefully reviewed before use.
SSD (Single Shot MultiBox Detector) is a one-stage object-detection neural network that uses multi-scale feature maps for detection. I forked the code from ssd.pytorch and added some small modifications for my bird-detection task.
At first I tried some different activation functions, such as ELU and RReLU, but they only reduced the mAP (mean Average Precision). I also tried changing the augmentation hyperparameters, which didn't work either. Only after I enabled batch normalization with this patch did the mAP improve significantly (from 0.658 to 0.739).
The effect looks like:
But actually, we don't need all the annotated object types; we only need annotated bird images. Hence I changed the code to train the model with only the bird images in VOC2007 and VOC2012. Unexpectedly, the mAP was extremely low, and the model couldn't even detect all 16 bird heads in the above [Image 2].
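For reference, filtering the VOC annotations down to birds can be sketched like this. This is a standalone example with xml.etree; my real change was inside the ssd.pytorch dataset class, so the names here are illustrative:

```python
import xml.etree.ElementTree as ET

def bird_boxes(voc_xml):
    """Return (xmin, ymin, xmax, ymax) boxes for 'bird' objects only."""
    root = ET.fromstring(voc_xml)
    boxes = []
    for obj in root.iter('object'):
        if obj.findtext('name') != 'bird':
            continue  # drop the other 19 VOC classes
        bb = obj.find('bndbox')
        boxes.append(tuple(int(bb.findtext(tag))
                           for tag in ('xmin', 'ymin', 'xmax', 'ymax')))
    return boxes

# A toy VOC-style annotation with one bird and one person:
ann = """<annotation>
  <object><name>bird</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
  <object><name>person</name>
    <bndbox><xmin>8</xmin><ymin>12</ymin><xmax>352</xmax><ymax>498</ymax></bndbox>
  </object>
</annotation>"""
print(bird_boxes(ann))  # [(48, 240, 195, 371)]
```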
Why does using only bird images hurt the results? There might be two reasons: first, too few bird images (only about 1000 in VOC2007 and VOC2012); second, not enough augmentation.
To test my hypothesis, I found CUB-200, a much larger dataset of bird images (about 6000). After training on this dataset, the results were still unsatisfying: the model can't detect all three birds in [Image 1]. I need more experiments to find the reason.
len(("birds",))  # the trailing comma makes the inner '()' a tuple
The result is 1. Without the trailing comma, ("birds") would just be the string "birds", whose length is 5.
2. Unlike TensorFlow's static graph, PyTorch runs the neural network just like ordinary code. This brings a lot of convenience. First, we can print out any tensor in the program, whether during prediction or training. Second, simply adding 'time.time()' calls in the code lets us profile every step of training.
3. Following the example of NVIDIA's apex, I wrote a prefetcher to let PyTorch load data and compute in parallel. But in my test, the 'data_prefetcher' actually hurt training performance. The reason may be that my model (VGG16) is not dense enough, so computation costs less time than data loading.
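The idea of the prefetcher is to overlap loading the next batch with computing on the current batch. Apex's data_prefetcher does this with a separate CUDA stream; here is a CPU-only sketch of the same idea using a background thread (all names are mine, not apex's), with the 'time.time()' profiling trick from point 2:

```python
import queue
import threading
import time

def slow_loader(n_batches, load_time=0.05):
    """Pretend each batch takes `load_time` seconds to load."""
    for i in range(n_batches):
        time.sleep(load_time)
        yield i

class Prefetcher:
    """Load the next batch on a background thread while the main
    thread is still computing on the current batch."""
    def __init__(self, loader, depth=2):
        self.q = queue.Queue(maxsize=depth)
        t = threading.Thread(target=self._fill, args=(loader,), daemon=True)
        t.start()

    def _fill(self, loader):
        for batch in loader:
            self.q.put(batch)
        self.q.put(None)  # sentinel: no more batches

    def __iter__(self):
        while (batch := self.q.get()) is not None:
            yield batch

def train(batches, compute_time=0.05):
    start = time.time()
    for _ in batches:
        time.sleep(compute_time)  # pretend to compute on the batch
    return time.time() - start

serial = train(slow_loader(10))                  # load and compute alternate
overlapped = train(Prefetcher(slow_loader(10)))  # loading overlaps computing
print(f"serial: {serial:.2f}s, prefetched: {overlapped:.2f}s")
```

When computation is slower than loading, the prefetcher roughly halves the epoch time; when computation is faster than loading (my VGG16 case above, relative to disk speed), the extra queue and thread overhead can make things worse.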
Last weekend I exported my Jupyter Notebook records to a PDF file. Surprisingly, the PDF looked so good that I began to think about using Jupyter Notebook or Markdown instead of LaTeX to write technical papers, because LaTeX is an extremely powerful but inconvenient writing tool. So I created a file named 'hello.md':
There is a fox on the bank.
Then I used the command line to convert the Markdown file to PDF, e.g. 'pandoc hello.md -o hello.pdf' (if you meet problems like 'Can't find *.sty', just use 'sudo tlmgr install xxx'):
The PDF file looks like:
It does work, but the appearance looks too rigid. Then I found 'pandoc-latex-template'. After downloading and installing 'eisvogel.tex', I can generate the PDF with something like 'pandoc hello.md -o hello.pdf --template eisvogel':
1. Alex used 'SE-ResNeXt50'; I used the standard 'ResNeXt50' instead. Maybe this is why my public-leaderboard score of '0.9716' is not as good as Alex's. After the competition, I did spend some time reading the paper about 'SE-ResNeXt50'. It's a really simple and interesting idea for optimizing the architecture of a neural network. Maybe I can use this model in my next Kaggle competition.
2. In this competition, I split the training dataset into ten folds and trained three different models on different train/eval splits. After ensembling these three models, I got a nice score. Bagging seems to be a good method in practical applications.
3. After training a model to a so-far-so-good f1-score using SGD with ReduceLROnPlateau in Keras, I used this model as the 'base model' for subsequent fine-tuning. By ensembling all the high-scoring fine-tuned models, I eventually got my best score. This strategy comes from Snapshot Ensembles.
4. By the way, ReduceLROnPlateau is really useful when using SGD as the optimizer.
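The logic behind ReduceLROnPlateau is simple enough to sketch in a few lines. This is my own minimal re-implementation of the idea, not Keras's actual callback code (which also has options like cooldown and min_delta):

```python
class ReduceOnPlateau:
    """Multiply lr by `factor` when the monitored metric has not
    improved for `patience` consecutive epochs."""
    def __init__(self, lr=0.01, factor=0.1, patience=3, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float('inf')
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss   # improved: reset the counter
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = ReduceOnPlateau(lr=0.01, factor=0.1, patience=3)
for val_loss in [1.0, 0.8, 0.8, 0.8, 0.8, 0.7]:
    lr = sched.step(val_loss)
print(lr)  # the loss plateaued at 0.8 for 3 epochs, so lr dropped to ~0.001
```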
Strangely, the HADOOP_CLASSPATH already contains 'mapreduce' directories. It should be able to find the 'Job' class, unless the MapReduce jar is actually in other directories.
Finally, I found that the real MapReduce jar is indeed in a different location. Therefore I added those directories to HADOOP_CLASSPATH: edit ~/.bashrc and add the following line:
My colleague's bare-metal PC with a three-fan RTX 2080 Ti
My former colleague Jian Mei bought a new GPU, an RTX 2080 Ti, for training on bird images for http://dongniao.net/. After he installed the power supply and the GPU in his computer, I ran my MobileNetV2 model on it. Unsurprisingly, the performance boost was not dramatic: training speed increased from 60 samples/sec to about 150 samples/sec.
The most likely reason for the limited speedup is that the Tensor Cores were not being used.
To use the full power of the Tensor Cores on my colleague's RTX 2080 Ti, just following the guide <Mixed Precision Training> is not enough. I directly used the complete code example from Nvidia's GitHub.
Using the ResNeXt50 model from that example, the Tensor Cores did improve training performance:
Nvidia's documentation reports a 20x performance boost from Tensor Cores, but on our RTX 2080 Ti we only gained about 2x. Actually, mainstream neural-network models such as ResNet/DenseNet/NASNet can't fully utilize a high-end Nvidia GPU, since its floating-point compute power is so strong. To get the best out of the Tensor Cores, I need to keep trying more complicated and denser models.
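One detail worth knowing about why the <Mixed Precision Training> guide insists on loss scaling: Tensor Cores operate on float16, and float16 can't represent the tiny gradient values that float32 handles fine, so small gradients silently flush to zero unless the loss is scaled up before the backward pass and unscaled afterwards. A quick numpy demonstration of the underflow:

```python
import numpy as np

grad = np.float32(1e-8)            # a small but meaningful fp32 gradient
print(np.float16(grad))            # 0.0: it underflows in fp16, the update is lost

scale = np.float32(1024.0)         # loss scaling: multiply before the cast...
scaled = np.float16(grad * scale)  # ~1.02e-05, representable in fp16
print(np.float32(scaled) / scale)  # ...then divide after, recovering ~1e-8
```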
This time, I only spent one month on the competition "Humpback Whale Identification". But I still made a small step forward compared with my previous competitions. Here are my summaries:
1. Do review the 'kernels' on the competition page; they will teach you a lot of information and new techniques. By using a Siamese network rather than a classic classification model, I eventually beat the overfitting problem, thanks to suggestions from the competition's 'kernels' page.
2. Bravely use cutting-edge models, such as ResNeXt50 / DenseNet121. They are more powerful and easy to use.
3. Do use fine-tuning. Don't train the model from scratch every time!
4. Ensemble learning is really powerful. I used three different models to ensemble the final result.
There are also some tips for future challenges (they may be correct, they may be wrong):