Using ResNeXt in Keras 2.2.4

To use ResNeXt50, I wrote my code following the API documentation for Keras:
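The original snippet is lost here, but it was essentially this (a minimal sketch of what the keras.io documentation suggested at the time):

    from keras.applications.resnext import ResNeXt50

    # build ResNeXt50 with ImageNet weights, as the documentation describes
    model = ResNeXt50(weights='imagenet')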

But it reported errors:

That’s weird. The code didn’t work as the documentation said.
So I checked the code of Keras-2.2.4 (the version on my computer) and noticed that this version uses ‘keras_applications’ instead of ‘keras.applications’.
Then I changed my code:
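The changed code was essentially this (reconstructed):

    from keras_applications.resnext import ResNeXt50

    model = ResNeXt50(weights='imagenet')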

But it reported another error:

Left with no choice, I had to check the code of ‘/usr/lib/python3.6/site-packages/keras_applications/resnet_common.py’ too. Finally, I realised that the ResNeXt50() function needs three more arguments:
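A reconstruction of the working call (the exact set of required submodule arguments varies across keras_applications versions, so ‘utils’ is included here as well):

    import keras
    from keras_applications.resnext import ResNeXt50

    # tell keras_applications which library's modules to build the model with
    model = ResNeXt50(
        weights='imagenet',
        backend=keras.backend,
        layers=keras.layers,
        models=keras.models,
        utils=keras.utils)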

Now the program runs the ResNeXt50 model correctly. This GitHub issue explains the detail: ‘keras_applications’ can be used by both Keras and Tensorflow, so the caller needs to pass its own library modules into the model function.

Some tips about using Keras

1. How to use part of a model

The ‘img_embed’ model is part of ‘branch_model’. We should realise that ‘Model()’ is a CPU-heavy function, so it should be created only once and can then be used many times.
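A sketch of the pattern (‘branch_model’ is an already-built Keras model; the layer name ‘embedding’ is hypothetical):

    from keras.models import Model

    # build the sub-model once; it shares its weights with branch_model
    img_embed = Model(inputs=branch_model.input,
                      outputs=branch_model.get_layer('embedding').output)

    # then reuse it as many times as needed
    features = img_embed.predict(images)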

2. How to save a model when using ‘multi_gpu_model’

We should keep a reference to the original model; only by using it can we save the model to a file.
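A sketch of the pattern (gpus=4 and the training arguments are placeholders):

    from keras.utils import multi_gpu_model

    parallel_model = multi_gpu_model(model, gpus=4)   # wraps the original 'model'
    parallel_model.compile(loss='categorical_crossentropy', optimizer='adam')
    parallel_model.fit(x_train, y_train, epochs=10)

    # save the original model, not the multi-GPU wrapper
    model.save('model.h5')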

Some tips about Python, Pandas, and Tensorflow

Here are some useful tips for using Python, Pandas, Keras, and Tensorflow to build models.

1. When using applications.inception_v3.InceptionV3(include_top=False, weights=’imagenet’) to get pretrained parameters for the InceptionV3 model, the console reported:

The solution is here. Just install some packages:

2. Could we use ‘add’ to merge two DataFrames in Pandas? Let’s try:
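A small reproduction (my own sample data, since the original snippet is gone):

    import pandas as pd

    df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
    df2 = pd.DataFrame()      # an empty DataFrame

    print(df1 + df2)
    #     a   b
    # 0 NaN NaN
    # 1 NaN NaN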

The result is a DataFrame full of ‘NaN’. The ‘+’ operator just works like ‘pandas.DataFrame.add’: it tries to add the values column by column, but the second DataFrame is empty, so the result of adding a number and a nonexistent value is ‘NaN’.
To merge two DataFrames, we should use ‘append’:
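For example (note that in recent pandas versions ‘append’ was removed in favour of ‘pandas.concat’):

    df3 = pd.DataFrame({'a': [5], 'b': [6]})   # my own sample row

    # keeps the rows of both DataFrames
    merged = df1.append(df3, ignore_index=True)

    # the equivalent in modern pandas:
    # merged = pd.concat([df1, df3], ignore_index=True)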

3. Why doesn’t the Estimator of Tensorflow print out logs?

But the logging_hook hadn’t been run. The solution is just to add one line before running the Estimator:
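The line (for the TensorFlow 1.x Estimator API) raises the logging verbosity so that INFO messages, including hook output, are printed:

    import tensorflow as tf

    # Estimator logs (and logging hooks) only show up at INFO verbosity
    tf.logging.set_verbosity(tf.logging.INFO)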

LinearSVC versus SVC in scikit-learn

In the competition ‘Quora Insincere Questions Classification’, I wanted to use simple TF-IDF statistics as a baseline.
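The baseline looked roughly like this (a sketch; the column names ‘question_text’ and ‘target’ follow the competition’s data format):

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    train = pd.read_csv('train.csv')

    # turn each question into a sparse TF-IDF vector
    vectorizer = TfidfVectorizer(max_features=50000)
    X = vectorizer.fit_transform(train['question_text'])

    clf = LinearSVC()
    clf.fit(X, train['target'])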

The result is not bad:

But after I changed LinearSVC to SVC(kernel=’linear’), the program couldn’t work out any result even after 12 hours!
Was I doing something wrong? On the page of sklearn.svm.LinearSVC, there is a note:

Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.

Also, on the page of sklearn.svm.SVC, there is another note:

The implementation is based on libsvm. The fit time complexity is more than quadratic with the number of samples which makes it hard to scale to dataset with more than a couple of 10000 samples.

That’s the answer: LinearSVC is the right choice for processing a large number of samples.

Using keras.layers.Embedding instead of python dictionary

First, I used a function to transform words into word embeddings:
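It was something like this (a reconstructed sketch; ‘embedding_dict’ maps a word to its pretrained vector):

    import numpy as np

    def words_to_vectors(words, embedding_dict, embed_dim=300):
        # look up every word in a plain Python dict -- single-threaded, on the CPU
        vectors = np.zeros((len(words), embed_dim), dtype=np.float32)
        for i, word in enumerate(words):
            if word in embedding_dict:
                vectors[i] = embedding_dict[word]
        return vectors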

But I noticed that it consumed quite a lot of CPU while GPU usage stayed low. The reason is simple: using a single Python thread to do dictionary lookups is inefficient. We should use the Embedding layer in Keras to put the whole word-embedding table into GPU memory.
The code is not difficult to understand:
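A sketch (the sizes are placeholders; row i of ‘embedding_matrix’ holds the vector for word id i):

    import numpy as np
    from keras.layers import Embedding, Input

    vocab_size, embed_dim, max_len = 50000, 300, 70        # placeholder sizes
    embedding_matrix = np.zeros((vocab_size, embed_dim))   # fill with pretrained vectors

    inputs = Input(shape=(max_len,), dtype='int32')        # word ids, not strings
    embedded = Embedding(vocab_size, embed_dim,
                         weights=[embedding_matrix],
                         trainable=False)(inputs)          # lookup runs on the GPU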

This time, the program ran two times faster than before. Using GPU memory (GDDR) to look up word embeddings is the right way.

A few other lessons from Kaggle’s competition ‘Human Protein Atlas Image Classification’

Practice makes progress, so I joined Kaggle’s new competition ‘Human Protein Atlas Image Classification’ right after the previous one.
I used to think I could get a higher ranking in an image-processing competition, but actually, I haven’t even entered the top half of the rankings. After almost three months of trial and error, here are my reflections:

1. To solve the unbalanced-data problem, we need to use ‘focal loss’ instead of the normal cross-entropy loss (see the sketch after this list). I should have looked at other experts’ kernels earlier; then I could have picked up new techniques as soon as possible.

2. To augment images, ‘lower resolution’ may be a better way than ‘mix up’

3. Try SGD and Cosine Decay, not only RMSProp

4. MobileNet may cause more severe overfitting than ResNet.

5. If dropout and weight decay still can’t achieve better regularization, what should we do? (An open question; feature engineering may be the answer.)

6. Use a more powerful DNN framework, such as Keras, so I can spend more time on the model itself.
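For point 1, here is a minimal sketch of binary focal loss for multi-label sigmoid outputs (gamma and alpha are the common defaults from the paper):

    import keras.backend as K

    def focal_loss(gamma=2.0, alpha=0.25):
        # down-weights easy examples so that rare classes contribute more
        def loss(y_true, y_pred):
            y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
            pt = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
            at = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
            return -K.mean(at * K.pow(1.0 - pt, gamma) * K.log(pt))
        return loss

    # usage: model.compile(loss=focal_loss(), optimizer='adam')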

Some errors in dataset pipeline of Tensorflow

To extend image datasets by using mixup, I used this snippet to mix two images:
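The snippet was essentially this (reconstructed; ‘major_image’ and ‘minor_image’ stand for two real 512x512x4 images):

    import numpy as np

    major_image = np.zeros((512, 512, 4), dtype=np.uint8)  # placeholders for real images
    minor_image = np.zeros((512, 512, 4), dtype=np.uint8)

    lam = 0.8   # mixing weight
    new_image = major_image * lam + minor_image * (1 - lam)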

But after generating images with this snippet, the training reported errors:

The size of each image is 512x512x4 = 1048576 bytes. But I couldn’t understand why there was an image with a size of 8388608 bytes.
At first, I suspected the dataset flow of Tensorflow, but after changing the code of the dataset pipeline, I found the problem was not in Tensorflow.
Again and again, I reviewed my code for generating new images and added some debug stubs. Finally, I found the problem: it wasn’t Tensorflow’s fault, but mine.
In that snippet, the weighted average produces a floating-point result: the type of ‘new_image’ is ‘float64’, not ‘uint8’ like ‘major_image’ and ‘minor_image’! A ‘float64’ uses 8 bytes per element, which explains the ‘8388608’ in the error message.
To mix up images correctly, the code should be:
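that is, with an explicit cast back to ‘uint8’:

    # cast the float64 result back to uint8: 1 byte per element again
    new_image = (major_image * lam + minor_image * (1 - lam)).astype(np.uint8)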

Books I read in year 2018

In 2018, I continued to learn more about machine learning and deep learning. “Deep Learning” was pretty suitable for me, and “Hands-On Machine Learning with Scikit-Learn and TensorFlow” was a wonderful supplement for programming practice. I also learned some basics of reinforcement learning.

To teach my daughters programming, I read some books about Arduino. In the process of learning Arduino, I became more and more interested in electronics myself! After reading more technical documents about electronic components (diodes, transistors, capacitors, relays, thyristors, etc.) and microcontrollers (ATmega from Atmel, MSP430 from Texas Instruments, STM8 from ST, and so on), I opened my eyes to a whole new area.

History books are always my favorite type. The most astonishing history book I read in 2018 is “The Last Panther”. It tells an extremely cruel but real story from WWII.

Kazuo Inamori is a famous entrepreneur in Japan. I read some of his books at the end of the year. Surprisingly, they definitely inspired me and even changed some parts of my thinking. I really want to thank him for his teaching.

Write text to a file without buffering in Python3

In the Python2 era, we could use this code to write a file without buffering:
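For example (Python 2 only; the third argument of open() is the buffer size):

    # Python 2: bufsize 0 disables buffering, even for text files
    f = open('output.txt', 'w', 0)
    f.write('hello\n')   # reaches the disk immediately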

But in Python3, we can only disable buffering when writing a binary file:
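For example:

    # Python 3: unbuffered I/O is only allowed in binary mode
    f = open('output.bin', 'wb', buffering=0)
    f.write(b'hello\n')

    # text mode fails: open('output.txt', 'w', buffering=0)
    # raises "ValueError: can't have unbuffered text I/O"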

The only way to write a text file without buffering is:
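namely, calling flush() by hand (a minimal example):

    f = open('output.txt', 'w')
    f.write('hello\n')
    f.flush()   # force the buffer out after every write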

Adding ‘flush()’ everywhere is a terrible experience for a programmer who needs to migrate his code from Python2 to Python3. I really want to know: what was in the Python3 developers’ minds?

A successful rescue for a remote server

After installing CUDA-9.2 on a remote server, I found that the system couldn’t load nvidia.ko (the kernel module); dmesg showed:

The reason is that the kernel currently running on the system was built with the CONFIG_CC_STACKPROTECTOR compiler option turned on. Therefore I changed the default entry of grub2 and rebooted the server to enter a new kernel without this option.
But unfortunately, the server never started up again. All my code and data (including my colleague’s code and data) were on this server, so we got a little nervous.

Since the server is in a remote datacenter, we couldn’t just plug in a keyboard and a screen to debug it. Thus I used the out-of-band system to reboot the server into diskless mode. After entering this mode, I mounted the disk holding the ‘/boot’ directory:

and manually changed ‘/boot/grub2/grubenv’ like this (the ‘save_entry’ was 2 before):

Then I rebooted the server again. This time, it started up smoothly. All our code and data were intact.