In the previous article, I introduced a new library for object detection. But yesterday I added slim.batch_norm() to ‘nets/ssd_vgg_512.py’, like this:
def ssd_arg_scope(weight_decay=0.0005, data_format='NHWC', is_training=False):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        normalizer_fn=slim.batch_norm,  # the newly added batch norm
                        normalizer_params={'is_training': is_training},
                        weights_regularizer=slim.l2_regularizer(weight_decay)):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d],
                            padding='SAME', data_format=data_format):
            ...
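The is_training flag matters because batch norm keeps moving averages of the batch statistics: they are updated during training and used in place of the batch statistics at evaluation time. A minimal NumPy sketch of that bookkeeping (the decay value of 0.9 and the toy batches are my own illustration, not the repo's settings):

```python
import numpy as np

decay = 0.9
moving_mean, moving_var = 0.0, 1.0  # slim initializes these variables similarly
for batch in [np.array([1.0, 3.0]), np.array([2.0, 4.0])]:
    # Training step: normalize with batch statistics (omitted) and
    # update the moving statistics with an exponential average.
    moving_mean = decay * moving_mean + (1 - decay) * batch.mean()
    moving_var = decay * moving_var + (1 - decay) * batch.var()
# Evaluation (is_training=False): use moving_mean / moving_var instead
# of the current batch's statistics.
print(round(moving_mean, 2), round(moving_var, 2))  # → 0.48 1.0
```

These moving averages are exactly the moving_variance variables that show up in the error below.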
Training still ran correctly, but the evaluation reported an error:
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape=  rhs shape= 
[[Node: save/Assign_112 = Assign[T=DT_FLOAT, _class=["loc:@ssd_512_vgg/conv2/conv2_2/BatchNorm/moving_variance"], use_locking=true, validate_
shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](ssd_512_vgg/conv2/conv2_2/BatchNorm/moving_variance, save/RestoreV2/_283)]]
For quite a while I wondered why adding a simple batch_norm would make the shapes mismatch. Finally I found this page through Google. It said this type of error is usually caused by an incorrect data_format setting. So I checked the code of ‘train_ssd_network.py’ and ‘eval_ssd_network.py’, and got the answer: the training code uses ‘NCHW’ but the evaluation code uses ‘NHWC’!
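The shape mismatch follows directly from the layout: batch norm creates one moving_variance entry per channel, and a layer that assumes channels-last sizes those variables from the last axis of its input. A small NumPy sketch (the 128-channel, 64x64 sizes are made up for illustration):

```python
import numpy as np

# Fake conv activation: batch=1, 128 channels, 64x64 spatial.
nchw = np.zeros((1, 128, 64, 64), dtype=np.float32)
nhwc = np.transpose(nchw, (0, 2, 3, 1))  # same data, channels-last

# A channels-last batch norm sizes its per-channel variables
# (moving_mean, moving_variance, ...) from the LAST axis:
assumed_channels_nhwc = nhwc.shape[-1]  # 128 -> variance shape [128], correct
assumed_channels_nchw = nchw.shape[-1]  # 64  -> variance shape [64], wrong axis
print(assumed_channels_nhwc, assumed_channels_nchw)  # → 128 64
```

So a checkpoint trained under one layout stores batch-norm variables whose shapes cannot be restored into a graph built under the other layout, which is exactly what the Assign error complains about.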
After changing the data_format to ‘NCHW’ in ‘eval_ssd_network.py’, the evaluation script ran successfully.