To extend an image dataset with mixup, I use this snippet to mix two images:

import cv2

major_image = cv2.imread('1.jpeg')
minor_image = cv2.imread('2.jpeg')
new_image = major_image * 0.9 + minor_image * 0.1

But after generating images with this snippet, training reported the following error:

Traceback (most recent call last):
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
    return fn(*args)
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1367, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [8388608], [batch]: [1048576]
         [[{{node IteratorGetNext}} = IteratorGetNext[output_shapes=[[?,?], [?,?], [?]], output_types=[DT_INT64, DT_UINT8, DT_STRING], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorV2)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "train.py", line 282, in 
    main()
  File "train.py", line 278, in main
    train(config)
  File "train.py", line 224, in train
    labels_r, images_r, ids_r = sess.run([labels, images, ids])
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 887, in run
    run_metadata_ptr)
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1110, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
    run_metadata)
  File "/home/tops/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1308, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [8388608], [batch]: [1048576]
         [[{{node IteratorGetNext}} = IteratorGetNext[output_shapes=[[?,?], [?,?], [?]], output_types=[DT_INT64, DT_UINT8, DT_STRING], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorV2)]]

The size of each image should be 512x512x4 = 1048576 bytes, so at first I couldn’t understand why one image had a size of 8388608 bytes.
My first suspect was TensorFlow’s dataset pipeline. But after rewriting the pipeline code, I found the problem was not in TensorFlow.
I reviewed my image-generation code again and again and added some debug output. Finally I found the problem: it wasn’t TensorFlow’s fault, but mine.
By using

new_image = major_image * 0.9 + minor_image * 0.1

the dtype of ‘new_image’ becomes ‘float64’, while ‘major_image’ and ‘minor_image’ are ‘uint8’! A ‘float64’ takes 8 bytes per element instead of 1, so the image grows to 512x512x4x8 = 8388608 bytes, which is exactly the number in the error message.
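
A quick way to see this promotion (a minimal sketch with synthetic zero arrays standing in for the real images):

import numpy as np

a = np.zeros((512, 512, 4), dtype=np.uint8)   # stand-in for a decoded image
b = np.zeros((512, 512, 4), dtype=np.uint8)

mixed = a * 0.9 + b * 0.1   # multiplying uint8 by a Python float promotes to float64
print(mixed.dtype)          # float64
print(a.nbytes)             # 1048576 (1 byte per element)
print(mixed.nbytes)         # 8388608 (8 bytes per element)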
To mix up images correctly, the code should be:

import cv2
import numpy as np

major_image = cv2.imread('1.jpeg')
minor_image = cv2.imread('2.jpeg')
new_image = major_image * 0.9 + minor_image * 0.1   # result is float64
new_image = new_image.astype(np.uint8)              # cast back to uint8 before saving
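
Alternatively, OpenCV can do the weighted sum directly: cv2.addWeighted keeps the input dtype and saturates the result, so no explicit cast is needed. A minimal sketch:

import cv2

major_image = cv2.imread('1.jpeg')
minor_image = cv2.imread('2.jpeg')
# computes major_image * 0.9 + minor_image * 0.1 + 0, saturated to uint8
new_image = cv2.addWeighted(major_image, 0.9, minor_image, 0.1, 0)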