To compare floating-point performance between an Intel CPU and an Nvidia GPU, I wrote some code that performs the dot-product operation on two vectors, each 2 GB in size.
The code for the CPU test uses AVX intrinsics:

#include <stdio.h>
#include <sys/time.h>
#include <immintrin.h>

#define LOOP 10

void test_avx(float *left, float *right, float *result, size_t count) {
  __m256 *first, *second, *end, *res;
  struct timeval before, after;
  float *c;
  int only = 0, i;

  gettimeofday(&before, NULL);
  for (i = 0; i < LOOP; i++) {
    end = (__m256*)(left + count);
    first = (__m256*)left;
    second = (__m256*)right;
    res = (__m256*)result;
    /* Multiply 8 floats at a time across the whole buffers.
       (This is only the multiply stage of the dot product; no reduction is done.) */
    while (first < end) {
      *res = _mm256_mul_ps(*first, *second);
      /* Print one sample value so the compiler cannot optimize the work away. */
      if (!only) {
        c = (float*)res;
        printf("[Sample: %f]\n", *c);
        only = 1;
      }
      first++;
      second++;
      res++;
    }
  }
  gettimeofday(&after, NULL);
  printf("AVX:\t%lu\n", after.tv_usec + after.tv_sec * 1000000 -
         (before.tv_usec + before.tv_sec * 1000000));
}

and compiled it with:

gcc -mavx2 -g -O2 cpu_test.c -o cpu_test

This test program took 7.5 seconds to run (LOOP is 10). But my colleague pointed out that the program is memory-bound: it sequentially streams through two 2 GB vectors. A main-memory access costs the CPU roughly 200~250 cycles, while _mm256_mul_ps() costs only about 5~10 cycles, so most of the time is spent waiting for memory rather than doing arithmetic. A more effective way to exercise the AVX instructions is to make clever use of the CPU's L1 cache:

void test_avx(float *left, float *right, float *result, size_t count) {
  __m256 *first, *second, *end, *res;
  struct timeval before, after;
  float *c;
  int only = 0, i, j;

  gettimeofday(&before, NULL);
  /* Split the left vector (BUFF_LEN bytes) into STRIDE-sized chunks and
     run the multiply LOOP times on each chunk, so the working set stays
     resident in the L1 cache. */
  for (i = 0; i < BUFF_LEN/STRIDE; i++) {
    float *begin = left + (i * STRIDE/sizeof(float));
    for (j = 0; j < LOOP; j++) {
      end = (__m256*)(begin + STRIDE/sizeof(float));
      first = (__m256*)begin;
      second = (__m256*)right;
      res = (__m256*)result;
      while (first < end) {
        *res = _mm256_mul_ps(*first, *second);
        if (!only) {
          c = (float*)res;
          printf("[Sample: %f]\n", *c);
          only = 1;
        }
        first++;
        second++;
        res++;
      }
    }
  }
  gettimeofday(&after, NULL);
  printf("AVX:\t%lu\n", after.tv_usec + after.tv_sec * 1000000 -
         (before.tv_usec + before.tv_sec * 1000000));
}

By chopping the vectors into 4 KB strides and repeatedly running the AVX instructions on one stride at a time, we use the CPU's L1 cache far more intensively. The result is striking: the test now takes only 0.78 seconds, almost ten times faster!
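For completeness, here is a minimal sketch of the harness I assume around these functions. The macro names BUFF_LEN, STRIDE, and LOOP follow the code above (2 GB buffers, 4 KB strides, 10 repetitions), but the main() function, the initialization values, and the use of posix_memalign are my own illustrative assumptions, not part of the original test program:

#include <stdio.h>
#include <stdlib.h>

#define BUFF_LEN (2UL * 1024 * 1024 * 1024)  /* 2 GB per vector, as in the text */
#define STRIDE   4096                        /* 4 KB stride, small enough for L1 */
#define LOOP     10

void test_avx(float *left, float *right, float *result, size_t count);

int main(void) {
  float *left, *right, *result;
  size_t count = BUFF_LEN / sizeof(float);

  /* __m256 loads/stores through a __m256* require 32-byte alignment. */
  if (posix_memalign((void**)&left, 32, BUFF_LEN) ||
      posix_memalign((void**)&right, 32, BUFF_LEN) ||
      posix_memalign((void**)&result, 32, BUFF_LEN)) {
    fprintf(stderr, "allocation failed\n");
    return 1;
  }
  for (size_t i = 0; i < count; i++) {
    left[i] = 1.0f;
    right[i] = 2.0f;
  }

  test_avx(left, right, result, count);

  free(left);
  free(right);
  free(result);
  return 0;
}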
My colleague then recommended using MKL (Intel's Math Kernel Library) to test the Xeon CPU, because it contains many optimizations heavily tuned for Intel's hardware architecture (a minimal MKL-based dot product is sketched below). In short, it is better to use a well-optimized library rather than hand-written code to evaluate the performance of a CPU or GPU. So finally, I decided to use mxnet to test performance with real data.
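As a rough illustration of what such a library-based CPU test could look like, here is a sketch using the standard CBLAS routine cblas_sdot, which MKL provides. This is my own illustrative example (the buffer size and values are arbitrary), not code I actually benchmarked:

#include <stdio.h>
#include <stdlib.h>
#include <mkl_cblas.h>   /* MKL's CBLAS header; link against MKL, e.g. -lmkl_rt */

int main(void) {
  const int n = 1 << 20;               /* illustrative size, not the 2 GB case */
  float *x = malloc(n * sizeof(float));
  float *y = malloc(n * sizeof(float));
  if (!x || !y) return 1;

  for (int i = 0; i < n; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  /* cblas_sdot performs the full dot product (multiply + reduction),
     using MKL's vectorized, architecture-tuned implementation. */
  float dot = cblas_sdot(n, x, 1, y, 1);
  printf("dot = %f\n", dot);

  free(x);
  free(y);
  return 0;
}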
Using

sudo make USE_BLAS=cublas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda-8.0/ USE_CUDNN=1 USE_MKL2017=1 USE_OPENMP=1 -j80

to build mxnet with the cuDNN library (for the GPU) and MKL (for the CPU), I ran my bird-classification program. The result shows that the performance ratio of CPU to GPU is about 1:5, i.e. the GPU is much faster than all the CPU cores in the server combined.
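For the GPU side, the analogous library-based micro-test would use cuBLAS. The sketch below, using cublasSdot, is purely illustrative (compile with nvcc and link with -lcublas); it is not the mxnet-based test whose results are reported above:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
  const int n = 1 << 20;                 /* illustrative size */
  size_t bytes = n * sizeof(float);
  float *h_x = (float*)malloc(bytes);
  float *h_y = (float*)malloc(bytes);
  if (!h_x || !h_y) return 1;
  for (int i = 0; i < n; i++) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

  /* Copy both vectors to device memory. */
  float *d_x, *d_y;
  cudaMalloc((void**)&d_x, bytes);
  cudaMalloc((void**)&d_y, bytes);
  cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
  cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

  cublasHandle_t handle;
  cublasCreate(&handle);

  /* cublasSdot computes the dot product on the GPU and writes the
     scalar result back to the host pointer. */
  float dot = 0.0f;
  cublasSdot(handle, n, d_x, 1, d_y, 1, &dot);
  printf("dot = %f\n", dot);

  cublasDestroy(handle);
  cudaFree(d_x);
  cudaFree(d_y);
  free(h_x);
  free(h_y);
  return 0;
}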