fio is an effective tool for testing the I/O performance of a block device (or a file system).
Today, a colleague told me that fio reported “Cache flush bypassed!”, which he took to mean that all I/O had bypassed the device cache. I couldn’t agree, because the cache of a RAID card can usually only be changed by a dedicated tool (such as MegaCli), not by a test tool.
Therefore, “Cache flush bypassed!” does not mean that all I/O will bypass the device’s buffer; it actually means: “fio can’t flush the device’s cache, so let’s ignore it”.
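For reference, here is a minimal fio job file (the device name is a placeholder) that sets direct=1. Note that direct=1 only bypasses the OS page cache — it still does not touch the RAID card’s DRAM cache:

```
; hypothetical job file: direct random writes against a block device
[direct-randwrite]
filename=/dev/sdb   ; placeholder device, change to your own
rw=randwrite
bs=4k
direct=1            ; bypass the OS page cache only
runtime=60
time_based=1
```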
If you want to disable the DRAM cache on the RAID card, the correct way is to set the RAID card’s cache policy to “Write Through”:
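For LSI/Avago cards managed by MegaCli, for example, the policy can be switched like this (a sketch; the logical-drive and adapter selectors depend on your setup):

```
MegaCli -LDSetProp WT -LAll -aAll        # set all logical drives to Write Through
MegaCli -LDGetProp -Cache -LAll -aAll    # verify the current cache policy
```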
Zheng Liu from Alibaba led the topic about ext4. The most important change in the ext family of filesystems this year: ext3 is gone; in the latest kernel (actually, in CentOS 7.0) people can only get ext3 behavior by mounting ext4 with special arguments. The encryption feature has been completed in ext4.
Robin Dong (yes, it’s me) from Alibaba gave a presentation about cold storage (the slides are here). We developed a distributed storage system based on a small open-source project called “sheepdog“, and modified it heavily to improve data-recovery performance and to make sure it runs on low-end but high-density storage servers.
Discussion in tea break
Yanhai Zhu from Alibaba (we have done a lot of work on storage) led a topic about caching in virtual-machine environments. Alibaba chose Bcache as the code base for developing new cache software.
Robin: Why Bcache? Why not flashcache?
Yanhai: I started my work on flashcache first, but flashcache did not suit our production environment. First, flashcache is unfriendly to sequential writes. Second, it uses a hash data structure to distribute IO requests, which splits the cached data in a multi-tenant environment. Bcache uses a B-tree instead of a hash table to store data, which is better for our requirements.
They use an aggressive write-back strategy on the cache. It works very well because the cache sequentializes the write IOs, making it easy for the backend to absorb pressure peaks.
The last topic was led by Zhongjie Wu from Memblaze, a famous Chinese startup in flash storage technology. It was about NVDIMM, the hottest hardware technology of recent years. An NVDIMM is not expensive; it is just a DDR DIMM with a capacitor. Memblaze has developed a new 1U storage server with an NVDIMM and many flash cards. It runs their own OS and can connect to clients over Fibre Channel or Ethernet. The main purpose of the NVDIMM is to reduce latency, and they use a write-back strategy (of course).
The big problem they face with NVDIMM is that the CPU can’t flush data from its L1 cache to the NVDIMM when the whole server powers down. To solve this, Memblaze uses write-combining across CPU cores; it hurts performance a little but finally avoids losing data.
The first topic was led by Haomai Wang from XSKY. He first introduced some basic concepts of Ceph, and I caught the opportunity to ask some questions.
Robin (from Alibaba): Does Ceph cache all the metadata (the “cluster map”) on the monitor nodes, so that a client can fetch data with just one network hop?
Haomai: Yes. One hop, straight to the OSD.
Robin: If I use CephFS, is it still one hop?
Haomai: Still one hop. Although CephFS adds the MDS, the MDS does not store the filesystem’s data or metadata; it only stores the context of the distributed locks.
Ceph also supports SMB 2.0/3.0 now. On Linux, it is recommended to access a Ceph storage cluster over iSCSI, because using the rbd/libceph kernel modules would require updating the kernel on every client. Ceph uses a pipeline model for message processing, which is fine for hard disks but not for SSDs. In the future, developers will use an async framework (such as Seastar) to refactor Ceph.
Robin: If I use three replicas in Ceph, will the client write the three copies concurrently?
Haomai: No. First the IO goes to the primary OSD; then the primary OSD issues two replicated IOs to the other two OSDs, waits until both come back, and returns “the IO succeeded” to the client.
Robin: OK, so we still have two hops…. Would it be difficult to make the OSDs write at the same time, so we can lower Ceph’s latency?
Haomai: That would not be easy. Ceph uses the primary OSD to guarantee the consistency of the write transaction.
The future development plan for Ceph includes pool-level deduplication. Coly Li (from SUSE) said deduplication is better done at the business level than at the block level, because the duplicate information has already been split up at the block level. But the developers in the Ceph community still seem to want to make Ceph omnipotent.
Discussion about ceph
Jiaju Zhang from Red Hat led the topic about enterprise use cases of Ceph. Ceph has become the most famous open-source storage software in the world and is used at Red Hat, Intel, SanDisk (low-end storage arrays), Samsung, and SUSE.
The next topic was about ScyllaDB and Seastar, led by Asias He from OSv. ScyllaDB is a distributed key/value store engine written in C++14 and fully compatible with Cassandra; it also runs CQL (Cassandra Query Language). In the graph shown, ScyllaDB is 40 times faster than Cassandra. The asynchronous development framework behind ScyllaDB is called Seastar.
Robin: What’s the magic in ScyllaDB?
Asias: We shard requests to every CPU core and run with no locks and no threads. Data is zero-copy, and we use bi-directional queues to transfer messages between cores. The test result is based on the kernel TCP/IP stack, but we will use our own network stack in the future.
Yanhai Zhu (from Alibaba): I don’t think your test is fair: ScyllaDB is designed to run on multiple cores, but Cassandra is not. You should run 24 Cassandra instances against ScyllaDB, not just one.
Asias: Maybe you are right. But ScyllaDB uses message queues to transfer messages between CPU cores, so it avoids the cost of atomic operations and locks. Also, Cassandra is written in Java, which means performance drops whenever the JVM does garbage collection; ScyllaDB is written entirely in C++, so its performance is much steadier.
The last topic today was led by Xu Wang, the CTO of Hyper (a Chinese startup working on running containers like VMs).
Hyper means “hypervisor” plus “Docker image”. Customers can now run Docker images on Xen/KVM/VirtualBox.
Docker uses device-mapper thin provisioning as its default storage driver, so if we want to run Docker on CentOS 6, we should update the kernel first. I use Linux kernel 4.11 and noticed these kernel options should be set:
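The thin-provisioning target and its device-mapper dependencies are the likely candidates (a sketch; the exact list may vary by kernel version):

```
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_BUFIO=m
CONFIG_DM_PERSISTENT_DATA=m
CONFIG_DM_THIN_PROVISIONING=m
```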
After building and rebooting into the new kernel, I still couldn’t launch the Docker service, and finally found the solution:
With no choice left, I had to change my code: the thread launches a new process, the new process calculates the result, and finally the result is returned to the thread. It works fine in my Mesos cluster.
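A minimal sketch of that workaround (function and variable names are hypothetical): the thread spawns a helper process with `multiprocessing`, and the process sends its result back through a queue.

```python
import multiprocessing
import threading

def compute(queue, n):
    # Runs in a separate process; the heavy work would go here.
    queue.put(n * n)

def worker(n, results):
    # Runs in a thread: launch a new process and wait for its answer.
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=compute, args=(queue, n))
    proc.start()
    results.append(queue.get())  # block until the process replies
    proc.join()

results = []
t = threading.Thread(target=worker, args=(7, results))
t.start()
t.join()
print(results[0])  # 49
```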
Currently, I am writing some Python programs for the frontend of our storage system. By using ‘BaseHTTPRequestHandler’ and ‘ThreadedHTTPServer’, we can implement a simple multi-threaded HTTP server quickly. But after adding an ‘__init__()’ to our ‘MyHandler’, it stopped working correctly:
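The usual cause: `BaseHTTPRequestHandler.__init__` handles the request itself, so any attribute set after calling the parent `__init__` arrives too late. A minimal Python 3 sketch of a working version (class and attribute names are hypothetical):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True

class MyHandler(BaseHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Set our own attributes BEFORE calling the parent __init__,
        # because the parent handles the request inside __init__.
        self.greeting = b"hello"
        super().__init__(*args, **kwargs)

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(self.greeting)

server = ThreadedHTTPServer(("127.0.0.1", 0), MyHandler)  # port 0: auto-assign
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
body = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
server.shutdown()
```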
I have been doing some source-code porting work on the Linux kernel recently.
After I rebooted my server (Ubuntu 14.04) into the new kernel version 3.19.8, it could not be reached over ssh, only through the serial port.
The server uses a USB network card, so at first I suspected some kernel driver for the USB NIC was missing from the .config file. Therefore, I booted back into the old kernel and looked for useful information in ‘dmesg’:
The only place the “ASIX AX88772B” driver calls mii_check_media is in drivers/net/usb/asix_devices.c:
.description="ASIX AX88772B USB 2.0 Ethernet",
So far, the cause is: the system does not “reset” the USB network card after booting up. But why does only the 3.19.8 kernel forget to reset the USB NIC? After checking /etc/network/interfaces:
iface eth0 inet dhcp
iface eth3 inet dhcp
the answer is: the device name of the USB NIC was changed to “eth5” by udevd in the 3.19.8 kernel (the new kernel recognizes more network ports, so eth0/eth1/eth2/eth3/eth4 are all occupied by the system), so the network scripts can’t bring it up.
Fix the name of the USB NIC by adding the content below to /etc/udev/rules.d/70-persistent-net.rules:
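For example (the MAC address is a placeholder for the card’s real one, and the target name should match what /etc/network/interfaces expects):

```
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```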
My OS version is CentOS 7 and my Puppet version is 3.7.5. After trying the way that page answered, the problem still existed. Therefore, I am writing down the final correct operations here for future reference:
service httpd stop
## use Ctrl + c to break when you see "Notice: Starting Puppet master version..."
In a test server I typed “sudo yum update”, it reported errors like:
$ sudo yum update
Setting up Update Process
Bad id for repo: base.$releasever.x86_64, byte = $ 8
Bad id for repo: base.$releasever.noarch, byte = $ 8
Bad id for repo: ops.$releasever.x86_64, byte = $ 4
Bad id for repo: ops.$releasever.noarch, byte = $ 4
Loading mirror speeds from cached hostfile
http://mirrors.xxx.com/centos/%24releasever/os/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again
Then I found this page on Google explaining how to get the value of “$releasever”, but it does not tell us how to set “$releasever” permanently. Therefore, I had to search for “releasever” in the source code of ‘yum’:
rpm -ql yum | xargs sudo grep -rn "releasever"
Finally, the relevant source code in ‘/usr/lib/python2.6/site-packages/yum/__init__.py’ turned up:
But where does ‘self.conf.yumvar’ get its values? The answer is ‘/etc/yum/vars/’. After