The size of a pipe in Linux


We use a pipe in our program and ran into a new problem: it fails when we try to write 16MB of data into the pipe in one go. It looks like a pipe has a limited size. But what exactly is that size? After searching the web, the answers were inconsistent: some say it is 16KB and others say it is 64KB. So I had to read the kernel code myself to find the correct answer.
Since all the servers in my company run ali_kernel, which is based on the 2.6.32 CentOS kernel, I traced the relevant code path there:

It looks like all write operations on a pipe are handled through “write_pipefifo_fops”. Let’s look inside:

Clearly, pipe_write() is responsible for writing. Keep going.

As shown above, the kernel allocates a new page when a write arrives and the pipe does not have enough free space. Each time it adds a page it increments ‘pipe->nrbufs’, and once ‘nrbufs’ reaches PIPE_BUFFERS the routine blocks, which means the write() system call has to wait. PIPE_BUFFERS is set to 16, and a page in the Linux kernel is 4KB, so a pipe in ali_kernel can hold at most 64KB (16 * 4KB) of data at one time.
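A quick way to verify this from user space is to fill a fifo with a non-blocking writer and see where it stops. This is only a sketch, assuming GNU dd (for oflag=nonblock) and an arbitrary fifo path:

    mkfifo /tmp/testfifo
    exec 3<>/tmp/testfifo        # hold both ends open so nothing blocks and nothing reads
    dd if=/dev/zero of=/tmp/testfifo bs=1k oflag=nonblock
    # on the 2.6.32-based kernel dd stops with "Resource temporarily unavailable"
    # after 64 records, i.e. 64KB of data buffered in the pipe
    exec 3<&-
    rm /tmp/testfifo
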
This changed in kernel 2.6.35, which added a new proc entry, ‘/proc/sys/fs/pipe-max-size’, together with the F_SETPIPE_SZ fcntl(), so the capacity of an individual pipe can now be changed at run time.
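On such kernels the limit can be inspected and raised like this (the 4MB value below is only an example):

    cat /proc/sys/fs/pipe-max-size              # upper bound a process may request, 1048576 by default
    echo 4194304 > /proc/sys/fs/pipe-max-size   # raise the bound (needs root)
    # an individual pipe is then resized from a program with fcntl(fd, F_SETPIPE_SZ, size)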

Problems when using ZooKeeper

Problem 1:

The ZooKeeper cluster had been running well for half a year. But today, after I reconfigured it and ran the start command

It failed to start up and reported

The key part is the last phrase, “Invalid config” (the log4j message is only a warning); therefore I reviewed zoo.cfg many times but found no mistake at all.
After checking all the configuration again, I eventually found the problem: the “myid” file was missing. After adding the “myid” file, ZooKeeper started up correctly.
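For reference, every server listed in zoo.cfg must also have a myid file under its dataDir containing its own id. A minimal sketch (the host names and dataDir path here are assumptions, not my real settings):

    # zoo.cfg lists every member of the ensemble, e.g.
    #   server.1=zk1:2888:3888
    #   server.2=zk2:2888:3888
    #   server.3=zk3:2888:3888
    # each host then needs dataDir/myid holding its own id;
    # assuming this host is server.1 and dataDir=/var/lib/zookeeper:
    echo 1 > /var/lib/zookeeper/myid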

It seems the error log of ZooKeeper is misleading: it says the config file is invalid, but the real cause is a missing file.

Problem 2:

To tolerate the failure of at most four servers, we assumed a five-server ZooKeeper cluster would be enough. After studying Paxos for a while, a question occurred to me: the majority of a five-server cluster is three servers, so how could ZooKeeper elect a new leader if more than two servers are down? I ran the test and confirmed that ZooKeeper does stop working once more than two servers are shut down.
The correct cluster size for tolerating the failure of four servers is nine, because after four servers are shut down, the five survivors still form a majority of a nine-server cluster.
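The arithmetic in general: the quorum of an N-server ensemble is floor(N/2)+1, so the ensemble keeps working as long as at least that many servers survive. A quick illustration:

    # quorum = floor(N/2) + 1; tolerated failures = N - quorum
    for n in 3 5 7 9; do
      echo "$n servers: quorum $((n / 2 + 1)), tolerates $((n - (n / 2 + 1))) failures"
    done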

Running Django in Docker


I have been learning Django (a Python web framework) in a Docker container recently. After starting the container with a port redirect
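It was a command along these lines (the image name is my assumption; the important part is publishing container port 8000 so Docker maps it to a free host port):

    sudo docker run -t -i -p 8000 centos:7 /bin/bash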

The output of the Django server was

Then I used sudo docker ps to find out the port number on the host machine:

But when I ran curl 127.0.0.1:49198 on the host machine, it just reported “Connection refused”.

After searching on Google, I found only one article that seemed useful, but my problem was still there after I followed its steps. With no other choice, I had to read the Docker documentation carefully and run my own experiments step by step.
First, I ran an nc server inside the container:
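Probably something along these lines (the exact invocation is an assumption; the nc on CentOS 7 comes from the nmap-ncat package):

    nc -l 8000        # no address given, which ended up on an IPv6 socket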

Then I tried nc 127.0.0.1 8000 on the host. It failed too. Why couldn’t the nc client connect to the server in Docker even though I had followed the Docker documentation? After running netstat inside the container, I found the answer: my image is CentOS 7, and the ‘nc’ that ships with it listens on an IPv6 address by default. To listen on an IPv4 address, you have to type in
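With nmap-ncat, binding an IPv4 address explicitly looks like this:

    nc -l 0.0.0.0 8000        # listen on the IPv4 wildcard instead of the default IPv6 socket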

Now the nc client could connect to the server.
But how do I run the Django server on an IPv4 address? This article told me the way. Now everything seemed to be OK. I started Django again with python manage.py runserver 127.0.0.1:8000, but it still could not be reached by the nc client on the host. Of course: “127.0.0.1” and “0.0.0.0” are very different; binding to 127.0.0.1 only accepts connections from inside the container itself, so I should run Django like:
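    python manage.py runserver 0.0.0.0:8000    # bind every interface so connections forwarded from the host reach Django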

The browser on the host could access the Django example site now.