Apache Semaphore issues
This is one of the rarer conditions I have come across. The web server failed to start, with the following error messages in the server log file. The server was busy and had a fairly long uptime.
[emerg] (28)No space left on device: Couldn't create accept lock
[notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[notice] Digest: generating secret for digest authentication ...
[notice] Digest: done
[warn] pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[emerg] (28)No space left on device: Couldn't create accept lock
The error message
No space left on device: Couldn’t create accept lock
was confusing, as there were no disk space or quota issues on the server.
On closer analysis, it turned out to be an Apache semaphore issue.
An Apache semaphore is an inter-process communication mechanism that Apache uses to coordinate with its child processes. A semaphore marks a shared resource as locked, and it may not be released when the owning process completes. This happens most often when the parent process dies before its children.
When too many semaphores are left marked as in use while they actually are not, the system eventually runs out of free semaphore slots. This can happen on very busy servers with long uptimes. In the case above, the error message indicates that Apache failed to allocate semaphores for new child processes.
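Error code (28) is ENOSPC, which for a semaphore allocation means the system-wide semaphore limit was hit, not disk space. On Linux you can compare current usage against that limit; a minimal sketch, assuming the standard /proc layout:

```shell
# SysV semaphore limits, in order: SEMMSL SEMMNS SEMOPM SEMMNI
# SEMMNI (the 4th value) is the max number of semaphore arrays system-wide.
cat /proc/sys/kernel/sem

# Number of semaphore arrays currently allocated; when this reaches
# SEMMNI, new allocations fail with "No space left on device" (28).
ipcs -s | grep -c '^0x'
```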
Check how many semaphores are currently in use. If Apache is running correctly, you should see something like this:
root@server [/]# ipcs -s

------ Semaphore Arrays --------
key        semid      owner   perms  nsems
0x00000000 0          root    600    1
0x00000000 241893377  nobody  600    1
0x00000000 241926146  nobody  600    1
0x00000000 241958915  nobody  600    1
0x00000000 241991684  nobody  600    1
Ideally, Apache's semaphores are cleaned up when the web server stops. If Apache is stopped and you still see these semaphores, Apache has not cleaned up after itself and some semaphores are stuck. A large number of stuck semaphores can exhaust the system-wide limit and leave the resource unavailable to new processes.
In my case, I found a large array of semaphores even after the web server was down.
root@server [/]# ipcs -s

------ Semaphore Arrays --------
key        semid      owner   perms  nsems
0x00000000 0          root    600    1
0x00000000 242057217  nobody  600    1
0x00000000 242089986  nobody  600    1
0x00000000 242122755  nobody  600    1
0x00000000 242155524  nobody  600    1
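Before cleaning up, you can count how many semaphore arrays the Apache user still owns after a stop. A quick sketch, assuming Apache runs as nobody (substitute your server's user):

```shell
# Stop Apache first (the exact command varies by distro), e.g.:
#   apachectl stop
# Then count semaphore arrays still owned by the Apache user
# ($3 is the owner column of `ipcs -s`):
ipcs -s | awk '$3 == "nobody"' | wc -l
```

A non-zero count after a clean stop means stale semaphores are left behind.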
The immediate solution is to safely remove the stale semaphores.
You can remove them by running this command for each semaphore ID (the second column of the ipcs output):
ipcrm -s <semid>
Below is the command I used to delete one of the semaphores:
root@server [/]# ipcrm -s 242155524
Once the semaphores are removed, check the semaphore list again:
root@server [/]# ipcs -s

------ Semaphore Arrays --------
key        semid  owner  perms  nsems
0x00000000 0      root   600    1

root@server [/]#
To destroy all the semaphores at once, you can run this from the command line (with "nobody" being the Apache user):
for semid in `ipcs -s | grep nobody | cut -f2 -d" "`; do ipcrm -s $semid; done
or
for i in `ipcs -s | awk '/httpd/ {print $2}'`; do (ipcrm -s $i); done
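Both one-liners work, but matching the owner field exactly is a bit more robust than grepping anywhere in the line. A sketch, again assuming the Apache user is nobody:

```shell
# Remove every semaphore array owned by the Apache user.
# $3 is the owner column of `ipcs -s`; quote $semid defensively.
for semid in $(ipcs -s | awk '$3 == "nobody" {print $2}'); do
    ipcrm -s "$semid"
done
```

Run this only while Apache is stopped, or you will pull locks out from under a running server.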
Increase Apache semaphore limit
The permanent solution for a recurring issue of this kind is to increase the semaphore limits. You can view the current parameters:
root@server [/]# ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 2048
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384
To change these parameters, modify the file /etc/sysctl.conf and add the following lines:
kernel.msgmni = 1024
kernel.sem = 250 256000 32 1024
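The four values of kernel.sem are, in order, SEMMSL, SEMMNS, SEMOPM and SEMMNI. As a sketch of what the line above changes:

```shell
# kernel.sem = SEMMSL  SEMMNS  SEMOPM  SEMMNI
#   SEMMSL: max semaphores per array        (250)
#   SEMMNS: max semaphores system-wide      (256000)
#   SEMOPM: max operations per semop() call (32)
#   SEMMNI: max number of semaphore arrays  (1024)
sysctl kernel.sem
```

Raising SEMMNI from the default 128 to 1024 gives Apache far more headroom for semaphore arrays before ENOSPC is hit again.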
Then load these settings with the command:
sysctl -p