Where can we go to investigate status and performance issues with Squid?
Two tools provided with Squid help you monitor and improve the performance of your cache: its log files and the squidclient cache-manager interface.
[Ref: FAQ, HOW-TO Warnings]
Squid has many settings that warn you of abnormal behaviour, at different severity levels, through its log files.
Most log analysis tools focus on Squid's access records; for system performance you will need to review the cache.log file manually.
Squid writes detailed access logging to /var/squid/logs/access, and you can follow caching activity more selectively with something like the following:
# tail -f /var/squid/logs/access | awk '{ print $3" "$4" "$8" "$7 }'
Cache logging file. This is where general information about your cache's behaviour goes. You can increase the amount of data logged to this file with "debug_options", and control how often it is rotated with "logfile_rotate".
# tail -f /var/squid/logs/cache.log
YYYY/MM/DD HH:MM:SS| WARNING: Your cache is running out of filedescriptors
[Ref: cache.log]
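The verbosity of cache.log, and how many rotated copies of the logs squid keeps, are set in squid.conf. A minimal sketch, assuming the packaged configuration lives at /etc/squid/squid.conf (adjust the path and values to suit your install):

debug_options ALL,1
logfile_rotate 10

ALL,1 is the normal level; ALL,2 is noticeably more verbose, and logfile_rotate sets how many old copies are kept when the logs are rotated.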
“Your cache is running out of filedescriptors” is a common enough problem that, even though the solution is noted above, we describe it in more detail below:
Symptoms:
Before giving your machine more RAM or disk, check the logs; don't take drastic (read: guess-work) action first.
For many of my machines, the explanation for slow performance was in the WARNING messages left in the cache log (/var/squid/logs/cache.log):
grep WARNING /var/squid/logs/cache.log
YYYY/MM/DD HH:MM:SS| WARNING: Your cache is running out of filedescriptors
In this particular case, we had a lot of error messages with the same text: “Your cache is running out of filedescriptors.” How many file descriptors does my caching service/daemon have?
Using your shell, you can determine the number of file descriptors with something like the following.
This shows what your login class settings give to the _squid daemon:
# usermod -s /bin/ksh _squid
# su _squid
$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         2097152
stack(kbytes)        8192
lockedmem(kbytes)    296673
memory(kbytes)       887008
nofiles(descriptors) 128
processes            1310
$ exit
# usermod -s /sbin/nologin _squid
More simply:
$ ulimit -n
128
As in the earlier example, you can also use squidclient to view what the running squid process sees as its available file descriptors. This view also lets you watch your file descriptors deplete over time (a loop for this is sketched after the output below).
# squidclient mgr:info | grep -i file
File descriptor usage for squid:
        Maximum number of file descriptors:
        Largest file desc currently in use:
        Number of file desc currently in use:
        Files queued for open:
        Available number of file descriptors:
        Reserved number of file descriptors:
        Store Disk files open:
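If you would rather watch the depletion over time than take a single snapshot, a small shell loop is enough (a sketch only; the 60-second interval is arbitrary):

# while true; do squidclient mgr:info | grep -i "file desc"; sleep 60; done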
The solution? Increase the number of file descriptors available to your daemon; a clean way of doing this is through the login class, as described below.
Make sure that your DNS service is responding quickly and reliably; the speed of your DNS service directly affects the performance of Squid.
Review:
If squid is launched at boot, make sure that DNS is operational before squid starts, or squid will take noticeably longer to come up.
# squidclient mgr:
...
 ipcache                 IP Cache Stats and Contents
 fqdncache               FQDN Cache Stats and Contents
 idns                    Internal DNS Statistics
# squidclient mgr:idns
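A quick sanity check of the resolver itself, run from the cache host (www.example.com is only a placeholder; substitute a name your users actually fetch):

# cat /etc/resolv.conf
# time host www.example.com

If the lookup takes more than a fraction of a second, fix the resolver before blaming squid.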
[Ref: login.conf]
# userinfo _squid
login   _squid
passwd  *****************
uid     515
groups  _squid
change  NEVER
class   daemon
gecos   Squid Account
dir     /nonexistent
shell   /sbin/nologin
expire  NEVER
The login-class (/etc/login.conf)
Key attributes:
default:\
:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/
:openfiles-cur=XXX:\
:openfiles-max=XXX:\
squid:\
:ignorenologin:\
:datasize=infinity:\
:maxproc=infinity:\
:openfiles-cur=1024:\
:stacksize-cur=8M:\
:localcipher=blowfish,8:\
:tc=default
Whenever you make changes to login.conf, you need to update the database using cap_mkdb(1):
# [ -f /etc/login.conf.db ] && /usr/bin/cap_mkdb /etc/login.conf
We've created a new login class with higher resource allocations; now use usermod(8) to switch the user _squid to the login class created above:
# usermod -L squid _squid
Stop and start squid using the rc.d provided script:
# /etc/rc.d/squid stop
# /etc/rc.d/squid start
You can review whether your above configuration changes have taken effect by using squidclient.
# squidclient mgr:info | grep -i file
File descriptor usage for squid:
        Maximum number of file descriptors:
        Largest file desc currently in use:
        Number of file desc currently in use:
        Files queued for open:
        Available number of file descriptors:
        Reserved number of file descriptors:
        Store Disk files open:
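Squid also records the limit it detected at startup in cache.log, which is a simple cross-check that the new login class really applied; the number reported should match your openfiles-cur setting:

# grep "file descriptors available" /var/squid/logs/cache.log
YYYY/MM/DD HH:MM:SS| With 1024 file descriptors available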
For more options from squidclient, use
# squidclient mgr:
For the really insane, type in:
# squidclient mgr:filedescriptors
[Ref: FAQ: Squid Memory]
Use ps and look at vsz and rss:
# ps -axuhm | egrep "(squid|%MEM)"
USER     PID  %CPU %MEM    VSZ    RSS TT  STAT STARTED       TIME COMMAND
_squid 29894   0.3 26.9 100212 102408 ??  S    26Sep11  135:40.99 (squid) -D (squid)
root    5847   0.0  0.3   4404   1260 ??  Is   26Sep11    0:00.01 /usr/local/sbin/squid -D
_squid 30325   0.0  0.4   1092   1484 ??  S    26Sep11    1:07.61 (unlinkd) (unlinkd)
USER     PID  %CPU %MEM    VSZ    RSS TT  STAT STARTED       TIME COMMAND
From manpage ps(1):
rss    The real memory (resident set) size of the process (in 1024 byte units).
vsz    Alias: vsize.  Virtual size, in kilobytes.
From the FAQ: Squid Memory:
If your RSS (Resident Set Size) value is much lower than your process size, then your cache performance is most likely suffering due to paging.
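You can confirm the paging suspicion with vmstat: sustained non-zero values in the pi/po (page-in/page-out) columns while squid is busy mean the machine is short of memory.

# vmstat -w 5

The usual responses are to add RAM or to lower cache_mem in squid.conf so the squid process fits in memory. The figure below is only an illustration, not a recommendation:

cache_mem 64 MB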