Opened 12 years ago
Closed 12 years ago
#27 closed (fixed)
OpenVZ containers do not appear to collect system logs in the container's /var/log
| Reported by: | chaos@bbn.com | Owned by: | somebody |
|---|---|---|---|
| Priority: | major | Milestone: | |
| Component: | Monitoring | Version: | SPIRAL4 |
| Keywords: | | Cc: | |
| Dependencies: | | | |
Description
I created an OpenVZ container on pc5.utah.geniracks.net. I notice that /var/log within the container appears to be empty. I would expect logs from the container's userspace to be stored in /var/log within the container for the experimenter to access.

Output of /var/log, being empty:
[chaos@virt1 ~]$ ls -l /var/log/
total 40
-rw------- 1 root utmp       0 May 18 01:55 btmp
-rw------- 1 root utmp       0 Apr 12 01:53 btmp-20120518
drwx------ 2 root root    4096 Nov 21 07:31 httpd
-rw-r--r-- 1 root root 5840584 May 18 15:35 lastlog
drwxr-xr-x 2 root root    4096 Nov 22 20:05 mail
-rw------- 1 root root       0 Sep 13  2011 maillog
-rw------- 1 root root       0 Sep 13  2011 messages
drwxrwx--- 2 92   92      4096 Oct 18  2011 quagga
-rw------- 1 root root       0 Sep 13  2011 secure
-rw------- 1 root root       0 Sep 13  2011 spooler
-rw------- 1 root root       0 Sep 13  2011 tallylog
-rw-rw-r-- 1 root utmp    5760 May 18 15:35 wtmp
-rw------- 1 root root       0 Apr 12 01:53 yum.log
Uname, fyi:
[chaos@virt1 ~]$ uname -a
Linux virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net 2.6.32-042stab049.6.emulab.1 #1 SMP Tue Apr 3 12:08:02 MDT 2012 x86_64 x86_64 x86_64 GNU/Linux
Change History (5)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
A pedantic coworker pointed out that I said above that /var/log is empty: that's not actually true. What I meant to say is that the "files" in /var/log/ are empty (i.e. not gathering new logs based on new container activity). :>)
comment:3 Changed 12 years ago by
Huh, this caught my eye because I was pretty sure that I'd looked in a log file on an OpenVZ container before, and indeed, when I tried again, it seemed to work:
[jbs@rowan ~]$ ls -Flart /var/log | tail -5
-rw------- 1 root root  640064 May 18 15:58 faillog
-rw-rw-r-- 1 root utmp    3840 May 18 16:01 wtmp
-rw-r--r-- 1 root root 5840584 May 18 16:01 lastlog
-rw------- 1 root root    1914 May 18 16:02 secure
-rw------- 1 root root    1471 May 18 16:13 messages
[jbs@rowan ~]$ sudo -v
[jbs@rowan ~]$ ls -Flart /var/log | tail -5
-rw------- 1 root root  640064 May 18 15:58 faillog
-rw-rw-r-- 1 root utmp    3840 May 18 16:01 wtmp
-rw-r--r-- 1 root root 5840584 May 18 16:01 lastlog
-rw------- 1 root root    1471 May 18 16:13 messages
-rw------- 1 root root    2011 May 18 16:15 secure
I've got a syslogd process in my container:
[jbs@rowan ~]$ ps -ef | grep syslog
root      5165     1  0 15:58 ?        00:00:00 rsyslogd -m 0
jbs       5887  5484  0 16:16 pts/0    00:00:00 grep syslog
Anyway, more data.
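As an aside, a `ps -ef | grep syslog` check also matches the grep process itself, as the second line of the output above shows. A small sketch of the classic bracket trick that avoids the false match, applied here to a saved listing (the sample lines are taken from the output in this comment):

```shell
# In a live container you would run:  ps -ef | grep '[r]syslogd'
# The grep's own command line contains the literal brackets, which the
# pattern "[r]syslogd" does not match, so grep never matches itself.
# Below, the same filter is applied to a saved ps listing instead.
ps_output='root 5165 1 0 15:58 ? 00:00:00 rsyslogd -m 0
jbs 5887 5484 0 16:16 pts/0 00:00:00 grep syslog'

count=$(printf '%s\n' "$ps_output" | grep -c '[r]syslogd')
echo "$count"   # prints 1: only the real rsyslogd line matches
```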
comment:4 Changed 12 years ago by
Indeed, there was no syslogd process running in my container. Starting one by hand was successful:
[chaos@virt1 init.d]$ sudo service rsyslog start
Starting system logger:                                    [  OK  ]
[chaos@virt1 init.d]$ ls -lart /var/log/
total 56
-rw------- 1 root root        0 Sep 13  2011 tallylog
-rw------- 1 root root        0 Sep 13  2011 spooler
-rw------- 1 root root        0 Sep 13  2011 maillog
drwxrwx--- 2 92   92       4096 Oct 18  2011 quagga
drwx------ 2 root root     4096 Nov 21 07:31 httpd
drwxr-xr-x 2 root root     4096 Nov 22 20:05 mail
drwxr-xr-x 18 root root    4096 Nov 22 21:30 ..
-rw------- 1 root root        0 Apr 12 01:53 yum.log
-rw------- 1 root utmp        0 Apr 12 01:53 btmp-20120518
-rw------- 1 root utmp        0 May 18 01:55 btmp
-rw-rw-r-- 1 root utmp     5760 May 18 15:35 wtmp
-rw-r--r-- 1 root root  5840584 May 18 15:35 lastlog
-rw------- 1 root root        0 May 18 16:17 cron
-rw------- 1 root root        0 May 18 16:17 boot.log
drwxr-xr-x 5 root root     4096 May 18 16:17 .
-rw------- 1 root root      589 May 18 16:18 secure
-rw------- 1 root root      495 May 18 16:18 messages
So this problem may be as simple as the image on the Utah rack not configuring its containers to run rsyslog at boot.
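If that is the cause, a minimal sketch of how to check and fix the boot configuration inside a container, assuming the image uses chkconfig-managed sysvinit-style services (the successful `service rsyslog start` above suggests it does):

```shell
# Hedged sketch, assuming a chkconfig/sysvinit-style container image.
chkconfig --list rsyslog     # show which runlevels start the logger
sudo chkconfig rsyslog on    # enable rsyslog for the default runlevels
sudo service rsyslog start   # start it immediately, without a reboot
```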
comment:5 Changed 12 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
I requested an 11-VM experiment just now, so that the 11th VM would wind up on a separate physical node, which will have the new version of FEDORA15-OPENVZ-STD.
- Here's the rspec:
jericho,[~],11:00(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-F.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!--
This rspec will reserve eleven openvz nodes. It should work on any
Emulab which has nodes available and supports OpenVZ.
-->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">
  <node client_id="virt01" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt02" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt03" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt04" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt05" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt06" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt07" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt08" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt09" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt10" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
  <node client_id="virt11" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
</rspec>
- I used the ecgtest slice and created a sliver with this rspec. 10 VMs wound up on pc5.utah.geniracks.net, and 1 VM wound up on pc1.
- Once the VMs started up, I found that virt11 was on pc1. I logged into it, and it is running rsyslogd and its logs are getting populated:
[chaos@virt11 ~]$ ps -ef | grep syslog
root       308     1  0 19:16 ?        00:00:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
chaos      540   529  0 19:18 pts/0    00:00:00 grep --color=auto syslog
[chaos@virt11 ~]$ ls -lart /var/log/
total 52
-rw------- 1 root root        0 Sep 13  2011 tallylog
-rw------- 1 root root        0 Sep 13  2011 spooler
-rw------- 1 root root        0 Sep 13  2011 maillog
drwxrwx--- 2 92   92       4096 Oct 18  2011 quagga
drwx------ 2 root root     4096 Nov 21 07:31 httpd
drwxr-xr-x 2 root root     4096 Nov 22 20:05 mail
drwxr-xr-x 18 root root    4096 Nov 22 21:30 ..
-rw------- 1 root root        0 Apr 12 01:53 yum.log
-rw------- 1 root utmp        0 Apr 12 01:53 btmp
-rw------- 1 root root        0 May 21 19:16 cron
-rw------- 1 root root        0 May 21 19:16 boot.log
drwxr-xr-x 5 root root     4096 May 21 19:16 .
-rw-rw-r-- 1 root utmp     1152 May 21 19:18 wtmp
-rw------- 1 root root     2482 May 21 19:18 secure
-rw-r--r-- 1 root root  5840584 May 21 19:18 lastlog
-rw------- 1 root root     1745 May 21 19:20 messages
- I arbitrarily chose virt06 as a VM that is on pc5, and looked at that. That VM is still not running syslog and not populating logs:
[chaos@virt06 ~]$ ps -ef | grep syslog
chaos      533   522  0 19:20 pts/0    00:00:00 grep --color=auto syslog
[chaos@virt06 ~]$ ls -lart /var/log/
total 44
-rw------- 1 root root        0 Sep 13  2011 tallylog
-rw------- 1 root root        0 Sep 13  2011 spooler
-rw------- 1 root root        0 Sep 13  2011 secure
-rw------- 1 root root        0 Sep 13  2011 messages
-rw------- 1 root root        0 Sep 13  2011 maillog
drwxrwx--- 2 92   92       4096 Oct 18  2011 quagga
drwx------ 2 root root     4096 Nov 21 07:31 httpd
drwxr-xr-x 2 root root     4096 Nov 22 20:05 mail
drwxr-xr-x 5 root root     4096 Nov 22 21:24 .
drwxr-xr-x 18 root root    4096 Nov 22 21:30 ..
-rw------- 1 root root        0 Apr 12 01:53 yum.log
-rw------- 1 root utmp        0 Apr 12 01:53 btmp
-rw-rw-r-- 1 root utmp     1152 May 21 19:20 wtmp
-rw-r--r-- 1 root root  5840584 May 21 19:20 lastlog
So the new image does solve the problem, but it is not yet on pc5. (Both of these results are expected.) Leigh is going to push the new image out to pc5 later today. At any rate, since the new image appears to solve this problem, I am closing this ticket.
This isn't a rack requirement as far as I know, but it seems non-ideal, so I thought it was worth noting nonetheless.