perl locale help?

#1 Wed, 10/26/2011 - 15:47
midol

Hi,

I have a VM GPL install under OpenVZ on Ubuntu 10.04.2. Periodically I'd like to be able to send myself a logwatch report, but when I do, this is the result:

# logwatch --mailto=geek@uniserve.com
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "en_CA.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Killed
Killed
Killed
system 'cat '/var/log/messages'  | /usr/bin/perl /usr/share/logwatch/scripts/shared/expandrepeats ''| /usr/bin/perl /usr/share/logwatch/scripts/shared/removeservice 'talkd,telnetd,inetd,nfsd,/sbin/mingetty,netscreen,netscreen'| /usr/bin/perl /usr/share/logwatch/scripts/shared/applystddate ''>/tmp/logwatch.85OCq98X/messages' failed: 35072 at /usr/sbin/logwatch line 870.

I know that this is not a Perl support site, but Perl was installed as part of the VM installation. I have not tinkered with the installation and would much appreciate advice or pointers.

Dave

Wed, 10/26/2011 - 20:54
andreychek

Howdy,

You may want to take a look at /etc/locale.gen to make sure your desired locales are enabled. If you make changes, you'd need to run "locale-gen".

Also, take a peek at /etc/default/locale to make sure any variables there are set correctly.
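
For example, a rough sketch using the en_CA.UTF-8 locale from your warning above (adjust as needed):

# make sure the locale is listed in /etc/locale.gen, then regenerate
echo "en_CA.UTF-8 UTF-8" >> /etc/locale.gen
locale-gen

# /etc/default/locale would then normally hold a line such as:
LANG=en_CA.UTF-8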

However, the "killed" messages you're receiving are odd, and I wonder if the two are related.

What output do you receive if you run "free -m"?

Also, what is the contents of /proc/user_beancounters?

-Eric

Sun, 10/30/2011 - 17:01 (Reply to #2)
midol

There is no /etc/locale.gen and no /etc/default/locale.

# free -m
             total       used       free     shared    buffers     cached
Mem:          7800        544       7255          0          0          0
-/+ buffers/cache:        544       7255
Swap:            0          0          0

and

# cat /proc/user_beancounters
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      348:  kmemsize        7507148    9193216 2147483646 2147483646          0
            lockedpages           0          8        145        145          5
            privvmpages      139391     168089    1996800    1996800          0
            shmpages          12557      12573     281600     281600          0
            dummy                 0          0          0          0          0
            numproc              70         89        145        145          0
            physpages         63871      82605          0 2147483647          0
            vmguarpages           0          0     281600 2147483647          0
            oomguarpages      64114      82848     281600 2147483647          0
            numtcpsock           26         49       1160       1160          0
            numflock              6          9        145        145          0
            numpty                2          2         73         73          0
            numsiginfo            0          3        145        145          0
            tcpsndbuf        275520     309120  230686720  235438080          0
            tcprcvbuf        425984     905728  230686720  235438080          0
            othersockbuf     126976     331392  230686720  235438080          0
            dgramrcvbuf           0     166592  230686720  235438080          0
            numothersock         68        160       1160       1160          0
            dcachesize            0          0    1336320    1336320          0
            numfile            1857       2372       3480       3480          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            numiptent            14         14        145        145          0

Mon, 10/31/2011 - 09:22
andreychek

Do you see the "5" in the "failcnt" column of "lockedpages"?

If you run logwatch again, does that make the failcnt number increase?
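
To check just that line quickly, something like this should do it:

grep lockedpages /proc/user_beancounters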

As far as the locale settings go --

First, make sure locale is installed:

apt-get install locale

Then, try installing one of the locale language packs:

apt-get install language-support-en

And then try setting a default locale with this:

update-locale LANG=en_GB.UTF-8
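
(If you'd rather keep your current language setting, you could use LANG=en_CA.UTF-8 there instead.) You can then verify the setting with the "locale" command in a new login session, for example:

locale | grep LANG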

I found some docs regarding locales on Ubuntu here:

https://help.ubuntu.com/community/Locale

Fri, 11/04/2011 - 19:09 (Reply to #4)
midol

So, an update:

In the interval since my last post, the failcnt on that line has increased to 13 for unknown reasons; running logwatch again does not alter the value.

Running "apt-get install locale" tells me there is no such package available to install, but locale IS already installed. Your suggestion to install English language support got lots of action and seems to have worked out OK. Likewise, setting a default locale to en_GB worked fine once I read in the docs that I had to log in again for the change to take effect.

So far so good and I have no more locale errors.

It is always tempting, when the computer misbehaves, to think there is a single problem with it. But often there are actually several problems, and that seems to be the case here.

The logwatch command still fails, though no longer with locale errors, and still at line 870 as above.

It develops that Postfix is not running, although Dovecot is. The VM UI status page shows this, and restarting it from either the UI or the command line (using "service postfix start") fails, although the command-line version does return [OK].

syslog shows this:

Nov  4 17:01:15 cl28810 postfix/master[11337]: fatal: bind: private/lmtp: No such file or directory
Nov  4 17:01:36 cl28810 postfix/postfix-script[11515]: warning: not owned by postfix: /var/spool/postfix/private/anvil
Nov  4 17:01:36 cl28810 postfix/postfix-script[11516]: warning: not owned by postfix: /var/spool/postfix/private/lmtp
Nov  4 17:01:36 cl28810 postfix/postfix-script[11517]: warning: not owned by postfix: /var/spool/postfix/private/scache
Nov  4 17:01:36 cl28810 postfix/postfix-script[11518]: warning: not owned by postfix: /var/spool/postfix/private/maildrop
Nov  4 17:01:37 cl28810 postfix/postfix-script[11549]: starting the Postfix mail system
Nov  4 17:01:37 cl28810 postfix/master[11550]: fatal: bind: private/lmtp: No such file or directory

Looks like a permission issue, but I don't see how this came about; logwatch used to work OK, and it isn't as though I intentionally did anything to alter that.

So if you have the patience I'd like to get postfix back up, too.

Thanks for the reference to the Ubuntu docs; quite worthwhile. I'll be reading what they have to say about Postfix, too.

Dave

Sat, 11/05/2011 - 09:36
andreychek

In the interval since my last post, the failcnt on that line has increased to 13 for unknown reasons; running logwatch again does not alter the value.

Well, it's good news that it's not related to logwatch... but it sounds like other applications you're running are bumping into a memory barrier and ultimately failing.

Looks like a permission issue, but I don't see how this came about.

Hmm, Postfix permissions should all be okay by default -- those are some unusual errors! What do these two commands show:

ls -l /var/spool/postfix/private/
dpkg -l | grep postfix

It may just be a simple matter of changing the permissions of the files in /var/spool/postfix/private/, but it'd be good to see how those are currently set before changing them.
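
If it does turn out to be an ownership problem, something along these lines (run as root) should put it right -- but hold off until we've seen the current listing:

chown postfix /var/spool/postfix/private/*
service postfix restart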

-Eric

Wed, 11/09/2011 - 18:16 (Reply to #6)
midol

For the first command:

# ls -l /var/spool/postfix/private/
ls: cannot access /var/spool/postfix/private/anvil: No such file or directory
ls: cannot access /var/spool/postfix/private/lmtp: No such file or directory
ls: cannot access /var/spool/postfix/private/scache: No such file or directory
ls: cannot access /var/spool/postfix/private/maildrop: No such file or directory
total 0
s????????? ? ?       ?       ?                ? anvil
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 bounce
srw-rw-rw- 1 postfix postfix 0 2011-09-27 00:20 bsmtp
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 defer
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 discard
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 error
srw-rw-rw- 1 postfix postfix 0 2011-09-27 00:20 ifmail
s????????? ? ?       ?       ?                ? lmtp
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 local
s????????? ? ?       ?       ?                ? maildrop
srw-rw-rw- 1 postfix postfix 0 2011-09-27 00:20 mailman
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 proxymap
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 proxywrite
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 relay
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 retry
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 rewrite
s????????? ? ?       ?       ?                ? scache
srw-rw-rw- 1 postfix postfix 0 2011-09-27 00:20 scalemail-backend
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 smtp
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 tlsmgr
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 trace
srw-rw-rw- 1 postfix postfix 0 2011-09-27 00:20 uucp
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 verify
srw-rw-rw- 1 postfix postfix 0 2011-11-09 16:06 virtual