Bug 5691 - logfile pipe semantics broken (SlimServer_trunk_v2007-10-03)
Status: CLOSED FIXED
Product: Logitech Media Server
Classification: Unclassified
Component: Misc
Version: 7.0
Hardware: PC Fedora
Importance: P2 normal
Target Milestone: ---
Assigned To: Andy Grundman
Depends on:
Blocks:
Reported: 2007-10-06 22:26 UTC by quiet.dragon
Modified: 2009-01-29 09:47 UTC

See Also:
Category: ---


Attachments
Patch (2.85 KB, patch)
2007-10-07 11:57 UTC, quiet.dragon

Description quiet.dragon 2007-10-06 22:26:14 UTC
6.5.4 and earlier supported --logfile="|..." to indicate logs should be piped to another process.

This no longer works with 7.0a. There are a bunch of problems:

0. A bug in addLogAppender:

        if ($filename =~ s/|//) {
                $filemode = 'pipe';
        }

   That regexp should be s/^\|// (i.e. escape the | and anchor it to the start). This small
   program shows the problem:

   $s="x";
   print "Surprise\n" if ($s =~ s/|//);

1. _defaultAppenders uses serverLogFile(), scannerLogFile() and perfmonLogFile(), which are
   sensitive to the --logfile option but pay no attention to the leading |, leave the mode at
   "append", and cause log4perl to attempt to open a file literally named "|..." :-(

2. With a little work, it is possible to have serverLogFile() et al. strip the leading | and
   set the log mode to "pipe". This doesn't have the desired effect, because it spawns _three_
   processes (one each for the server, scanner and perfmon logs) which all compete to write to
   the log directory --- urk --- a mess all around.

3. Essentially, the semantics are broken. In 6.5.4 the semantics of --logfile="|..." were that
   a single process should absorb all the log output. In 7.0a there are three logs: server,
   scanner and perfmon. The most obvious interpretation is that --logfile="|..." should
   multiplex server, scanner and perfmon into the one process.

   The problem is that each log is specified as a separate appender. It's not clear to me
   whether it's even possible to multiplex these logs together using log4perl.
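
   In principle, log4perl does let several logger categories share one appender, which is
   roughly what multiplexing would require. A speculative sketch (category names invented,
   untested against the SlimServer tree, and assuming the File appender's pipe mode behaves
   as hoped):

   # Speculative: three categories (names invented) sharing one appender,
   # i.e. one process absorbing all three logs.
   use Log::Log4perl;

   my $conf = q(
       log4perl.logger.server  = INFO, Mux
       log4perl.logger.scanner = INFO, Mux
       log4perl.logger.perfmon = INFO, Mux

       log4perl.appender.Mux          = Log::Log4perl::Appender::File
       log4perl.appender.Mux.filename = multilog /var/log/slimserver
       log4perl.appender.Mux.mode     = pipe
       log4perl.appender.Mux.layout   = Log::Log4perl::Layout::SimpleLayout
   );
   Log::Log4perl::init(\$conf);
   Log::Log4perl->get_logger('server')->info('all three categories share one pipe');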

The simplest fix might be to use $::logfile only for serverLogFile() and to remove references to
it from addLogAppender(), scannerLogFile() and perfmonLogFile(), then fix serverLogFile() to
allow for both "append" and "pipe" by introducing serverLogMode():

                'server' => {
                        'appender' => 'Log::Log4perl::Appender::File',
                        'mode'     => 'sub { Slim::Utils::Log::serverLogMode() }',
                        'filename' => 'sub { Slim::Utils::Log::serverLogFile() }',
                },
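
A minimal sketch of what serverLogMode() might look like (hypothetical, not code from the tree):

        sub serverLogMode {
                # 'pipe' when --logfile starts with |, 'append' otherwise
                return (defined $::logfile && $::logfile =~ /^\|/) ? 'pipe' : 'append';
        }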

The downside of this fix is that --logfile only specifies the server log. All other logs go their
separate ways.
Comment 1 quiet.dragon 2007-10-07 11:57:52 UTC
Created attachment 2223
Patch
Comment 2 quiet.dragon 2007-10-07 12:03:36 UTC
This morning I thought of a reasonable strategy for a patch. The patch is attached.

The patch looks for a leading | to indicate that output needs to be piped to a process and that the log mode should be pipe instead of append.

The code also looks for an embedded % in the file name and substitutes the name of the log stream, so the different logs can be differentiated. The following cases are thus accommodated (a sketch of this logic follows the examples):

--logfile=foo.log

   Append all logs to the same file.

--logfile=log/%/foo.log

   Append each log to foo.log, but each in its own subdirectory (e.g. log/server/foo.log).

--logfile="| multilog /var/log/slimserver/%"

   Pipe each log to a separate multilog process, with each multilog writing to its own subdirectory.
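
A rough sketch of the strategy (hypothetical helper name, not the patch itself):

    sub logFileForStream {
            my ($stream) = @_;                       # 'server', 'scanner' or 'perfmon'
            my $spec = $::logfile or return;         # work on a copy; bail if --logfile unset
            my $mode = ($spec =~ s/^\|\s*//) ? 'pipe' : 'append';
            $spec =~ s/%/$stream/;                   # substitute the log stream name
            return ($spec, $mode);
    }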
Comment 3 Andy Grundman 2008-01-07 09:37:50 UTC
Thanks, patch applied as change 15967.
Comment 4 Chris Owens 2008-03-07 09:03:16 UTC
This bug is being closed since it was resolved in a version that is now released! Please download the new version of SqueezeCenter (formerly SlimServer) at http://www.slimdevices.com/su_downloads.html

If you are still seeing this bug, please re-open it and we will consider it for a future release.