Bug 442 - timestamps for events in slimserver.log would be useful
Status: CLOSED FIXED
Product: Logitech Media Server
Classification: Unclassified
Component: Misc
Version: 5.x or older
Hardware: PC Linux (other)
Importance: P2 enhancement with 1 vote
Target Milestone: Future
Assigned To: Blackketter Dean
Depends on:
Blocks:
Reported: 2004-07-10 04:52 UTC by David Brittain
Modified: 2011-03-16 04:19 UTC
CC List: 0 users

See Also:
Category: ---


Attachments
Example log with no timestamp information (38.52 KB, text/plain)
2004-07-13 14:39 UTC, David Brittain

Description David Brittain 2004-07-10 04:52:23 UTC
Items entered into /tmp/slimserver.log have no timestamp. This makes it hard to
tell whether entries in there are related to a problem you have recently
observed or not.
Comment 1 Blackketter Dean 2004-07-13 12:37:26 UTC
Logging done explicitly by SlimServer does include timestamps, but system- and Perl-generated errors
do not. Can you attach a sample log file so I can make sure that the errors you are seeing get
logged properly?
Comment 2 David Brittain 2004-07-13 14:39:26 UTC
Created attachment 67
Example log with no timestamp information
Comment 3 KDF 2004-07-14 15:08:07 UTC
That's a log of CPAN stdout information. SlimServer has no control over that.
Comment 4 David Brittain 2004-07-17 11:23:10 UTC
Maybe the answer then is to output a timestamp to the log file periodically -
say once a day. That way at least you can tell whether the errors are recent or not.
Comment 5 KDF 2004-07-26 22:51:51 UTC
That's an interesting idea. One potential issue would be that some users might be put off by
having data written to the log when there is nothing to log. Any thoughts on that?

Once a day might be a good compromise, that's true, but it is not inconsequential to set up a
24-hour timer. All timers are checked once per second, so there is a fractional amount of
overhead to consider.

What do you think, Dean?
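
For illustration, here is a minimal pure-Perl sketch of one way to get the effect discussed above without a 24-hour timer and without writing to an otherwise idle log: remember the last day a marker was written and emit a dated line only when something is actually logged on a new day. The log_line helper and the marker format are hypothetical, not actual SlimServer code.

use strict;
use warnings;
use POSIX qw(strftime);

my $last_stamp_day = '';

# Hypothetical helper: write a dated marker line at most once per day,
# and only when a message is actually being written to the log.
sub log_line {
    my ($fh, $line) = @_;

    my $today = strftime('%Y-%m-%d', localtime);
    if ($today ne $last_stamp_day) {
        print {$fh} "---- $today ----\n";
        $last_stamp_day = $today;
    }
    print {$fh} $line, "\n";
}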
Comment 6 Blackketter Dean 2004-07-27 09:48:00 UTC
Yes, I think we shouldn't log unless there's a specific issue we are logging.
Unless there's a way to intercept STDERR and STDOUT and add timestamps, I'm not sure there's 
anything we can do.
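
For reference, a sketch of one way to intercept prints to STDERR and prefix timestamps, using Perl's tie mechanism. TimestampedHandle is a made-up package name, and whether warn()/die() output passes through a tied STDERR depends on the Perl version, which is why the __WARN__ hook discussed in the next comment is the more reliable route.

package TimestampedHandle;

use strict;
use warnings;
use POSIX qw(strftime);

# Minimal tie interface: forward each print to the real filehandle,
# prefixed with a timestamp.
sub TIEHANDLE { my ($class, $fh) = @_; return bless { fh => $fh }, $class; }

sub PRINT {
    my ($self, @lines) = @_;
    my $stamp = strftime('%Y-%m-%d %H:%M:%S', localtime);
    print { $self->{fh} } "$stamp ", @lines;
}

sub PRINTF {
    my ($self, $fmt, @args) = @_;
    $self->PRINT(sprintf($fmt, @args));
}

package main;

open(my $log, '>>', '/tmp/slimserver.log') or die "Cannot open log: $!";
tie *STDERR, 'TimestampedHandle', $log;

print STDERR "this line is timestamped in the log\n";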
Comment 7 Marc Sherman 2005-01-16 08:59:49 UTC
I'm not a perl expert, but I just saw something that looks like it might be
related to this issue in the exim mailing list:
--
I've cracked a problem causing disconnections by clients.
Output from perl's "warn" statement is written to the client interface by 
default. Library routines (Mail::SRS for example) are unaware of this and the 
client disconnects having been fed garbage.

The solution is to use this little bit of perly magic:

$SIG{__WARN__} = sub { Exim::log_write($_[0]) };

Output then goes to the logfile.
--

Could this be used to route warning messages from other Perl modules to a method
that timestamps the output and sends it to the log file?
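
A minimal sketch of how that hook could be adapted here, assuming output should go to /tmp/slimserver.log; this is illustrative only, not necessarily the change that was eventually committed (see Comment 8).

use strict;
use warnings;
use POSIX qw(strftime);

open(my $log, '>>', '/tmp/slimserver.log') or die "Cannot open log: $!";

# Route warn() output (including warnings raised inside CPAN modules)
# through a handler that prefixes a timestamp before writing to the log.
$SIG{__WARN__} = sub {
    my $stamp = strftime('%Y-%m-%d %H:%M:%S', localtime);
    print {$log} "$stamp $_[0]";
};

warn "this warning ends up timestamped in the log\n";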
Comment 8 Blackketter Dean 2005-08-10 12:36:09 UTC
Thanks Marc, seems to work. Committed in revision 3927. Please verify.
Comment 9 Marc Sherman 2005-08-10 12:40:24 UTC
I'm still running 5.4.1, so you might want to re-assign this to someone in your
qa group for verification if you want it to be at all timely...
Comment 10 James Richardson 2008-12-15 13:05:49 UTC
This bug appears to have been fixed in the latest release!

If you are still experiencing this problem, feel free to reopen the bug with your new comments and we'll have another look.

Make sure to include the version number of the software you are seeing the error with.
Comment 11 Chris Owens 2008-12-18 11:54:59 UTC
Routine bug db maintenance; removing old versions which cause confusion.  I apologize for the inconvenience.