Bug 9117 - Boom Exceedingly Slow when using ReadyNas (or other slow) system...
Status: CLOSED FIXED
Product: Logitech Media Server
Classification: Unclassified
Component: Player UI
Version: unspecified
Hardware: PC Other
Importance: -- major
Target Milestone: ---
Assigned To: Matt Wise
Reported: 2008-08-11 16:10 UTC by Matt Wise
Modified: 2009-09-08 09:20 UTC
CC: 6 users

Description Matt Wise 2008-08-11 16:10:37 UTC
Boom (and for that matter, other players too ... just not to the same extent) is unusably slow when connecting to a ReadyNas platform running SC 7.2. The volume control on the knob is almost useless -- and control with the IR remote is just about as bad.


With the Transporter, the volume knob seems to send updates in a "set volume to position XXX" mode. When packets are lost, it simply resends "set volume to position YYY now". The Boom seems to just send "Volume UP" packets. When those are lost (or buffered), the volume either doesn't go up nearly fast enough, gets completely stuck, or sometimes jumps all the way up (or down).
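
To illustrate the difference, here is a minimal Perl sketch (my own illustration, not SqueezeCenter code) of how the two protocols behave under packet loss:

use strict;
use warnings;

# Ten knob clicks: each is both a relative "+1" and an absolute target position.
my @events = map { +{ rel => 1, abs => $_ } } (51 .. 60);

# Simulate a burst of packet loss: the 3rd through 7th clicks never arrive.
my @received = (0, 1, 7, 8, 9);

my ($rel_volume, $abs_volume) = (50, 50);
for my $i (@received) {
    $rel_volume += $events[$i]{rel};   # relative: every lost click is lost forever
    $abs_volume  = $events[$i]{abs};   # absolute: the last packet corrects everything
}

print "relative protocol ends at $rel_volume (wanted 60)\n";   # 55
print "absolute protocol ends at $abs_volume (wanted 60)\n";   # 60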

Additionally, menus and general interaction are extremely slow. SqueezeCenter seems to be taking up too many CPU resources when it's just playing music on one or two players. This almost certainly contributes to the slowness.

I've done some performance testing; on the Boom, when I change the volume rapidly, I get these messages:
[08-08-11 16:08:23.5452] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.5751] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.5992] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.6275] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.6516] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.6758] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:23.6977] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:30.7782] Slim::Hardware::IR::processIR (664) Received 0001005a while expecting 0001005b, dropping code
[08-08-11 16:08:34.0853] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:34.1016] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:34.1215] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:34.1376] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:34.1539] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:36.7681] Slim::Hardware::IR::executeButton (1102) Button [fwd] with irCode: [undef] not implemented in mode: [playlist]
[08-08-11 16:08:45.9469] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:45.9762] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code
[08-08-11 16:08:46.0011] Slim::Hardware::IR::processIR (664) Received 0001005b while expecting 0001005a, dropping code


No matter how hard I try, I cannot get the same messages with the Transporter knob...
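
For what it's worth, the dropped-code messages above suggest the server expects successive knob clicks to alternate between two IR codes. A rough Perl sketch of that kind of filter (my guess at the behaviour, not the actual Slim::Hardware::IR code) shows how a buffered or out-of-order event gets silently discarded:

use strict;
use warnings;

my %other    = ('0001005a' => '0001005b', '0001005b' => '0001005a');
my $expected = '0001005a';
my $applied  = 0;

for my $code (qw(0001005a 0001005b 0001005b 0001005a 0001005b)) {
    if ($code eq $expected) {
        $applied++;
        $expected = $other{$code};    # the next click must use the other code
    }
    else {
        warn "Received $code while expecting $expected, dropping code\n";
    }
}
print "applied $applied of 5 clicks\n";    # 4 of 5: one click was silently lost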
Comment 1 Blackketter Dean 2008-08-11 16:20:57 UTC
This isn't really a player bug; rather, we should be profiling the server side to figure out why the response time is so poor...

Andy/Brandon: Can you help Matt do a little profiling?
Comment 2 Matt Wise 2008-08-11 17:21:12 UTC
Further testing on a Mac Mini (1.25 GHz G4, single core) indicates that this does not occur to nearly the same extent unless the CPU is busy with other tasks. I was able to reproduce it while running a large file backup to the same volume as my music, but once that backup finished, performance returned to normal.

I would agree that the primary focus of this bug should be SqueezeCenter's CPU utilization on the ReadyNas platform.

A close look should also be taken at how Boom sends volume updates compared to Transporter. A Transporter connected to a slow ReadyNas server is far more usable. Still not fast, but better.
Comment 3 Adrian Smith 2008-08-12 14:39:31 UTC
Is the ReadyNas really that slow?

The main issue is that we go through all of the IR processing, list, and display updates for every rate-limited click on Boom. On Transporter we do at most one update per round trip, so it self-adapts to the server and link performance.
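
Roughly, the one-update-per-round-trip pattern looks like this (a simplified Perl sketch of the idea, not the actual code):

use strict;
use warnings;

my $pending;          # latest requested volume, not yet sent
my $in_flight = 0;    # true while an update awaits its acknowledgement

sub send_update {
    my $target = $pending;
    undef $pending;
    $in_flight = 1;
    print "set volume to $target\n";    # stands in for the real network send
}

sub on_knob_click {
    my ($target) = @_;
    $pending = $target;                 # later clicks overwrite earlier ones
    send_update() unless $in_flight;
}

sub on_ack {                            # the server confirmed the last update
    $in_flight = 0;
    send_update() if defined $pending;  # at most one update per round trip
}

# Five rapid clicks, then one acknowledgement: only 51 and 55 go over the wire.
on_knob_click($_) for 51 .. 55;
on_ack();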

Other than implementing the Transporter knob protocol on Boom, I think there is some scope to optimise some of the lines functions etc. Recent display code has been a lot less fussy about CPU load, as I've got a faster server (!) However, I'm surprised that just scrolling through a list causes these problems, as there's no database access involved -- just Perl processing to manipulate the menu and create new displays.

Can you do some profiling on the box itself with --perfwarn?  I assume you can get console access and run the server from it?
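
For example, from a console on the NAS (assuming the usual slimserver.pl entry point, and that --perfwarn takes a threshold in seconds -- check --help on your build):

perl slimserver.pl --perfwarn=0.1

That should log a warning for any monitored task that runs longer than the threshold, which would point at the slow code paths.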

Also see the comments against the previous bug on this.
Comment 4 Matt Wise 2008-08-13 12:07:36 UTC
While we look into the performance issues here, I've opened a second bug, 9137, to address the knob design on Boom vs. Transporter.
Comment 5 James Richardson 2008-08-27 11:23:08 UTC
Verified better response time with SqueezeCenter 7.2-22900
Comment 6 James Richardson 2008-12-15 11:59:12 UTC
This bug has been fixed in the latest release of SqueezeCenter!

Please download the new version from http://www.slimdevices.com/su_downloads.html if you haven't already.  

If you are still experiencing this problem, feel free to reopen the bug with your new comments and we'll have another look.