
Screamernet problem with HT and HV



jasonazure
08-17-2005, 04:53 AM
I've recently discovered a problem with HyperVoxel (sprite) heavy renders when using ScreamerNet on HT (Hyper-Threading) processors!

I have a small render farm consisting mostly of Hyper-Threading processors, with a couple of older 2.4GHz non-HT processors. I was rendering a HyperVoxel-heavy scene and noticed that the older processors were racing ahead of everything else. This was very confusing, and I initially thought it might be something to do with large Napalm files and firewalls (the two older machines are the only ones with no firewall installed), so I messed around with that idea for a while with no success.

Eventually I brought up Task Manager on one of the HT machines and used Set Affinity to force the ScreamerNet node onto a single logical CPU. Wow! It suddenly took off and started rendering at least five times faster!
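For anyone who wants to script the same fix rather than clicking through Task Manager every session, here's a rough Python sketch of the equivalent call (untested; it assumes Windows and stock Python with ctypes, and finding the node's PID is left to you):

# Scripted equivalent of Task Manager's "Set Affinity" trick:
# restrict a running LWSN node to a single logical CPU.
import ctypes

PROCESS_SET_INFORMATION = 0x0200
PROCESS_QUERY_INFORMATION = 0x0400

def pin_to_one_cpu(pid, cpu_index=0):
    """Limit the given process to one logical CPU via its affinity mask."""
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.OpenProcess(
        PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise OSError("could not open process %d" % pid)
    # The mask is a bit field: bit N set means logical CPU N is allowed.
    kernel32.SetProcessAffinityMask(handle, 1 << cpu_index)
    kernel32.CloseHandle(handle)

pin_to_one_cpu(1234)  # 1234 is a placeholder PID for the LWSN node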

Maybe something for NewTek to address!

djlithium
08-23-2005, 11:47 AM
I have discovered similar issues running 64-bit Xeons at 3.2GHz with HT, using LWSN controlled via Smedge. I have found that if you want to consume all of the available CPU power (just watch the RAM), it's best to set the render job to use a single thread per LWSN instance, instead of launching one LWSN node and trying to get it to use all 4 CPUs (2 physical in a box, 2 virtual via HT), especially if you are running WinXP 64-bit.
I run about 16 of these machines in-house at BSG-75 VFX, and for the longest time (well, about two days before I scrapped trying to futz around with Smedge, ran some tests in LWSN and LW directly, and compared notes) I was only able to get a system to use 25% of its total CPU power. Grr.
But once you understand what it's doing, it makes complete sense. Still, I think NewTek should document this and put out an alert or something.
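Boiled down, the launcher amounts to something like this (a rough Python sketch, not our actual Smedge setup; the lwsn.exe path, config/content directories, and job/ack names are placeholders, and the exact LWSN command-line flags vary by LightWave version):

# Launch one single-threaded LWSN node per logical CPU and pin each one,
# instead of one node trying to spread itself across all four.
import ctypes
import subprocess

LWSN = r"C:\LightWave\Programs\lwsn.exe"  # placeholder install path
NUM_CPUS = 4  # 2 physical + 2 virtual (HT) on these Xeons

PROCESS_SET_INFORMATION = 0x0200
PROCESS_QUERY_INFORMATION = 0x0400
kernel32 = ctypes.windll.kernel32

for cpu in range(NUM_CPUS):
    cmd = '"%s" -2 -c"C:\\LW\\Configs" -d"C:\\LW\\Content" job%d ack%d' % (
        LWSN, cpu + 1, cpu + 1)
    proc = subprocess.Popen(cmd)
    # Pin the freshly launched node to its own logical CPU (bit N of the mask).
    handle = kernel32.OpenProcess(
        PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, False, proc.pid)
    kernel32.SetProcessAffinityMask(handle, 1 << cpu)
    kernel32.CloseHandle(handle)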

Mebek
08-23-2005, 12:00 PM
Is there a way to set this up so each LWSN instance uses one CPU every time, or do I have to do it manually for every session? I've also noticed this (previously) inexplicable problem with my ScreamerNet sessions. I use two LWSNs on this machine (one per CPU), and one always takes about 60% longer than the other, no matter what the scene is, while a third CPU (on another machine) does a frame in half the time of my fastest thread, even though the specs are too similar to explain such a difference.

Knight Chat X
08-24-2005, 03:27 AM
Correct me if I'm wrong, but I think what he's saying is that instead of launching one ScreamerNet instance per machine, you should launch multiple instances (one per CPU), because a single instance isn't using all of the CPU's available horsepower to render; it's only using about 25%. That would explain why network rendering seemed slower for me than rendering on one machine. Of course, the network connection plays a small factor because there are processing delays; I find a firewall slows things down a tiny bit, but it's barely noticeable.

On my AMD Athlon XP 3000+ it only takes about 10-15%, but I keep my systems fully optimized and running cool when possible, and I try to make the video cards do the graphics work instead of the main CPU; that's what they're for, to take load off the CPU anyway.

Mebek
08-24-2005, 12:40 PM
I have a dual-core P4 and I use two LWSN nodes on this machine for network rendering. Since learning the trick of setting each one to use only one CPU, I find that both CPUs are being used 100% and my render times are more normal. I've tried running more than two LWSN nodes on this machine and found that it slows total rendering down to the point that there is no benefit at all in doing so.

So for me, one LWSN per CPU works the best.

My question was whether there is some way to have each LWSN node automatically use just one CPU or the other, instead of both trying to balance at 50% and interfering with each other. Right now I have to set it up manually every time they run.
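What I'm imagining is a small script that does the pinning automatically, something like this rough sketch (it leans on the third-party psutil package, which is an assumption on my part, and assumes the nodes show up as lwsn.exe):

# Find every running LWSN node and give each its own logical CPU,
# so they stop balancing at 50% and fighting each other.
import psutil

nodes = [p for p in psutil.process_iter(['name'])
         if (p.info['name'] or '').lower() == 'lwsn.exe']

for cpu, proc in enumerate(nodes):
    proc.cpu_affinity([cpu])  # one distinct logical CPU per node
    print('pid %d -> CPU %d' % (proc.pid, cpu))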

Knight Chat X
08-25-2005, 04:47 AM
OK, I tested this further, and it appears ScreamerNet may become unstable when this method is used: when the CPU spikes to 100%, a crash results. On top of that, I'm also finding that volumetric rendering isn't working with network rendering.

The method mentioned also results in frames that appear to have been rendered but actually haven't.