View Full Version : Dual Core 50 Percent CPU Usage...

03-27-2007, 08:07 AM
I apologize in advance because I'm sure this subject has been discussed before, but for the life of me I could not find any "Search" function to search within a forum. (I'm a new member.)

So the first question is - How do I search through a forum of threads so I don't have to post redundant questions? (It seems my browser (IE - I'm at work) is hiding the button or something because I don't see it anywhere.)

The real issue is a simple one - I'm running a dual-core AMD Opteron PC with ScreamerNet, using LightWave 8.5 and Windows XP. I followed the manual and started two instances of the LWSN batch file, and everything renders as it should from the Layout Network Render dialog.

The only problem is that in the Windows Task Manager CPU usage graph, while both CPUs are rendering, the overall CPU usage never gets any higher than exactly 50%. Something else that is peculiar: whenever an individual CPU grabs a new frame, the overall CPU usage quickly drops to exactly 25%, then jumps back to 50% once it starts rendering again.

It seems as though there's some setting or threshold in place that isn't allowing each CPU to use more than 50% of its own processing power, thus totaling an overall 50% CPU usage (the sum of both processors).
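For what it's worth, those exact 50% and 25% plateaus fall out of simple arithmetic: the overall figure in Task Manager is an average across every logical core the OS sees, so W fully busy render nodes out of N logical cores show as 100·W/N. A minimal sketch (the function name is mine) shows that the observed plateaus match a machine with four logical cores, not two:

```python
def overall_cpu_usage(busy_workers: int, logical_cores: int) -> float:
    """Overall percentage Task Manager would report, assuming each busy
    worker saturates one logical core and the remaining cores sit idle."""
    return 100.0 * busy_workers / logical_cores

# Two LWSN nodes on a machine with only two logical cores would show 100%...
print(overall_cpu_usage(2, 2))  # 100.0
# ...but the observed plateaus match a machine with FOUR logical cores:
print(overall_cpu_usage(2, 4))  # 50.0  (both nodes rendering)
print(overall_cpu_usage(1, 4))  # 25.0  (one node between frames)
```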

Does anyone have any advice/knowledge/experience with this issue? Or even better, a thread to direct me to?

THANKS! :thumbsup:


03-27-2007, 02:58 PM
For several months now I have been experimenting to see whether you really need one command prompt window per CPU. In other words, I have found that it is quite often better to just run one instance, even on multi-CPU machines.

I have found this to be true on both PCs and Macs. I don't believe multi-processor computers are properly supported by LWSN. Just my opinion.

03-28-2007, 11:07 AM
Upon further research of the CPU, I realized the machine I'm working on actually has two dual-core AMD Opteron 285 CPUs. So the machine is actually quad-core. This was confirmed using the "System Information" utility in Windows XP, which displays four "processors", each at 2650 MHz. (Four 2.6 GHz processors! Guess my company knows what it's doing after all.) It's funny that no one had any idea this was a four-core machine until we started doing batch renders in LightWave. Neither the BIOS startup screens nor the "My Computer" properties window say anything about it being dual, dual-core. Only the "System Information" utility does.

This also explains why the overall CPU usage maxed out at 50%. After starting four instances of LWSN.exe and restarting Layout, all four processors are now recognized and all are being used for rendering, and CPU usage now goes to a full 100%. It's nice to instantly double your rendering speed (minus overhead).
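Starting one node per core by hand can be scripted rather than maintained in separate batch files. A hedged sketch of that idea: the paths below are hypothetical placeholders for your own install, and `os.cpu_count()` returns the logical processor count Windows reports (four on this box). The `-2` slave mode and per-node job/ack files follow the usual ScreamerNet command-line convention, but check your manual for the exact syntax of your LightWave version:

```python
import os

# Hypothetical paths -- substitute your real LightWave install and
# shared ScreamerNet job/ack directory.
LWSN = r"C:\LightWave\Programs\LWSN.exe"
CONFIG_DIR = r"C:\LightWave\Config"
JOB_DIR = r"\\server\screamernet"

def lwsn_commands() -> list[str]:
    """Build one ScreamerNet node command line per logical core."""
    cores = os.cpu_count() or 1
    cmds = []
    for node in range(1, cores + 1):
        # -2 = ScreamerNet slave mode; each node gets its own numbered
        # job/ack files so Layout's Network Render panel can address it.
        cmds.append(
            f'"{LWSN}" -2 -c"{CONFIG_DIR}" '
            f'"{JOB_DIR}\\job{node}" "{JOB_DIR}\\ack{node}"'
        )
    return cmds

for cmd in lwsn_commands():
    print(cmd)
```

Launching each command in its own window (e.g. via `start` in a batch file, or `subprocess.Popen` here) reproduces the four-instance setup without hardcoding the core count.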

On a side note: I wasn't even aware there was such a thing as dual, dual core. I knew there was quad-core, but I thought that referred to a specific architecture made for four CPUs. Or is a quad-core machine really just a dual, dual-core machine? (Are they one and the same?)

Also, I found the search capability for the forum. It showed up once I confirmed my membership.


03-29-2007, 06:46 AM
There are both, and when Intel's "Penryn" models start shipping late this year you will easily be able to put two quad-core processors in a single machine! The quad-cores available now have been dismissed in certain circles as being nothing more than "dual-core with hyperthreading". How true that is I have yet to see... interesting times indeed.


Tom Wood
03-29-2007, 07:01 AM
Love my (now ancient) dual Xeons when rendering - four render nodes with the two extra 'virtual' processors. :D

03-29-2007, 10:33 AM
What's a virtual processor? Why are they useful?

Tom Wood
03-29-2007, 12:36 PM
There are only two actual physical Xeon processors in the computer, but with hyperthreading two additional 'virtual' processors show up in Task Manager, and they are accessible to ScreamerNet. I don't know how well that works out for other applications, but it means four render nodes for the price of two. Xeons are more expensive, but still...
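The count the OS (and ScreamerNet) sees is just sockets × cores per socket × threads per core. A small sketch (function name mine) covering both machines in this thread:

```python
def logical_processors(sockets: int, cores_per_socket: int,
                       threads_per_core: int) -> int:
    """Logical processors the OS reports, and hence the number of
    ScreamerNet nodes worth launching."""
    return sockets * cores_per_socket * threads_per_core

# Dual single-core Xeons with hyperthreading (2 threads per core):
print(logical_processors(2, 1, 2))  # 4 render nodes
# The dual dual-core Opteron 285 box from earlier (Opterons have no HT):
print(logical_processors(2, 2, 1))  # 4 render nodes
```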

04-03-2007, 02:09 AM
A CPU core is actually a pipeline of multiple independent stages, and not all instructions take the same amount of time. Hyperthreading lets a second thread use parts of the pipeline that would otherwise sit idle while the rest of the pipeline catches up - hence the 'virtual' processor. Even a quad-core CPU can benefit from this.

04-05-2007, 08:00 AM
On occasion, however, if you really watch what the ScreamerNet output dump is doing and put up Task Manager at the same time, you will find a lot of CPU time isn't being consumed, because certain operations like "moving object", object loading, and even some rendering operations are not multi-threaded. YET.
I touch on this in my tutorial here http://www.battlestarvfx.com with respect to how much time you can waste by using multi-layered objects instead of single-layer objects wherever possible. You can waste (and I have seen this myself) up to 90% of the rendering time itself, from frame start (loading stuff) to actual image save-out, simply on "moving object" operations.
Part of the issue is simply that each time one of those operations starts, it never gets to the point of using more than 25 or 50% of the CPU before it finishes and moves on to the next object. Multi-threading and CPU consumption behave much like FTP: a transfer starts out slow, ramps up until it hits an error, backs off, then ramps up again to the maximum it can sustain. Any time there is a new file to transfer (or, in the case of LWSN, a new object to move), it starts from zero again and ramps up.

A layman's explanation, but that's more or less how the CPU is dealing with those operations.

04-06-2007, 08:33 AM
On my system (the dual dual core Opteron setup mentioned above), when I do a network render using four instances of LWSN, each processor gets assigned a frame, the scene gets moved to the processor, and then the processor cranks at 100% until the frame is rendered. Each processor seems to sit at 100% until it's done. The processor then drops when it's getting a new frame, then it quickly ramps back up to 100%.

Is there something not optimized in this scenario?