
UB ScreamerNet works, doesn't it?



Wopmay
02-09-2009, 05:25 PM
Hi.

I started a thread in the ScreamerNet forum because my node machines are ignoring the GI cache file. Everything seems to be set up correctly: the nodes render normally and drop their frames onto the host machine.

The ScreamerNet log says the node loads the correct cache file, with the correct path to the correct disk on the network, then apparently ignores the information. The scene flashes like crazy -- but only the frames rendered by the node machines (Intel) are funky. The host machine is a PPC G5 and the frames from that machine are beautiful. I can render forwards, backwards, and skip around, and the GI matches perfectly on that machine. It just takes three times as long.

About to pull hair out.

Just thought I'd ask whether the UB build has a known problem with this, or whether there's something I'm missing.

Any clues or tips greatly appreciated.

Thanks,
W

In ScreamerNet forum:
http://www.newtek.com/forums/showthread.php?t=94982

brian.coates
02-09-2009, 06:52 PM
You could try generating a GI cache on one of your Intel machines and have your Intel nodes use that file, instead of the G5-generated version.

Wopmay
02-09-2009, 07:15 PM
You could try generating a GI cache on one of your Intel machines and have your Intel nodes use that file, instead of the G5-generated version.

Thanks.

I might try that. Unfortunately, I'm past halfway through rendering the whole scene and I'd have to replace too many frames, at least on this project. I'd sure like to figure this out, though, because it has been a drag.

I'm hoping somebody has hit this same snag and it's something simple I've overlooked.

The other thing that's strange is that the frames rendered on the UB nodes have bizarre artifacts all over them while the PPC frames that supposedly use the same cache file are pristine.

:-)

W

brian.coates
02-09-2009, 08:50 PM
The other thing that's strange is that the frames rendered on the UB nodes have bizarre artifacts all over them while the PPC frames that supposedly use the same cache file are pristine.
I know that Intel and PowerPC chips use different algorithms for generating random numbers, which in turn would affect how GI (and most procedural textures) are calculated in LW. I suspect this is the main cause of your problem, which is why I suggested making an Intel-generated cache file for your Intel nodes.
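
As an aside, here is a minimal Python sketch of that effect -- nothing LightWave-specific, and the toy hash and helper names are made up -- showing how an intermediate result that differs by a single bit between two CPUs can drive a hash-based procedural/GI sample to a completely unrelated value:

import math
import struct

def float_bits(x):
    # Reinterpret the 64-bit float's bit pattern as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def toy_sample(x, seed=0):
    # Toy stand-in for a hash-driven procedural/GI sample:
    # mix the float's bits with a seed and map the result into [0, 1).
    h = (float_bits(x) ^ seed) * 0x9E3779B97F4A7C15 % (1 << 64)
    h ^= h >> 31
    return (h % (1 << 53)) / float(1 << 53)

a = 0.1 + 0.2                # one machine's intermediate result
b = math.nextafter(a, 1.0)   # the same value, one bit away (Python 3.9+)
print(toy_sample(a), toy_sample(b))   # two unrelated-looking samples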

It sounds like you may also have pre-process set to "Automatic" or "Always" under GI Cache options. Setting that to "Never" should force your nodes to ONLY use the cache file instead of doing any GI calculations of their own.
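
To put that in rough pseudo-code terms -- this is only a conceptual model of how these modes are described in this thread, not LightWave's actual behaviour or code:

from enum import Enum

class Preprocess(Enum):
    # Mode names mentioned in this thread; the semantics below are
    # paraphrased from the posts, not from LightWave's documentation.
    AUTOMATIC = "Automatic"
    ALWAYS = "Always"
    NEVER = "Never"
    LOCKED = "Locked"   # like Never, but the cache is also read-only

def node_may_compute_gi(mode):
    # "Never"/"Locked": the node only reads the shared cache file.
    # "Automatic"/"Always": the node may run its own GI pre-process,
    # which is where per-CPU differences can creep into the frames.
    return mode in (Preprocess.AUTOMATIC, Preprocess.ALWAYS)

print(node_may_compute_gi(Preprocess.AUTOMATIC))  # True  -> risk of mismatch
print(node_may_compute_gi(Preprocess.NEVER))      # False -> cache only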

I could be completely wrong though ;)

Wopmay
02-09-2009, 11:50 PM
It sounds like you may also have pre-process set to "Automatic" or "Always" under GI Cache options. Setting that to "Never" should force your nodes to ONLY use the cache file instead of doing any GI calculations of their own.

I could be completely wrong though ;)

Brian!

Thank you. I DO have the pre-process set to Automatic. Maybe that's it. I was never clear exactly how that worked. For instance, what's the difference between Never and Locked? I'll try it with Never and see what happens.

W

avkills
02-10-2009, 03:55 AM
It has been suggested by others that mixing different CPUs in a render farm is generally a bad idea, for the reasons Brian mentioned.

-mark

jackany
02-10-2009, 04:15 AM
The nodes always use "Locked", but you have to create the complete cache on one machine first, then lock it to prevent further manipulation by the working machine.

I had big trouble with the cache file path at first, but in my case it was down to cross-platform path-naming differences...
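
Just as a generic illustration of that kind of mismatch -- the prefixes, file name, and helper below are hypothetical, not anything from LightWave or this particular setup -- a node-side path remap might look like:

def remap_cache_path(path, mapping):
    # Rewrite an absolute path saved on one machine so that a machine
    # with a different mount point / naming convention can find it.
    # (A real remap may also need to convert path separators.)
    for src_prefix, dst_prefix in mapping.items():
        if path.startswith(src_prefix):
            return dst_prefix + path[len(src_prefix):]
    return path  # no rule matched; leave the path alone

mapping = {"/Volumes/Render/": "//SERVER/Render/"}
print(remap_cache_path("/Volumes/Render/gi/scene01.cache", mapping))
# -> //SERVER/Render/gi/scene01.cache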

I too noticed a slight difference with procedural textures between different platforms/CPUs.

Which cache do you use? Standard or animated? Use animated if objects other than the camera are moving.
By the way, there is that great radiosity tutorial by "Exception":

http://www.except.nl/lightwave/RadiosityGuide96/

Especially take a look at the cache section. That cleared things up for me.

brian.coates
02-10-2009, 04:29 AM
The nodes always use "Locked", but you have to create the complete cache on one machine first, then lock it to prevent further manipulation by the working machine.

I had big trouble with the cache file path at first, but in my case it was down to cross-platform path-naming differences...

I too noticed a slight difference with procedural textures between different platforms/CPUs.

Which cache do you use? Standard or animated? Use animated if objects other than the camera are moving.
By the way, there is that great radiosity tutorial by "Exception":

http://www.except.nl/lightwave/RadiosityGuide96/

Especially take a look at the cache section. That cleared things up for me.
What he said. :thumbsup:

Wopmay
02-10-2009, 01:24 PM
It has been suggested by others that mixing different CPUs in a render farm is generally a bad idea, for the reasons Brian mentioned.

-mark

I'd say that suggestion is correct. I don't do this very often. Must have been at the beach the day that memo came around.

All the nodes in my enormous four-Mac farm are Intel Macs, but I use an old G5 as the host. Seemed like a good idea at the time. Guess I'll have to rethink that. If this CPU issue is correct, and it seems to be the case, I suspect it has been the root of my problems from the beginning.

Sure enough, looking carefully at the renders, I can see the objects relying primarily on procedural textures are the only ones flashing.

Thanks to all who helped me understand this. Hope I can return the favor sometime. :thumbsup:

Yours,
Woppy