
Baked Animated GI cache - slow and odd behavior (11.6.3)



wturber
04-20-2016, 05:01 PM
I have a very simple animation with slowly rotating text. The scene uses Monte Carlo Global Illumination - interpolated. If I render without caching, I get about 300 frames rendered in around two hours on a dual x5670 machine with 24GB of RAM. But weird things happen when I bake an Animated Cache and then render. The result is that render times become truly outrageous and Lightwave does some other strange things.

If I turn on Animated Cache, save to cache after each frame, and then do an F10 render, my render time increases, but not horribly. My initial 20-30 second per-frame render times get progressively longer, with each subsequent frame taking about five seconds longer than the last. At that rate, by frame 300 the per-frame render time will be about 25 minutes.
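As a back-of-the-envelope check (plain Python, nothing LightWave-specific, and the starting numbers are just my rough observations), here is how that five-seconds-per-frame creep adds up:

# Rough projection of the per-frame creep: ~25 s for frame 1, +5 s each frame after.
first_frame = 25.0   # seconds, approximate
growth = 5.0         # extra seconds added per subsequent frame, approximate
frames = 300

per_frame = [first_frame + growth * i for i in range(frames)]
total = sum(per_frame)

print("last frame: %.1f min" % (per_frame[-1] / 60.0))   # ~25 min
print("total:      %.1f hours" % (total / 3600.0))       # ~64 hours on one box

So five seconds a frame doesn't sound like much, but over the whole animation it compounds into tens of hours of machine time.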

But things get even stranger when I render with ScreamerNet or render with the GI cache pre-baked. My initial render times immediately shoot to the 30-minute-or-more mark. This animation took 12 hours to render using a dual x5670, a dual x5649, a dual x5640, and an i7 3930 OC'd to 3950 MHz. That's pretty ridiculous for a simple 10-second animation of rotating text.

Part of the problem appears to be with the GI recalculation. When rendering begins, GI recalculation is very fast for a second or two; you can watch CPU usage spike as if heading toward 100%. But it then cuts back sharply and sits at about 4%, where it stays until the GI recalculation is finished. 4% is the capacity of a single "hyperthread pseudo core" on a 12-core machine with hyperthreading on (24 threads/pseudo cores). Once the GI calculation is complete, the rest of the render engages all cores as normal - though the render still takes many minutes instead of the fraction of a minute needed for the initial frames when the GI is unbaked. ScreamerNet exhibits the same behavior.
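For what it's worth, a quick way to watch this from outside LightWave is a few lines of Python with psutil (just a generic monitoring sketch run in a separate console; psutil has to be installed and isn't part of LightWave):

# Sample per-core CPU load once a second while the GI recalculation runs.
# Requires:  pip install psutil   (run from a normal console, not inside LightWave)
import psutil

while True:
    # One-second sample per logical core (24 values on a 12-core/24-thread box).
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for p in per_core if p > 50.0)
    overall = sum(per_core) / len(per_core)
    print("busy threads: %2d of %d   overall: %4.1f%%" % (busy, len(per_core), overall))

One busy thread out of 24 works out to 100 / 24, or roughly 4.2% overall, which matches the 4% figure during the recalculation.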

Whether I pre-bake and use ScreamerNet or simply bake the cache as I render, it looks like I need about 50 hours of machine time to finish the render. If I don't use the cache, I only need about two hours of render time. That seems like a huge price to pay to avoid flicker with interpolated GI on a simple text animation. And the single-threaded GI recalculation just seems downright strange.
If anybody has any insights into this issue I'd appreciate hearing their thoughts.

Thanks,
Jay

spherical
04-20-2016, 05:23 PM
This sounds familiar. I'll see what I can find. Try a search on increasing render times.

Found two:
http://forums.newtek.com/showthread.php?146788-Render-time-increases-every-frame
http://forums.newtek.com/showthread.php?142287-Usefulness-of-caching-Radiosity

wturber
04-20-2016, 06:38 PM
Thanks for pointing out those threads. I had found a couple of postings about progressively increasing render times but didn't pay much attention, since I hadn't yet calculated the real toll that adding five seconds per frame would take. The search terms you suggest are probably better than the GI, radiosity, and cache terms I was using.

Also, there wasn't a whole lot of insight offered as to what to do about it. Right now it seems like a matter of picking your poison - oddly long cache-based renders, or dropping interpolation and increasing the heck out of the sample count.

I guess what I find most maddening about the situation is that when using the baked cache Lightwave goes into a prolonged single-threaded mode. Why isn't that process multi-threaded?!? [rhetorical] Oddly, while I was typing this I observed it jump to two threads. Wow! It's using one whole physical core!! :^( An interesting side note - Lightwave becomes completely unresponsive while it is in that single-thread/core mode. You can't move windows, give them focus, or update the display in any way. Even the QV display window is unresponsive. The windows behave normally once Lightwave moves on to the image rendering process.

wturber
04-20-2016, 07:11 PM
dupe posted in error - deleted