What would frame splitting network rendering look like?



JoeJustice
01-30-2011, 03:29 PM
I've had people ask me about frame splitting network rendering and I've seen the request come up quite a bit over the years for Lightwave. I've been thinking about this and I'm just curious what people think that would actually look like.

I suppose the awesomely, awesome thing would be if you pressed F10 and every computer in your office suddenly blinked on and Matrix-like code began to stream across them, and in 5 minutes you have a final scene that normally takes 3 weeks to render just sitting on your C: drive.

That's not something I'm gonna be able to make happen....

What I could envision is taking a scene and breaking it up into smaller scenes. So instead of outputting one frame, it would output 1/2, 1/4, 1/8 or 1/16 of the final frame, and then once everything finished rendering, another application would assemble the fractions back into a single frame. This is the way I've always handled print graphics, by using Limited Region. Anytime I've had to produce a billboard with radiosity, I've broken the frame up, rendered the fractions over the network, and then put them all back together in Photoshop.
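The splitting itself is just bookkeeping. Here's a minimal Python sketch, purely for illustration (the function and the horizontal-band layout are my own invention, not LightWave's actual Limited Region API):

def split_frame(width, height, pieces):
    """Split a frame into `pieces` horizontal bands (1/2, 1/4, ...)."""
    bands = []
    band_h = height // pieces
    for i in range(pieces):
        y0 = i * band_h
        # Last band absorbs any leftover rows from integer division.
        y1 = height if i == pieces - 1 else y0 + band_h
        bands.append((0, y0, width, y1))  # (x0, y0, x1, y1)
    return bands

# One sub-scene per rectangle, e.g. for a 4096x6144 billboard:
for i, rect in enumerate(split_frame(4096, 6144, 4)):
    print(f"piece {chr(ord('A') + i)}: limited region {rect}")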

In thinking this all through, I just don't know what advantage this gives to network rendering an actual scene. Let's say we break the frame up into 4 segments and the frame overall takes 1 hour to render, so each fragment would take 15 minutes.

So we take AwesomeScene.lws and turn it into AwesomeSceneA.lws, AwesomeSceneB.lws, AwesomeSceneC.lws and AwesomeSceneD.lws. How should these then be rendered? Would all of A get rendered first? Or would frame 1 of A, B, C and D get rendered, then frame 1 get assembled, and then everything move on to frame 2 of all four?

To me this seems like a lot of overhead. The splitting, loading different frames and then reassembling seems like a lot of work with little gain. Sure, each fraction will render in a quarter of the time, but there are still four fractions to render, so the total amount of work is the same.
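Putting numbers on that (assuming equal-cost segments, and ignoring the splitting and reassembly overhead, which is exactly the part I'm worried about):

import math

frame_minutes = 60   # one full frame on one node
segments = 4         # four quarter-frame scenes
nodes = 4            # one segment per node

total_cpu = frame_minutes                                   # 4 x 15 min: unchanged
wall_clock = math.ceil(segments / nodes) * frame_minutes / segments

print(f"total work      : {total_cpu} core-minutes (no throughput gain)")
print(f"wall clock/frame: {wall_clock:.0f} minutes (each frame finishes sooner)")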

I could be missing something. So what is it you would like frame splitting and rendering over the network to actually look like?

JonW
01-30-2011, 05:31 PM
It would normally be used for doing large single-frame renders. A bite-sized piece would be sent out to each node. If it could be done automatically, I would break it up into something like 64 pieces, & maybe a second option of 512 pieces. If you have a few computers this will be great, & if you have a stack of computers it will be even better.

If the setup needs each piece to be a multiple of the original piece's size in pixels, so be it. Once you get to a very large overall size, a few pixels either way in render size is irrelevant.

The reason for so many individual pieces is that if there are intensive calculations in some small areas of the render, one computer won't get bogged down with too large an area to render.
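To illustrate that load-balancing point, here's a toy Python simulation (the per-piece costs are made up; nothing here is LightWave-specific). Idle nodes just pull the next piece from a queue, so a node that lands on an expensive piece simply takes fewer pieces overall:

import heapq
import random

def simulate(num_pieces, num_nodes, total_work=64.0):
    """Wall-clock time when idle nodes pull the next piece from a queue."""
    random.seed(1)
    # Uneven piece costs: a few pieces are much more expensive than most.
    costs = [random.expovariate(num_pieces / total_work) for _ in range(num_pieces)]
    finish = [0.0] * num_nodes          # when each node becomes free
    heapq.heapify(finish)
    for c in costs:
        # Hand the next piece to whichever node frees up first.
        heapq.heappush(finish, heapq.heappop(finish) + c)
    return max(finish)                  # wall clock = last node to finish

for pieces in (4, 64, 512):
    print(f"{pieces:4d} pieces -> wall clock ~ {simulate(pieces, 4):5.1f} units")

With 4 pieces the slowest piece dominates; with 64 or 512 the nodes all finish at nearly the same time.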

If I have a very long render, I take a guess, cut it up into a few pieces & manually get a few computers to render them. Or I use Thomas's shift-camera idea & render a 4-frame animation etc. If this could be automated to a larger number of pieces & reassembled, it would be fantastic.

A plugin or something to do this would be great. If it's a reasonable cost, I think a lot of people will buy it, because it will make using those spare computers so much easier.

My next computer will be a dual Sandy Bridge, but if I can get the farm onto the one render I will buy a few more single-CPU boxes. It will be a far more economical way to go.

lertola2
01-30-2011, 06:12 PM
I do this with ButterflyNetRender. We have a render farm with 32 nodes. I usually break up the scene into 50 or 60 horizontal slices. ButterflyNetRender stitches them all together when the render is done so I can use the render farm to make very big renders in a reasonable amount of time. One big drawback is that radiosity does not work properly with frame splitting in ButterflyNetRender. The radiosity shading is different in each horizontal slice so you can see the edges of the slices when using radiosity.
-Joe

JonW
01-30-2011, 06:59 PM
Have you got "Use Behind test" on? If it's off, radiosity renders are a little out when using Thomas's shift camera or doing manual sections. But if it's on, pixel peeping, the renders are perfect!


This example lined up perfectly to the pixel, rendered on 6 computers using Thomas's shift camera.

RudySchneider
01-30-2011, 07:24 PM
Interesting picture; looks seamless, though I did notice a little "gotcha" in the bottom-left portion of the render:

JonW
01-30-2011, 07:32 PM
It does line up perfectly, but the more you split it up, the bigger the back of that envelope needs to be! & the more time you need to buggerise around in Photoshop to piece it together.

Thank you for reminding me, I couldn't be stuffed fixing it! In model trains we call it a rivet counter!

Sensei
01-30-2011, 11:07 PM
What I could envision is taking a scene and breaking it up into smaller scenes... [quoting JoeJustice above]

You have actually described how VirtualRender (http://virtualrender.trueart.eu) works. It has a special command, VR Make Still Frame, which bakes every key and removes all keys and motion modifiers that are not in the current frame.
The user then loads that scene into a preferred render controller, like BNR, and tells it "render frames 1-40"; but since all frames are the same, the 40 render nodes work on exactly the same picture at the same time (so a 40-minute render will take 1 minute, plus some time spent starting up and loading the scene on all nodes).
The user then uses VR Composite to join everything with one key press (you just have to give the output filename and format). It can be used while the render controller is still working! So you see a half-ready image, and after a couple of regions you can check whether there are any seams (without breaking off the render).
If the user cancels the render, or the power goes off, the rendered regions are still on disk, and the next time you press F9 or load the scene into the render controller the render will continue. VR automatically checks which regions are done, and skips them.
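The skip-what's-done check could look roughly like this (a hypothetical Python sketch only; VR's actual region file layout on the shared disk is not documented here, so the names are invented):

from pathlib import Path

def pending_regions(region_dir, num_regions):
    """Regions whose result file already exists on the shared disk get skipped."""
    done = {p.stem for p in Path(region_dir).glob("region_*.dat")}
    return [i for i in range(num_regions) if f"region_{i:04d}" not in done]

todo = pending_regions("//fileserver/renders/awesome_scene", 64)
print(f"{64 - len(todo)} regions already done, {len(todo)} left to render")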

VR setup time is very short: in VR Options, pick a folder on a shared disk where regions are stored, enter the width and height of the image, and pick a region size from a preset or use the default. That has to be done once per scene; it's then stored in the LWS for eternity.
Optionally, one key press on VR Optimize Camera; there are no additional options in this tool.
Optionally, one key press on VR Make Still Frame; no additional options.
The scene is then ready to send to any render controller, or to render locally using F9. You can even load that scene into standalone Layouts on different machines and manually press F9 on each of them (that's what I was doing, without any LWSN setup).
It does not split the original scene into several LWS files on disk!!! (That's what my older tool RenderSplitter, released 6-7 years ago, did.)

VR is NOT using the Limited Region or shift-camera trick! (Limited Region does not optimize memory usage. You can see this by running LW 32-bit, entering 16000x16000 in Camera Properties, and pressing F9, which will show "image creation failed"... then set a Limited Region of even 1x1 pixel, and you will STILL see that error.)

VR compresses each rendered region, so regions usually take 5-10% of the space that raw RGB data would (and are therefore transferred to/from the shared network disk much faster).
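For a feel for the numbers: a deflate-style codec gives that kind of ratio on renders with flat areas. A small sketch with zlib (an assumption for illustration; which codec VR actually uses isn't stated):

import zlib

w, h = 512, 512
raw = bytes(3 * w * h)        # stand-in region buffer (flat black, so it
                              # compresses far better than a real render would)
packed = zlib.compress(raw, 6)
print(f"raw {len(raw)} bytes -> packed {len(packed)} bytes "
      f"({100 * len(packed) / len(raw):.2f}% of original)")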

Exception
01-31-2011, 04:10 AM
I agree that splitting render frames is mostly useful for print graphics, so you might want to render out four shots from a scene rather than a whole string of frames.

I think, ideally, you open the scene, enter how many 'pieces' it needs to be cut into (no auto stuff necessary; we're smart enough for that), then each part is calculated separately and merged together at the end.

An awesome option would be to allow for a user-defined 'padding' range. It would render, for instance, 10 pixels extra for each section, and then blend these padding areas when they're assembled. Yes, it will take a little more render time, but it'll save on fudging in PS.
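The blend could be as simple as a linear cross-fade over the padded rows. A toy sketch with numpy (the assembler, the 10-pixel padding and the linear ramp are just this example, not any shipping tool):

import numpy as np

def blend_bands(top, bottom, pad):
    """top's last 2*pad rows overlap bottom's first 2*pad rows."""
    overlap = 2 * pad
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]  # 0 -> 1 down the seam
    seam = top[-overlap:] * (1 - w) + bottom[:overlap] * w
    return np.concatenate([top[:-overlap], seam, bottom[overlap:]])

# Two 100-row bands that each rendered 10 extra rows into the other:
a = np.full((110, 64, 3), 0.4)
b = np.full((110, 64, 3), 0.6)
image = blend_bands(a, b, pad=10)
print(image.shape)  # (200, 64, 3): the shared 20 rows were cross-faded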

I think, ideally, there would also be a 'single computer' setting, where you ensure that only one computer renders all parts of a single image. Because of slight inconsistencies between Macs and PCs and some shaders and effects (third-party or not), that might help in some situations.