Rendering for print questions

Why is it that you prefer to use the Advanced camera with a moving grid/Null instead of the Shift camera, which is much easier to set up, and requires very little (if any) math?

BTW, when I tried your camera it skipped one of the frames, probably because it was set up to use a different fps (I typically use 25fps).
 
I will share my A3 setup for print, which works fine, BUT the SSS2 shader creates banding! I increased the quality, which helped somewhat, though there's still a thin band, so I will have to fix it by hand. With Omega the banding disappears, but it's much slower and produces totally different output than the new SSS shaders.
 
I have switched to rendering everything on a pc. I just bought a new pc from boxx and running lw 9.6 64bit. Our 8core mac was just not cutting it.
 

It's not the number of cores; it's the amount of ram (and running in 64-bit).

I can render huge files in LW by booting Snow Leopard in 64-bit, and running LW HC (a companion to CORE). I have 10 GB ram in a 2008 Mac Pro with only one quad xeon, and I rendered 14,000 pixels square recently using this method. (That's 46.66" x 46.66" @ 300 dpi). Of course, like the pc 64-bit LW, your plugs need to be 64-bit also.
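
For anyone checking the numbers: print size is just pixels divided by output resolution. A quick sketch of that arithmetic (Python here purely for illustration):

```python
# Print size is just pixel dimensions divided by the output resolution (ppi/dpi).
def print_size_inches(pixels_wide, pixels_high, ppi):
    return pixels_wide / ppi, pixels_high / ppi

# The 14,000-pixel-square render mentioned above, output at 300 dpi:
w, h = print_size_inches(14000, 14000, 300)
print(f'{w:.2f}" x {h:.2f}"')  # 46.67" x 46.67"
```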
 
Hope someone sees this quickly.
I set a render off last night at 10,000 pixels square. Memory used in the scene is about 150 MB and the images total around 200 MB. For some reason I can't set the segment memory limit above 2 GB (I was sure I'd set it higher before on the Mac), so it was running in 2 segments.
Anyway, it's running on XP 64, LW 9.6 64-bit, 8 GB RAM, dual quad-core Xeons @ 2.6 GHz.
I checked the render as it set off and ran it from RenderQ to lessen the RAM hit of loading the scene file into Layout. The radiosity pass started fine, so I left it running overnight. I'm working in another building today, and the guy who checked it this morning found the machine had restarted itself. When I left, Task Manager was saying LW was using about 4.5 GB of RAM and the page file was up to 10 GB (I think the page file was set at about 12 GB).
So what would cause the restart? I did notice the RAM climbing during the radiosity pass, so maybe it hit its 8 GB (well, more like 7 GB, as the system uses at least 1 GB), or could it have hit a page file limit? But shouldn't that self-adjust if it starts getting low?

As far as I was aware, 64-bit Layout should be able to render pretty much any size, so I've told them to chuck another 4 GB of RAM in the machine, which hopefully should be enough.
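
A rough back-of-the-envelope check of just the output buffers shows why a 10,000-pixel-square frame can push RAM use far beyond the scene's 150 MB of geometry. This is only a sketch; the channel count, bit depth and number of buffers are assumptions, not Layout's actual internals:

```python
# Very rough frame-buffer estimate only -- geometry, image maps and radiosity data come on top.
# Channel count, bit depth and buffer count are guessed, not LightWave's actual internals.
def framebuffer_gb(width, height, channels=4, bytes_per_channel=4, buffers=2):
    return width * height * channels * bytes_per_channel * buffers / 1024**3

print(f"{framebuffer_gb(10000, 10000):.1f} GB")  # ~3.0 GB before any scene data
```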

Feedback on this would be good.
Cheers,

Mike
 
To save ram, once you are ready to render, obviously shut down every other application, including Modeler.

In Layout, change the viewports to Bounding Box, then save & close the scene. Re-launch Layout, don't change the viewports, and then render.


ScreamerNet is also really good in this situation. You just launch Layout, don't load the scene, and instead go to SN and load the scene there. You won't know how long it's taking, but it does free up more RAM.

SN on one box is not too difficult to set up.
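
If you go that route, LWSN's standalone "-3" batch mode can also be launched from a small script rather than by hand. A minimal sketch assuming a Windows install; every path below is a placeholder to be replaced with your own:

```python
import subprocess

# Drive LWSN's standalone batch mode ("-3") from a script so the scene never has
# to be loaded into Layout at all. Every path below is a placeholder -- point them
# at your own lwsn.exe, config directory, content directory and scene.
lwsn = r"C:\Program Files\NewTek\LightWave 3D 9.6\Programs\lwsn.exe"
cmd = [
    lwsn, "-3",
    r"-cC:\LightWave\Configs",     # -c<config directory>
    r"-dC:\Projects\PrintJob",     # -d<content directory>
    r"Scenes\poster.lws",          # scene file, relative to the content directory
    "1", "1", "1",                 # first frame, last frame, frame step
]
subprocess.run(cmd, check=True)
```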


Re the page file: you are using 10 GB of virtual RAM, when it should be no more than, say, a few hundred KB. I think you may need at least another 8 GB of RAM, because between physical RAM and the page file you are up to about 14.5 GB.

If you are even a bit short on RAM, the scene will take forever to render. Many times longer!


A UPS with a large battery (MGE Evolution S + EXB) is worth its weight in gold in these situations.
 
Cheers jonW.
In the end I just worked around it. I had rendered the backdrop with no extra models in it (a clean room, so to speak) with DOF on; then, for the large render, I turned off DOF and MB and rendered the scene via RenderQ, as that basically does for me what SN does but uses all CPUs automatically.
It still crept up to the RAM limit but finished in 7 hours; with the DOF and MB passes on, it was looking to take over 20 hours.
Still waiting for the extra RAM to turn up, though they have only put 333 MHz PC2-5300 RAM in it, which I'm a bit narked about, as it should at least have been PC2-6400 (800 MHz) RAM, but then I didn't get to spec the box. Oh well, nearly done with it now.
 
Print is all about resolution.
Resolution and color spaces. But that's probably another thread :)


The higher the halftone screen ruling, the higher the render size needed to accommodate the final print size. These days with direct-to-plate imaging, 200 lines of half-tone dots per inch is typical on press.

On a Mac Pro with a single quad core Xeon and 10 GB of RAM, the largest I have ever been able to render in LW 9.6 is 6000 x 6000 (36 million pixels). At 400 ppi, that's a final print size of only 15" square.
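
The sizing math behind that: the usual prepress rule of thumb is an image resolution of roughly 1.5-2x the screen ruling, which is how 200 lpi ends up at the 400 ppi used above. A quick sketch (the 2x quality factor is the common rule of thumb, not a hard requirement):

```python
# Prepress rule of thumb: image resolution of roughly 1.5-2x the halftone screen ruling (lpi).
def required_pixels(print_inches, lpi, quality_factor=2.0):
    return int(round(print_inches * lpi * quality_factor))

# A 15" dimension on a 200 lpi press with the 2x factor:
print(required_pixels(15, 200))  # 6000 pixels -- the 6000 x 6000 / 400 ppi case above
```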

So my question is twofold:

1. Would it help to add more ram, or is the 6000 x 6000 limit I'm experiencing simply a function of LW UB running in 32-bit?

2. To get larger renders at print rez, say for posters or packaging at 24" x 36" (9700 x 14500) and up ... I tested the theory of just rendering multiple limited regions and comping them all together in Photoshop later. Theoretically, this would take only 4 limited-region renders at my 36-million-pixel limit ... but this does not work: I get the dreaded "Error: image Creation Failed" each time I attempt to render even a "tiny" sliver of a 9700 x 14500 image (tiny, like 1/20th of the image, or only about 7 million pixels). I also tried limiting the region to something clearly smaller than 6000 x 6000 pixels, thinking maybe 6000 is a linear limit rather than a cap on the total number of pixels being rendered. But alas, this fails with the same error message. In fact, there isn't a limited region small enough for LW to render any part of a 9700 x 14500 image.
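
For reference, the tiling bookkeeping itself is simple; the sketch below just computes limited-region rectangles (with a little overlap for later blending) for a hypothetical 2 x 2 split of a 9700 x 14500 frame. It says nothing about why Layout refuses to allocate the buffer in the first place:

```python
# Carve a print-resolution frame into limited-region tiles, with an optional overlap
# for blending the seams later in Photoshop. This is only the bookkeeping side of the
# idea; each rectangle would still have to be rendered as its own limited region.
def tiles(frame_w, frame_h, cols, rows, overlap=0):
    out = []
    for r in range(rows):
        for c in range(cols):
            x0 = max(c * frame_w // cols - overlap, 0)
            y0 = max(r * frame_h // rows - overlap, 0)
            x1 = min((c + 1) * frame_w // cols + overlap, frame_w)
            y1 = min((r + 1) * frame_h // rows + overlap, frame_h)
            out.append((x0, y0, x1, y1))
    return out

for t in tiles(9700, 14500, 2, 2, overlap=100):
    print(t)
```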

So how do you get there from here? Is there a plug-in or another method besides limited region for successfully rendering small parts of a print-size image? -- (On Flay I found "LW Stripe", which was originally designed for this purpose, but it's ancient, and available only for Win 95/98 and Alpha! :) ... I actually had an Alpha about 12 years ago! lol

Thanks for any guidance ... :)

Just in case you are interested in an artist-friendly technique for rendering your high-res images without RAM issues, I've shared a trigonometry-free technique in Issue# 32 of HDRI3D magazine. It's an intuitive but accurate method that takes just 7 steps.



Gerardo
 
So what happens with interpolated radiosity in such multi-part post-stitched images?
Won't it recalc for each part, and make it impossible to blend?
Or maybe you need to GI cache it first?
 
I used a few percent of border overlap to blend out possible issues (none seen in my case). GI was rendered per part (and thus per node); in my case a 30k-pixel-wide image can't even be selected...
 
Hieron, excellent version! But it's not the same technique. If the horizontal and vertical resolution can be split into the same number of segments, your technique looks easier, I think; but if we need an arbitrary number of segments (horizontal different from vertical), or any other graphical configuration, I think the technique I'm proposing in Issue# 32 is easier. It also lets you set up all segments in the same frame, so there's no need to set up a different scene for each segment. That's useful when the illustration is playing with a motion blur effect in some specific frame.

Bjornkn, you could GI cache it, but if the technique allows a blending area, as Hieron says, that's usually not a problem. Well, I commonly don't use Interpolated anyway; I prefer other methods to get smooth GI results.



Gerardo
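
On the blending side, a simple linear feather across the shared overlap strip is usually enough to hide small differences between separately rendered tiles. A minimal NumPy sketch, assuming two horizontally adjacent float RGB tiles that share an overlap strip of the same width (hypothetical names, not anyone's actual pipeline):

```python
import numpy as np

# Blend two horizontally adjacent tiles that share an 'overlap'-pixel-wide strip,
# feathering linearly from the left tile to the right one across that strip.
def stitch_horizontal(left, right, overlap):
    h, wl, _ = left.shape
    _, wr, _ = right.shape
    out = np.zeros((h, wl + wr - overlap, 3), dtype=np.float32)
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    ramp = np.linspace(0.0, 1.0, overlap)[None, :, None]
    out[:, wl - overlap:wl] = left[:, wl - overlap:] * (1 - ramp) + right[:, :overlap] * ramp
    return out
```

The same ramp extends to vertical seams, or you can skip it entirely if the GI is cached and the tiles match exactly.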
 
Hey thanks!
It indeed needs the same segmentation in vertical and horizontal direction in the simple form. And while it all can be done in 1 scene (just on x different frames), something like motion blur (or any movement) wouldn't work well with it indeed.

Perhaps I should see if I can get HDRI3D's here as well
 
But Gerardo, won't you reveal your technique to those of us who don't have access to that HDRI3D mag?
I'd love to see it :)
And I'd also love to see some more info on how to achieve smooth non-interpolated GI within a reasonable time frame :)
 
ButterflyNetRender has a built-in feature for splitting up images into slices and then stitching the results back together. Works fine in LW, Kray and FPrime, and lets you render very high-res very quickly across multiple machines, without using as much RAM.
 
When using Thomas Mangold's Advanced Camera Rendering tutorial...

In Render Globals, make sure Use Behind Test is on. I'm sure this will give correct split-frame renders with BNR.
 