non gpu render in LW?



andromeda_girl
01-22-2017, 01:29 PM
I've been trying to understand why a machine with 64 GB of RAM, a Titan with roughly 3,000 CUDA cores, and 16 cores across dual CPUs still required 10 hours to render a single frame, and I'm starting to learn that, for some reason that eludes me, LW does not use the GPU for rendering.
Perhaps this explains why there is so much information about Octane Render on the LW site?

I'm somewhat flabbergasted to learn this; it seems too old school to be true.
Isn't NewTek going to correct this?
:(

MichaelT
01-22-2017, 01:44 PM
LW does not use the GPU, so yes, you need something like Octane for that. That said, 10 hours sounds like there is also a problem with your setup, because unless you are deliberately rendering something at that level, things normally render a lot quicker. I can render in about the same time as Octane and still get similar results. Yes, I know: adding mirrors and the like increases render time, but that is not something I use a lot, to be perfectly honest. Don't take this as an absolute truth either; I know full well that, depending on complexity, things add up. My main point is that LW is pretty quick, and CPUs also scale better than GPUs, which is why companies still use them.

Example: I made this one while writing this post (the artifact at the top is just a layer offset a bit. I did say I made it while writing the post :) ):

[attachment: 135686]

rustythe1
01-22-2017, 01:56 PM
You must have something not set up correctly or optimized for your scene. I find LW's native renderer faster than Octane on my 16-core machine; my 1080p animations are usually around 30 seconds to 3 minutes per frame. Obviously your post is very vague about what you are trying to accomplish, but there is nothing wrong with the current renderer.

MichaelT
01-22-2017, 02:30 PM
You must have something not set up correctly or optimized for your scene. I find LW's native renderer faster than Octane on my 16-core machine; my 1080p animations are usually around 30 seconds to 3 minutes per frame. Obviously your post is very vague about what you are trying to accomplish, but there is nothing wrong with the current renderer.

Yeah, I have noticed Octane being slower than the CPU as well (in more complicated scenes).

andromeda_girl
01-22-2017, 04:03 PM
Well, to be fair, it was an 8K render spitting out 16 compositing buffer passes...
But even allowing for the high resolution, 10 hours did seem excessive, especially since it was a single frame.

So far I have been unable to locate any documentation on how/where to set specific performance tweaks, such as multithreading or how much memory to use.
[I am nonetheless saddened that LW does not access the GPU; it seems only Fusion gleefully uses up all the hardware power in the machine without even being asked to...]

Other than the slow render [to me it was slow, anyway], LW also split my ambient occlusion render horizontally in half: half of it rendered AO, the other half appeared to be a spec pass. I feel this may indicate it was not using all the cores in the CPU, for example, which is another concern.
[I would have expected multithreading to be the default, which would avoid anything odd like that.]

Presumably there is some panel hidden away somewhere within LW so I can ensure it is multithreading and using as much RAM as I dare feed it, but I haven't found it. [I've been away from 3D work for 15 years, so LW 2015.3 is quite a shock to the system; it has changed so much.]

jasonwestmas
01-22-2017, 05:54 PM
Mhmm, the optimal settings in the current LightWave are nothing like they used to be. For example, shading and lighting samples don't need to be as high to get high quality, and AA is much different now too. I suppose a few of the shaders are not multithreaded, but I'm pretty sure they are by default. I would try rendering at a much smaller resolution to pinpoint what in the scene is causing the slowness in general.

jwiede
01-22-2017, 07:09 PM
Well, to be fair, it was an 8K render spitting out 16 compositing buffer passes...
But even allowing for the high resolution, 10 hours did seem excessive, especially since it was a single frame.

It's difficult to guess without seeing the scene, but depending on how your compositing buffer passes are configured, you might be regenerating that 8K frame more times than you think. Certain compositing outputs require their own regenerated rendering, so instead of a single 8K render for all the buffers, LW might have had to render it a few times to provide all the different buffer outputs. IIRC, AO is an example of a buffer output that can cause LW to re-render. Still, an 8K frame, even with a few re-rendered buffers, shouldn't take 10 hours under "normal" circumstances.
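As a rough illustration of that point (this is back-of-the-envelope arithmetic, not LW's actual pass scheduling; the 45-minute base time and the count of re-rendering buffers are made-up numbers):

# Hypothetical estimate: buffers that force their own regenerated render
# multiply the per-frame cost. Numbers are illustrative, not measured.

base_render_minutes = 45        # assumed time for one full 8K render
shared_buffers = 13             # buffers extracted from the main render "for free"
rerender_buffers = 3            # buffers (e.g. AO) that each trigger a re-render

total_renders = 1 + rerender_buffers
total_minutes = base_render_minutes * total_renders

print(f"{total_renders} full renders -> {total_minutes} min "
      f"({total_minutes / 60:.1f} h) for {shared_buffers + rerender_buffers} buffers")
# 4 full renders -> 180 min (3.0 h) for 16 buffers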

If you can't post the scene, or a reasonable approximation of it without "private" content, then can you at least describe the scene's contents a bit further in terms of surfaces/materials used (esp. noting any complex nodal surfaces), how much reflection/refraction is present (esp. blurry), etc.?

MichaelT
01-23-2017, 12:29 AM
Well, to be fair, it was an 8K render spitting out 16 compositing buffer passes...
But even allowing for the high resolution, 10 hours did seem excessive, especially since it was a single frame.

So far I have been unable to locate any documentation on how/where to set specific performance tweaks, such as multithreading or how much memory to use.
[I am nonetheless saddened that LW does not access the GPU; it seems only Fusion gleefully uses up all the hardware power in the machine without even being asked to...]

Other than the slow render [to me it was slow, anyway], LW also split my ambient occlusion render horizontally in half: half of it rendered AO, the other half appeared to be a spec pass. I feel this may indicate it was not using all the cores in the CPU, for example, which is another concern.
[I would have expected multithreading to be the default, which would avoid anything odd like that.]

Presumably there is some panel hidden away somewhere within LW so I can ensure it is multithreading and using as much RAM as I dare feed it, but I haven't found it. [I've been away from 3D work for 15 years, so LW 2015.3 is quite a shock to the system; it has changed so much.]

Hmm, maybe you can create a mock-up scene that does what you need with regard to the passes, but without anything project-related? That way, others here can have a look at the setup you are using.

Sensei
01-23-2017, 12:54 AM
Well, to be fair, it was an 8K render spitting out 16 compositing buffer passes...
But even allowing for the high resolution, 10 hours did seem excessive, especially since it was a single frame.


But why are you rendering at 8K in the first place?
Do you want to downscale it to simulate AA?
If so, you should use no AA, or very basic AA settings.



So far I have been unable to locate any documentation on how/where to set specific performance tweaks, such as multithreading or how much memory to use.
[I am nonetheless saddened that LW does not access the GPU; it seems only Fusion gleefully uses up all the hardware power in the machine without even being asked to...]

Because Fusion is extremely easy to program.



Other than the slow render [to me it was slow, anyway], LW also split my ambient occlusion render horizontally in half: half of it rendered AO, the other half appeared to be a spec pass. I feel this may indicate it was not using all the cores in the CPU, for example, which is another concern.
[I would have expected multithreading to be the default, which would avoid anything odd like that.]

Presumably there is some panel hidden away somewhere within LW so I can ensure it is multithreading and using as much RAM as I dare feed it, but I haven't found it. [I've been away from 3D work for 15 years, so LW 2015.3 is quite a shock to the system; it has changed so much.]

Knowledge is a more valuable asset than the number of CPU cores or the amount of memory.

You will learn from others on this forum,
or by yourself through experimentation, as most of us did.
You should start by showing us your scene, e.g. upload it here or to an FTP/HTTP server,
so that people can download it
and check what is wrong with it.

My guesses:
wrong GI cache settings,
non-optimal AA settings,
or the recursion limit set too high, with reflective/refractive objects bouncing rays until they hit the recursion limit at every single spot.

P.S. With a third-party renderer you would have to learn how to set everything up from scratch, as your knowledge of the LW built-in renderer's settings will be pretty much useless.

Surrealist.
01-23-2017, 02:23 AM
Presumably there is some panel hidden away somewhere within LW so I can ensure it is multithreading and using as much RAM as I dare feed it, but I haven't found it. [I've been away from 3D work for 15 years, so LW 2015.3 is quite a shock to the system; it has changed so much.]

Multithreading is on Automatic by default. The setting is at the bottom of the Render Tab in Render Globals. You can also force it to use any specific number of threads, but Automatic should find the threads on your machine.

RAM usage is also automatic as far as I know. At least, I have monitored my rendering and seen it use it all up... lol. No settings required.



Well, to be fair, it was an 8K render spitting out 16 compositing buffer passes...
But even allowing for the high resolution, 10 hours did seem excessive, especially since it was a single frame.

One thing to keep in mind is that 2K is 4X more pixels than 1K, and so on: each doubling of resolution quadruples the pixel count. So 8K is 64X more pixels than 1K, and 16X more than 2K. With 2K (HD) as a benchmark, a frame that takes 37.5 minutes jumps to 10 hours at 8K, purely mathematically. There are other compute costs that I'm sure don't scale the same way, but as a general rule it would come out about that way, I presume. (Maybe you can use the Multiplier setting, or try different resolutions in the Camera Tab, to test it.) The same goes for RAM usage: I have trouble doing a complex scene at 2K with 32 gigs of RAM, so what card could I use to do that on a GPU? Not the average card, I don't think. So there are those limitations too, because frame size affects the RAM needed.
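A quick sketch of that scaling argument (pixel counts only; real render times won't scale perfectly linearly, the 10-hour figure is taken from this thread, and "1K" here is assumed to mean 960x540, i.e. half of 2K/HD in each dimension):

# Render time scaled by pixel count, using the 8K/10h figure from this thread.
# Assumes each "K" step doubles the linear resolution.

resolutions = {"1K": (960, 540), "2K": (1920, 1080),
               "4K": (3840, 2160), "8K": (7680, 4320)}

pixels_8k = 7680 * 4320
hours_8k = 10.0  # reported render time for one 8K frame

for name, (w, h) in resolutions.items():
    scale = (w * h) / pixels_8k
    print(f"{name}: {w * h:>10,} px -> ~{hours_8k * scale * 60:6.1f} min")
# 1K:    518,400 px -> ~   9.4 min
# 2K:  2,073,600 px -> ~  37.5 min
# 4K:  8,294,400 px -> ~ 150.0 min
# 8K: 33,177,600 px -> ~ 600.0 min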

Aside from that: yeah, they should get with GPU, but there are no current plans I am aware of.

As for other things that shoot your render times up: besides a lot of geometry, it is usually reflections and refractions. The default adaptive AA threshold (Render Globals > Camera Tab) is also very strict at 0.01. I usually set it to something like 1 for test renders, then maybe 0.1 or 0.5 for cleaner tests, and can usually get by with something like 0.05 for a final render on some difficult scenes; I rarely have to use 0.01.

The other settings that will send renders up are the shading and light samples (Render Globals > Render Tab). Usually I go with a lower shading sample count and a higher light sample count; light samples are cheaper on render time than shading samples. But it really depends on the content of your scene which you will need more of. The Limited Region tool (Layout > Render Tab) is useful for quickly seeing which of these settings give a good result.

Hope these tips help.

magiclight
01-23-2017, 03:22 AM
Make sure you don't have a segment size so small that the frame does not fit in a single segment; that will slow things down a bit. With an 8K frame buffer you need to crank it up, to 400 MB or so at least.
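As a rough check on that figure (back-of-the-envelope only; the exact bytes per pixel depend on the output format and which buffers are enabled, so RGBA floats are an assumption here):

# Rough size of an 8K frame buffer, to sanity-check the "400 MB or so" figure.
# Assumes 4 channels (RGBA) of 32-bit floats; the actual LW buffer layout may differ.

width, height = 7680, 4320
channels, bytes_per_channel = 4, 4

frame_bytes = width * height * channels * bytes_per_channel
print(f"{frame_bytes / 2**20:.0f} MB")  # ~506 MB, so 400+ MB of segment memory is plausible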

Also make sure you do not use the image cache; it will slow things down a lot as well.

When it comes to GPU: with a limited number of developers, it's a lot of work to implement a GPU renderer, and you would need CPU support anyway, because a large scene often will not fit in GPU memory, so you need a fallback to CPU rendering. Apart from that, Rob has said in a number of interviews that he does not want to lock LW down to specific hardware (NVIDIA), and it would not make Mac users happy either, I guess; that would require OpenCL support. There are many problems in GPU rendering; take a look at any GPU renderer's forum and you will find lots of hardware-related complaints: people buy a brand-new video card and get upset because the renderer does not support that card yet, or they have a card that is 3 years old and it does not work very well because of that. And no matter how many video cards you add, you only get faster rendering; you are still very limited in the amount of memory the renderer can use (it is getting better, though; give it a few more years). A CPU renderer is a better solution for a built-in renderer that should work on PC and Mac without any problems. GPU renderers are fine, and you can get one if you need it.

And as you said yourself, if you want GPU rendering there are solutions for that.

brent3d
01-28-2017, 04:04 PM
I've been trying to understand why a machine with 64 GB of RAM, a Titan with roughly 3,000 CUDA cores, and 16 cores across dual CPUs still required 10 hours to render a single frame, and I'm starting to learn that, for some reason that eludes me, LW does not use the GPU for rendering.
Perhaps this explains why there is so much information about Octane Render on the LW site?

I'm somewhat flabbergasted to learn this; it seems too old school to be true.
Isn't NewTek going to correct this?
:(

Andromeda_girl, check out my latest video for the answer to your questions:


https://youtu.be/diQExULG_Vo?t=6