
GPU Rendering in Lightwave



tcoursey
11-05-2015, 11:38 AM
I've been over at the Octane forums much more lately, but I've been watching from a distance as the new announcements have been coming out from Rob. Looking forward to the next version of LightWave and what they might have in store for us.

I'm wondering what everyone thinks: do you think we'll see GPU rendering with this new PBR system by any chance?

mummyman
11-05-2015, 11:41 AM
I don't remember if someone has pointed out the differences between GPU and CPU. For now, they said the new PBR renderer is CPU. What would be the difference or benefit of either? Speed?

toeknee
11-05-2015, 11:44 AM
I own Octane so I am covered, but I am pretty sure that Rob said the new render engine is CPU based. I could be wrong, but I thought I saw that somewhere. Personally I would like to see LightWave take advantage of both CPU and GPU. This is one of the cool aspects of the Mantra render engine for Houdini. My system has two Nvidia 980s with a fast hex-core i7. I want to be able to take advantage of all the power in my system.

tcoursey
11-05-2015, 11:56 AM
As to what the difference is: SPEED, for the $/value equation. You can get much more bang for the buck with GPU than CPU. I currently use Octane with two older Titan cards, and they scream compared to CPU VPR etc. The only issue for me is that once you go GPU (at least with Octane) you can't switch back and forth easily with your assets. The PBR system is only calculated on the GPU and is different in node setup etc. from standard CPU/LightWave stuff.

I agree, it would be great to have LightWave or Octane offer both CPU and GPU in one engine. Not sure if that's a technical issue or if there just isn't any real return; meaning a GPU smokes a CPU, so why would you add a Volkswagen to a race full of Lamborghinis (just a strange metaphor) :)

Who knows. Just glad to see some info coming from LW3DG, and it actually seems promising in some way. I have not upgraded since 11.6, as there hasn't been any real need for my workflow/pipeline!

mummyman
11-05-2015, 12:07 PM
Thanks!

tcoursey
11-05-2015, 12:10 PM
Thanks!

Head over to https://home.otoy.com/ and check out their live demo of Octane 3. It's a virtual machine you can use in any browser to play with it interactively. Main menu bar, upper right: Octane 3 Demo.

spherical
11-05-2015, 03:26 PM
I am pretty sure that Rob said the new render engine is CPU based.

It was Lino and he said it is CPU-only at this time. One of the problems with GPU, at least until Unified Memory becomes mainstream, is that you are limited to the amount of VRAM of the card that has the least amount. If you have a heavy scene, it may not fit; but a CPU renderer will chew through it. IIRC, Octane 3 has a workaround for this, though.
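To illustrate the point about being bound by the smallest card: in a multi-GPU setup without unified memory, the whole scene is typically duplicated on every card, so a quick pre-flight check might look like the sketch below. The function name and numbers are purely illustrative, not any renderer's actual API.

```python
# Hedged sketch: a multi-GPU renderer without unified memory is limited
# by the card with the least VRAM, because the whole scene must be
# duplicated on every card rather than split across them.

def fits_on_gpus(scene_bytes, card_vram_bytes):
    """Return True if the scene fits on every card in the pool."""
    if not card_vram_bytes:
        return False
    # The effective limit is the smallest card, not the sum of all cards.
    return scene_bytes <= min(card_vram_bytes)

GiB = 1024 ** 3
# A 5 GiB scene on a 6 GiB Titan plus a 4 GiB card does not fit,
# even though the pool has 10 GiB in total.
print(fits_on_gpus(5 * GiB, [6 * GiB, 4 * GiB]))
```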

Lewis
11-06-2015, 02:32 AM
It was Lino and he said it is CPU-only at this time. One of the problems with GPU, at least until Unified Memory becomes mainstream, is that you are limited to the amount of VRAM of the card that has the least amount. If you have a heavy scene, it may not fit; but a CPU renderer will chew through it. IIRC, Octane 3 has a workaround for this, though.

The thing is that the LW CPU render engine uses 5-7x more RAM than Octane uses VRAM, so the comparison is tricky. I've fit massive scenes into 3.5-4 GB of VRAM with GPU Octane while LW used over 25 GB with the same models/textures loaded at render time. Even a CPU machine is limited on memory, since motherboards also have RAM limits (classic socket 1150 boards top out at 32 GB, so for more RAM you need socket 2011 boards or dual Xeons).

Also, Octane v2.x already has an option to cache textures on HDD/local RAM, so you can chew through very big scenes already, i.e. no need to wait for Octane 3.0 for that.
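Octane's actual caching implementation isn't public, but the general out-of-core idea (keep the hottest textures resident within a VRAM budget, spill the least recently used ones back to system RAM or disk) can be sketched roughly like this; the class and names are hypothetical:

```python
from collections import OrderedDict

class TextureCache:
    """Illustrative LRU sketch of out-of-core texturing: textures stay
    resident within a VRAM budget; the least-recently-used ones are
    evicted (spilled to RAM/disk) when a new texture needs room."""

    def __init__(self, vram_budget_bytes):
        self.budget = vram_budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # texture path -> size in bytes

    def request(self, path, size):
        """Touch a texture; return 'hit' if it is already resident."""
        if path in self.resident:
            self.resident.move_to_end(path)  # mark as most recently used
            return "hit"
        # Evict least-recently-used textures until the new one fits.
        while self.used + size > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)
            self.used -= evicted_size
        self.resident[path] = size
        self.used += size
        return "miss"
```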

dulo
11-06-2015, 04:46 AM
This is one of the cool aspects of Mantra render engine for Houdini. My system has two Nvidia 980's with a fast hex core i7. I want to be able to take advantage of all the power in my system.
Mantra is CPU only. No GPU support at all. Only some physics stuff can benefit from the GPU, but always at the price of inflexibility.

tcoursey
11-06-2015, 07:23 AM
The thing is that the LW CPU render engine uses 5-7x more RAM than Octane uses VRAM, so the comparison is tricky. I've fit massive scenes into 3.5-4 GB of VRAM with GPU Octane while LW used over 25 GB with the same models/textures loaded at render time. Even a CPU machine is limited on memory, since motherboards also have RAM limits (classic socket 1150 boards top out at 32 GB, so for more RAM you need socket 2011 boards or dual Xeons).

Also, Octane v2.x already has an option to cache textures on HDD/local RAM, so you can chew through very big scenes already, i.e. no need to wait for Octane 3.0 for that.

I wasn't sure if this was just me or not, but I would agree. It seems my scenes in Octane don't take up as much space as they would have in LW either. We have done some pretty heavy ArchViz scenes with only 6GB Titan cards. We really enjoy the capabilities and speed of Octane for our use, compared to $8-12K Xeon machines.

Luc_Feri
11-06-2015, 07:29 AM
Yeah, I've yet to build a scene that exceeds 4GB of VRAM; I'm loving Octane Render. The speed at which it renders DOF is very impressive too.

ianr
11-06-2015, 07:41 AM
Thanks Lewis, that was great data. I said somewhere previously
that LW3DG should do a Turbo Kit. Now I will refer to it as the
Nitrous edition.

But seriously, TurbulenceFD as well as Octane show the leverage of GPU.

They (LW3DG) could do it as a big toggleable in-house plugin in 2016.

Surely sims more and more need crunch time, and they could do with an auto-detect hook to what is actively available on your graphics card. Surely that's doable!

Lewis
11-06-2015, 07:48 AM
I would be most happy if they (LW3DG) opted for a hybrid (CPU+GPU) render engine now, when they are completely changing shaders, lights and the render engine, so a lot of stuff will be incompatible anyway; but for some reason they opted for CPU only.

Just check out the Mosquito GPU render engine; they even support all MAX nodes/materials on the GPU (not sure how they pulled that off, but it works like FPrime did in LW, i.e. no need to change materials).

erikals
11-06-2015, 10:31 AM
Lino answered it in a thread...

...oh, what the hell, why don't I just give you the direct link >

http://forums.newtek.com/showthread.php?148301-Lightwave-3D-BLOG-is-now-up-online!&p=1451278&viewfull=1#post1451278


For sure not for the next release. Using GPU for the render is something we still need to evaluate.
For now, CPU is the way to go.

Not to rain on the parade, but do note that Jay Roth said they would implement it within a few years;
that was about 7 years ago...

there is still the excellent Octane

and maybe Kray3 will be released some day...

grabiller
11-07-2015, 12:55 AM
I'm pretty sure Lino and Rob are aware of Redshift, the current "Holy Grail" in terms of rendering, at least to me.

It does what LW is supposed to do, but around 20x faster (I'm not kidding here: a scene that takes 20 minutes to render in LW will render in less than a minute or so in Redshift, yet with superior sampling), and it is much faster than Octane because it is a biased renderer (like LW in interpolated mode, although you can use brute-force Monte Carlo too).

When I say 20x faster, I'm not even including the antialiasing part of the equation. At first you have a tendency to set your adaptive sampling like in LW or other renderers: 1-16, 1-32, or even 1-64 (or even 1-128, ouch!). Well, push that to 1-1024 in Redshift and you will barely notice the render-time difference, while having perfect DOF, motion blur, etc., without a single pixel of noise. Awesome. And for GI you can set such a high sampling rate for irradiance caching that you can say goodbye to animated interpolated-GI flickering. No noise, no flickering, and all of this much faster than any renderer out there, at least to my knowledge. And forget about VRAM limitations: it does out-of-core rendering. It will go a bit slower then, but it is still much faster than the others.

Anyway, my point here, or at least my dream, is that I wish Rob would get closer to the Redshift authors and strike some kind of deal to either integrate Redshift as the new "native" LightWave renderer, or at least borrow the GPU tech part to include it in the new LightWave renderer incarnation. That would make LightWave incredible again, with the fastest native renderer on the planet; exactly what artists, indies and small studios need from LightWave.

I encourage anyone who has access to Maya or Softimage (and soon 3ds Max) to actually test Redshift; this is the only way to really understand and see with your own eyes what I am talking about. Until then you can't really appreciate it, or feel the pain of using any other renderer.

Cheers,
Guy.

erikals
11-07-2015, 03:17 AM
Redshift is nice, but remember Octane is a beast at exterior renders.
My guess is Kray3 will probably be close to Redshift on interior renders, maybe even better, and with no darn subscription.

do note, i can get that 20min LightWave render down to 5min
trick 1 is to render the reflection in a separate pass (halves the render time)
trick 2 is to use DP Vector Blur (close to halves the render time)
trick 3 is to use adjustable Motion Blur passes - https://www.youtube.com/watch?v=78K72VFwsSU

and that is with the "old" current LightWave renderer;
these tricks applied to the new PBR renderer should make it, wild guess, a 2-minute render

i'm then excluding "trick 4" using DP DeNoiser - http://forums.newtek.com/showthread.php?146003-Gerardo-Estrada-DPont-Denoiser

...and other tricks
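For what it's worth, trick 1 above boils down to rendering reflections as their own pass and adding them back over the beauty pass in comp. A minimal sketch of that recombination, with pixels as (r, g, b) tuples in 0..1 and purely illustrative names:

```python
def composite_add(beauty, reflection):
    """Per-pixel additive recombination of two render passes,
    clamped to 1.0 for a simple LDR result."""
    return [
        tuple(min(b + r, 1.0) for b, r in zip(bp, rp))
        for bp, rp in zip(beauty, reflection)
    ]

# One grey pixel plus a strong green reflection highlight:
out = composite_add([(0.2, 0.2, 0.2)], [(0.1, 0.9, 0.1)])
```

In a real comp you would do this in float without the clamp, but the principle is the same: the passes recombine additively, so each can be rendered (and tuned) separately.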

-------------------

now, those are methods one can use, easy methods, but they do take a little time.
hopefully NewTek can implement several of those techniques into the new LightWave

-------------------

not saying "no" to Redshift, but I wonder...
how does it compare to Octane?
how does it compare to Thea?
how does it compare to other render engines? (so many of them these days)

grabiller
11-07-2015, 06:31 AM
I'm not attacking the LightWave renderer; I'm just pointing out Redshift and its GPU acceleration: the first biased (actually hybrid, as it also has brute-force Monte Carlo path tracing, like LW) renderer on the market with GPU acceleration, afaik, and the fastest one.

You can use any trick with the LW renderer; the same applies to Redshift, and you'll get roughly the same acceleration factor. Yet this is not completely linear, as I explained previously. You get that speed difference with "standard" or similar settings, but then if you crank up the adaptive sampling from, say, 1-32 to 1-1024, it will slow down the render by only 20s, 40s, or perhaps one minute in full HD, and that's where it starts to blow your mind in terms of result quality vs speed.

I understand your doubts, though; that's why I said you can only "understand" it by testing it yourself. That's the only answer I can give to your "how does it compare to...".

I myself tested it in XSI, against mental ray, Octane (I have XSI and LW Octane licenses), Arnold, 3Delight, iRay, I've tested KRay too in LW. All I can say is that if RedShift was available or integrated into LW - or if the LW renderer would benefit from the same kind of GPU acceleration - I would use it exclusively and never look back (well, until something new happens, of course).

m.d.
11-07-2015, 09:41 AM
I'm not attacking the LightWave renderer; I'm just pointing out Redshift and its GPU acceleration: the first biased (actually hybrid, as it also has brute-force Monte Carlo path tracing, like LW) renderer on the market with GPU acceleration, afaik, and the fastest one.


Just to clarify, Octane is biased as well when using the direct lighting kernel... there are a lot of misconceptions about this.

erikals
11-07-2015, 10:09 AM
The thing is, many LightWavers want many different third-party render engines.
I'm not saying "no" to Redshift, just saying many people want many different things.

Thea request > http://forums.newtek.com/showthread.php?144075-Thea-Render-for-LightWave
Vray request > http://forums.newtek.com/showthread.php?125923-Vray-for-Lightwave
Octane request > http://forums.newtek.com/showthread.php?143473-Octane-render-vs-Lightwave-s-native-render-engine
Kray3 request > http://forums.newtek.com/showthread.php?139867-Test-scene-for-Kray-3
Corona request > http://forums.newtek.com/showthread.php?146320-Corona-for-Lightwave-bad-news
Lagoa request >
https://www.behance.net/gallery/9646597/Human-Hair-3D-Images-by-Lagoa
https://mir-s3-cdn-cf.behance.net/project_modules/hd/5471429647049.560d7932eaa15.jpg

and, do these support Volumetrics / Hair / Particles / Texture Baking / DPont nodes ... etc etc

I've just read so many requests through the years, not to mention Unreal / RenderMan / Blender Cycles / Arnold... etc...

maybe a poll is in order... not sure...


I myself tested it in XSI, against mental ray, Octane (I have XSI and LW Octane licenses), Arnold, 3Delight, iRay, I've tested KRay too in LW. All I can say is that if RedShift was available or integrated into LW - or if the LW renderer would benefit from the same kind of GPU acceleration - I would use it exclusively and never look back (well, until something new happens, of course).
that does without doubt sound positive, though a question is: will a "subscription" attract LightWavers,
and is it tough enough to compete with Octane?

pming
11-07-2015, 04:38 PM
Hiya!


It was Lino and he said it is CPU-only at this time. One of the problems with GPU, at least until Unified Memory becomes mainstream, is that you are limited to the amount of VRAM of the card that has the least amount. If you have a heavy scene, it may not fit; but a CPU renderer will chew through it. IIRC, Octane 3 has a workaround for this, though.

True... 'ish. :) There are some *amazing* advances in texture compression now. I don't understand most of it, but basically... ahh hell. Here's a link to the Unreal Engine 4 announcement/advert for "Granite". I have no idea if this would work out well for 3D programs, but you never know!

https://www.unrealengine.com/blog/two-thousand-gigapixels-of-textures-anyone
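Granite's actual format is proprietary, but the underlying sparse virtual texturing idea is to split a huge texture into small tiles and only stream in the tiles a frame actually touches; everything else stays compressed on disk. A rough sketch of the tile lookup, with illustrative sizes:

```python
def tile_for_uv(u, v, mip, tex_size=131072, tile_size=128):
    """Map a UV coordinate at a given mip level to the (x, y) index of
    the 128x128 tile that must be resident in memory; all other tiles
    of the (huge) virtual texture can stay compressed on disk."""
    size = max(tex_size >> mip, tile_size)  # mip dimensions in texels
    tiles = size // tile_size               # tiles per side at this mip
    x = min(int(u * size) // tile_size, tiles - 1)
    y = min(int(v * size) // tile_size, tiles - 1)
    return x, y
```

With a 131072-texel (128K) texture, mip 0 has 1024x1024 tiles, yet a camera looking at one wall might only need a few dozen of them resident at once, which is how "two thousand gigapixels" of texture can fit in ordinary memory budgets.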

^_^

Paul L. Ming

mummyman
11-07-2015, 06:21 PM
I'm not attacking the LightWave renderer; I'm just pointing out Redshift and its GPU acceleration: the first biased (actually hybrid, as it also has brute-force Monte Carlo path tracing, like LW) renderer on the market with GPU acceleration, afaik, and the fastest one.

You can use any trick with the LW renderer; the same applies to Redshift, and you'll get roughly the same acceleration factor. Yet this is not completely linear, as I explained previously. You get that speed difference with "standard" or similar settings, but then if you crank up the adaptive sampling from, say, 1-32 to 1-1024, it will slow down the render by only 20s, 40s, or perhaps one minute in full HD, and that's where it starts to blow your mind in terms of result quality vs speed.

I understand your doubts, though; that's why I said you can only "understand" it by testing it yourself. That's the only answer I can give to your "how does it compare to...".

I myself tested it in XSI, against mental ray, Octane (I have XSI and LW Octane licenses), Arnold, 3Delight, iRay, I've tested KRay too in LW. All I can say is that if RedShift was available or integrated into LW - or if the LW renderer would benefit from the same kind of GPU acceleration - I would use it exclusively and never look back (well, until something new happens, of course).

Agreed on the speed of Redshift. A few of my coworkers are using it, but it's only being developed by 3 people or something like that. They used it in XSI for a major project; major learning curve and hassles. I'm shocked they are still pushing it and developing for XSI, but that's a whole other topic. Not sure why they don't develop for LightWave. Not that we'll need it soon. But it was very fast. Then again, anything is fast compared to mental ray.

grabiller
11-08-2015, 12:35 PM
Obviously none of you has tested Redshift first-hand, otherwise you would be far less sarcastic or full of doubts.

That said, I'm on board with you guys; I like the LW renderer as it is, despite its weaknesses, which will hopefully be addressed little by little by the new renderer incarnation.

In fact, re-reading my first post, I think I should have re-phrased my wish as this:
"I wish Rob Powers/Newtek get closer to the RedShift renderer authors so they could borrow/license some or all of their GPU acceleration technology."

Or, alternatively:
"I wish the Lightwave renderer developer(s) implement/recreate/reinvent the same kind of RedShift GPU acceleration technology into the Lightwave renderer."

Meaning I'm not after replacing the LightWave renderer with another 'named' renderer; I'm just hoping it can get the most acceleration power out of today's computers and GPUs, as Redshift currently demonstrates by being the fastest biased renderer on the market. And by the way, Redshift recently received a 3D World CG Award: https://thecgawards.com/ - http://www.creativebloq.com/3d/cg-awards-winners-2015-revealed-111517613

@mummyman
I'm not sure Redshift being developed by 'only' 3 people is relevant. Currently, isn't the LightWave renderer developed by only one person? Quality matters, not quantity.