
View Full Version : Brent's Lightwave 2019 Render Challenge



brent3d
02-08-2019, 09:46 AM
Show us how fast your CPU really is in Lightwave 2019 and post your video results in this brute force render comparison. The test scene is in the video description. Let the games begin!

https://youtu.be/jx0q1CGP4vg

144061

Matt
02-08-2019, 10:31 AM
I wouldn't compare to a GPU renderer. Apples and oranges.

That said, my home PC renders this scene in: 1min 57s

Nicolas Jordan
02-08-2019, 11:02 AM
I wouldn't compare to a GPU renderer. Apples and oranges.

That said, my home PC renders this scene in: 1min 57s

I agree. I've finally come to the conclusion that there isn't really any good way to compare GPU to CPU rendering; it's mostly pointless.

brent3d
02-08-2019, 11:05 AM
I wouldn't compare to a GPU renderer. Apples and oranges.

That said, my home PC renders this scene in: 1min 57s

Of course you can and thanks for posting your speed.

brent3d
02-08-2019, 11:15 AM
I agree. I've finally come to the conclusion that there isn't really any good way to compare GPU to CPU rendering; it's mostly pointless.

I just did. Lightwave 2019 is in the same market as Maya, 3ds Max, C4D, and Blender, so all features and render capabilities can be compared and contrasted, which is why Autodesk bought Arnold: to keep up with Octane, Cycles, and now Eevee. Why don't you ask Newtek to make Lightwave 2019's VPR GPU-renderable? Because of Autodesk's purchase of Arnold, path-traced GPU rendering is quickly becoming the standard. Where is Lightwave 2019 in all this?

Matt
02-08-2019, 11:16 AM
Of course you can and thanks for posting your speed.

Not if you're comparing their speeds as a point.

Matt
02-08-2019, 11:19 AM
I just did. Lightwave 2019 is in the same market as Maya, 3ds Max, C4D, and Blender, so all features and render capabilities can be compared and contrasted, which is why Autodesk bought Arnold: to keep up with Octane, Cycles, and now Eevee. Why don't you ask Newtek to make Lightwave 2019's VPR GPU-renderable? Because of Autodesk's purchase of Arnold, path-traced GPU rendering is quickly becoming the standard. Where is Lightwave 2019 in all this?

So compare CPU-to-CPU versions of their renderers; otherwise it's a completely useless comparison. It's the equivalent of comparing how fast a bicycle is to a Lamborghini and justifying it by saying "but they're both modes of transport".

brent3d
02-08-2019, 11:24 AM
Not if you're comparing their speeds as a point.

Why wouldn't speed be compared? What hardware gets utilized for rendering is a choice made by owners/developers, and those choices and their results can be compared. Quality and speed are always factors in regards to rendering ability, so how about finding ways to make VPR faster without sacrificing quality... hmmm, maybe put it on a GPU instead of a CPU, just a thought.

TheLexx
02-08-2019, 11:33 AM
I don't think there is any intention here to see how Octane "humiliates" LW's native CPU renderer. It is good to compare speeds because I would (in theory anyway) rather spend more to harness the LW renderer with a small farm than subscribe to Octane, and consequently take a certain time hit, but the test will give some sort of idea of what to expect. Matt, I also appreciate your time being posted - if the thread is allowed to roll, the results will be interesting. It is something I have always wondered about.

:)

Imageshoppe
02-08-2019, 02:07 PM
It is good to compare speeds because I would (in theory anyway) rather spend more to harness the LW renderer with a small farm than subscribe to Octane, and consequently take a certain time hit, but the test will give some sort of idea of what to expect. Matt, I also appreciate your time being posted - if the thread is allowed to roll, the results will be interesting. It is something I have always wondered about.

:)

LW 2019, 65 seconds on my Threadripper 1950X, overclocked on all cores to 4.0 GHz. But the GPU denoise step is not usable for animation, right? And the Octane denoise is? That's a big deal, and without denoise it's a crazy-town mess. For grins, I'll set up that shot as a turntable and see exactly what the GPU denoise looks like in animation.

Interestingly, I've sort of reluctantly come to the opposite conclusion, as I'm not overall very happy with the LW2018 or 2019 render speeds compared to the various "cheats" I use on LW2015 to get through heavy and long animation workloads, and I'm not sure that my current small renderfarm approach is as turnkey, energy efficient and fast as a multiple GPU install in my current system would be. In the past I've been very reluctant to embrace GPU rendering due to the various issues and limitations of what you can do compared with LW native rendering. But gosh, LW is now SLOW... despite everything I've tried and following the rendering wisdom of RebelHill.

This sort of reminds me of when we moved from the 5.x cycle to 6.0, and there was the huge slow-down due to floating point rendering. At the time (and for a few years after), I HATED the 6.x series and would revert to 5.6 for the speed increase. It took a long time (and several faster hardware iterations) to appreciate the value of those higher dynamic range images...

Regards,

OlaHaldor
02-08-2019, 04:24 PM
I downloaded the LW 2019 demo to try this one.
i7 5960x overclocked to 4.1 GHz.
It's got 8 cores/16 threads.

Took 3min 9 seconds total.
I have Octane, but only for Modo at the moment, so I can't test it since I can't export the scene from the demo. But I'm sure it would be *fast* on the 2080 Ti.

brent3d
02-08-2019, 05:07 PM
LW 2019, 65 seconds on my Threadripper 1950X, overclocked on all cores to 4.0 GHz. But the GPU denoise step is not usable for animation, right? And the Octane denoise is? That's a big deal, and without denoise it's a crazy-town mess. For grins, I'll set up that shot as a turntable and see exactly what the GPU denoise looks like in animation.

Interestingly, I've sort of reluctantly come to the opposite conclusion, as I'm not overall very happy with the LW2018 or 2019 render speeds compared to the various "cheats" I use on LW2015 to get through heavy and long animation workloads, and I'm not sure that my current small renderfarm approach is as turnkey, energy efficient and fast as a multiple GPU install in my current system would be. In the past I've been very reluctant to embrace GPU rendering due to the various issues and limitations of what you can do compared with LW native rendering. But gosh, LW is now SLOW... despite everything I've tried and following the rendering wisdom of RebelHill.

This sort of reminds me of when we moved from the 5.x cycle to 6.0, and there was the huge slow-down due to floating point rendering. At the time (and for a few years after), I HATED the 6.x series and would revert to 5.6 for the speed increase. It took a long time (and several faster hardware iterations) to appreciate the value of those higher dynamic range images...

Regards,

Brilliant post! It does remind me of the 5.x to 6.0 jump as well. Having used PBR a lot, I find it almost demands GPU power to render, and now that out-of-core rendering is becoming more available for the use of system memory, GPU is becoming more practical.

brent3d
02-08-2019, 05:10 PM
I downloaded the LW 2019 demo to try this one.
i7 5960x overclocked to 4.1 GHz.
It's got 8 cores/16 threads.

Took 3min 9 seconds total.
I have Octane, but only for Modo at the moment, so I can't test it since I can't export the scene from the demo. But I'm sure it would be *fast* on the 2080 Ti.

On a 2080 Ti!?! I should've said there was a GPU speed limit :D But yes, you would drop the scene in around, hmmm, 5 sec or less... lol, and that's the point. Thanks for posting your CPU findings, excellent!

Tim Parsons
02-08-2019, 06:56 PM
47 seconds - I'm good with that.
i7-8086K @ 4GHz 6 Core

brent3d
02-08-2019, 07:14 PM
47 seconds - I'm good with that.
i7-8086K @ 4GHz 6 Core

Second fastest time I know of. Still the default scene settings?

Tim Parsons
02-08-2019, 08:08 PM
Second fastest time I know of. Still the default scene settings?

Changed the camera samples to 3 and lowered the GI rays to 16. Render came out really clean and nice.

I love the speed of Octane as well as the look, but it's a pain in the *** to surface stuff in it. I mostly do interior stills, so my workflow is generally to work all day on the scenes and then load them into RenderQ and go home. :) So native is just fine by me, and 2019 is faster and cleaner than 2018, so I can't complain. If a GPU renderer becomes available that works with LW shaders etc., I'll check it out for sure.

brent3d
02-08-2019, 08:39 PM
Changed the camera samples to 3 and lowered the GI rays to 16. Render came out really clean and nice.

I love the speed of Octane as well as the look, but it's a pain in the *** to surface stuff in it. I mostly do interior stills, so my workflow is generally to work all day on the scenes and then load them into RenderQ and go home. :) So native is just fine by me, and 2019 is faster and cleaner than 2018, so I can't complain. If a GPU renderer becomes available that works with LW shaders etc., I'll check it out for sure.

This is good to hear. Native GPU rendering is the way to go; I'm spoiled now by Octane/Blender's full integration. For Lightwave with Octane, it would be nice if they integrated the Surface menus into another tab in the material editor instead of just having that block of empty space with the Options button for you to click on, because node editors can create a lot of clutter and slow down the workflow. Lightwave with PBR is just begging for native GPU support; just imagine VPR running off your GPUs.

thomascheng
02-08-2019, 08:51 PM
The problem with this scene is that there's not a lot of processing of other rendering elements. Also, it seems this scene will favor Octane as a pure raytracing-only operation. I'd like to see a bigger scene with lots of volumetrics, instancing, textures, and displacement and see how it compares. The general consensus is that CPU is still better for heavy, complex scenes and GPU is better for raytracing-heavy scenes.

brent3d
02-08-2019, 09:24 PM
The problem with this scene is that there's not a lot of processing of other rendering elements. Also, it seems this scene will favor Octane as a pure raytracing-only operation. I'd like to see a bigger scene with lots of volumetrics, instancing, textures, and displacement and see how it compares. The general consensus is that CPU is still better for heavy, complex scenes and GPU is better for raytracing-heavy scenes.

I've heard someone say that before in regards to the topic of complexity, but outside of previous GPU memory limitations vs system memory, I've not seen that to be the case. The reason I went GPU is because I needed to render dense, complex scenes (event renders with lots of CAD geometry etc.) with volumetric lighting, within closed spaces, at 1080p and 4K with fly-throughs, and have it out to the client by end of day, on one machine. Can't imagine the size of farm I would've needed.

OFF
02-08-2019, 10:32 PM
1m 47s, Xeon E5-2692 x 2, 64 GB RAM.
It seems to me that the GPU denoiser can be used in animation, but camera AA should do at least half the work and the denoiser the other half. So for animation, anti-aliasing should be no less than, say, 32. This will avoid blur and flickering distortions. In addition, effects such as DOF are best done in a video editor, since the GPU denoiser blurs and destroys this effect.

brent3d
02-08-2019, 10:36 PM
1m 47s, Xeon E5-2692 x 2, 64 GB RAM.
It seems to me that the GPU denoiser can be used in animation, but camera AA should do at least half the work and the denoiser the other half. So for animation, anti-aliasing should be no less than, say, 32. This will avoid blur and flickering distortions. In addition, effects such as DOF are best done in a video editor, since the GPU denoiser blurs and destroys this effect.

Yes, denoising is not a full-on solution to everything, but it should be used strategically. Good speed, nice to see a Xeon working.

roboman
02-08-2019, 11:57 PM
6 min 13 sec, Intel i7-3770, 3.4 GHz, 4 cores

brent3d
02-09-2019, 12:18 AM
6 min 13 sec, Intel i7-3770, 3.4 GHz, 4 cores

Good job, the scene can be demanding. Thanks for posting.

MarcusM
02-09-2019, 08:06 AM
I allowed myself to optimize your scene a little while preserving render quality.

On my terrible private 4-core i5, times are:
Brent's scene: 5 min 50 sec
Optimized scene: 1 min 9 sec

Optimized scene can be downloaded here:
https://drive.google.com/open?id=19EuAKw49MFSkvePQayUxN4_-MITlDOX2

Images show optimized vs original scene:

OlaHaldor
02-09-2019, 09:59 AM
On a 2080 Ti!?! I should've said there was a GPU speed limit :D But yes, you would drop the scene in around, hmmm, 5 sec or less... lol, and that's the point. Thanks for posting your CPU findings, excellent!
If you or anyone else could export the scene as FBX or Alembic or something, I could bring it into Modo and see what happens with Octane, just for giggles.

jboudreau
02-09-2019, 09:59 AM
It's all about optimization. I optimized the scene using just 1 direct light with 2 rays, AA set to 8 - render time 9.6 seconds on a dual Xeon 32-thread system.

144082

CoryC
02-09-2019, 10:06 AM

I took MarcusM's scene and did a little more optimization to it. Brought it from 59 sec down to 41 sec on a 6-core i7.

JohnMarchant
02-09-2019, 11:57 AM
My render time on a Dell XPS laptop: 1 min 15 sec, optimised.

thomascheng
02-09-2019, 12:02 PM
Ryzen 1800X

Original - 2.21
Optimized - 27.1
Optimized with BG sample off, Polygon Intersection set to Fastest, tile size 16 - 23.0

thomascheng
02-09-2019, 12:06 PM
It's all about optimization. I optimized the scene using just 1 direct light with 2 rays, AA set to 8 - render time 9.6 seconds on a dual Xeon 32-thread system.

144082

Are you using the bounce cards or area lights for your optimization?

3dslider
02-09-2019, 05:33 PM
As I shared on Facebook, I'll put it here too :)

Original : 9m 8s
Optimized : 1m 28s

brent3d
02-09-2019, 05:37 PM
As I shared on Facebook, I'll put it here too :)

Original : 9m 8s
Optimized : 1m 28s

That's a good time. If you're using Interpolation, make sure you retain the same shadow articulation as in Brute Force; that way your render will be closer to Octane's Direct Lighting accuracy.

3dslider
02-09-2019, 05:56 PM
That's a good time. If you're using Interpolation, make sure you retain the same shadow articulation as in Brute Force; that way your render will be closer to Octane's Direct Lighting accuracy.

Not interpolation; I just took Brute Force down to 2 rays + 1 ray for R/R/S, and it is indeed fast for the same result with denoise :)


This is the optimized image:

144084

brent3d
02-09-2019, 07:18 PM
Not interpolation; I just took Brute Force down to 2 rays + 1 ray for R/R/S, and it is indeed fast for the same result with denoise :)


This is the optimized image:

144084

Awesome!

Rayek
02-10-2019, 01:50 PM
Now that we are comparing Apples with Oranges, let's throw in a banana.
Eevee test render. 1920x1080px 1.76 seconds on GTX 1080.
4K 4096x2160px render: 4.74 seconds

http://www.upl.co/uploads/eeveetestthumb1549831644.png

Full size image render: http://www.upl.co/uploads/eeveetest1549831669.png
Blend file: http://www.upl.co/uploads/eeveeBrent3D1549831330.zip

PS I should've turned off the reflections on the base, but forgot.

brent3d
02-10-2019, 02:12 PM
Now that we are comparing Apples with Oranges, let's throw in a banana.
Eevee test render. 1920x1080px 1.76 seconds on GTX 1080.
4K 4096x2160px render: 4.74 seconds

http://www.upl.co/uploads/eeveetestthumb1549831644.png

Full size image render: http://www.upl.co/uploads/eeveetest1549831669.png
Blend file: http://www.upl.co/uploads/eeveeBrent3D1549831330.zip

PS I should've turned off the reflections on the base, but forgot.

You just dropped the mic :lol: Show's over folks, nothing to see here.

Rayek
02-10-2019, 02:30 PM
To be fair, the choice of render engine depends on the job at hand. Eevee has its limitations, just like any other render engine. Let the job requirements define the choice of render engine, I'd say.

That said, Eevee is blowing my mind at the moment. I haven't gone in deep yet, and the (almost) realtime render paradigm requires a bit of a rethink here and there. So I am still very much a novice. But it is *fun*.

thomascheng
02-10-2019, 03:06 PM
If we are comparing Eevee, we should also bring in some Unreal with the bridge. Anyone want to try?

brent3d
02-10-2019, 03:37 PM
To be fair, the choice of render engine depends on the job at hand. Eevee has its limitations, just like any other render engine. Let the job requirements define the choice of render engine, I'd say.

That said, Eevee is blowing my mind at the moment. I haven't gone in deep yet, and the (almost) realtime render paradigm requires a bit of a rethink here and there. So I am still very much a novice. But it is *fun*.

Totally agree. I had a project once that actually required a hard-edged shadow/graphic look, so back to LW's raytracer I went... lol.
Eevee will be a game changer since it's actually in a 3D application and not just a game editor.

OlaHaldor
02-10-2019, 03:51 PM
Isn't Eevee a bit like Marmoset Toolbag though? It's sort of an OpenGL/DirectX render engine? So it's literally *real-time*, no?

I rendered 1920x1080 with Octane in Modo at 8 seconds.
I tried 4K for fun; 38 seconds.
http://www.olahaldor.net/annet/screendump/octane_twoarealights.JPG

I recorded a short clip too. Might be especially interesting for those who haven't tried gpu render yet.
(it's still processing, will be HD and beyond any time now..)

http://www.youtube.com/watch?v=bFfUQvG4_bk

brent3d
02-10-2019, 04:13 PM
Isn't Eevee a bit like Marmoset Toolbag though? It's sort of an OpenGL/DirectX render engine? So it's literally *real-time*, no?

I rendered 1920x1080 with Octane in Modo at 8 seconds.
I tried 4K for fun; 38 seconds.
http://www.olahaldor.net/annet/screendump/octane_twoarealights.JPG

I recorded a short clip too. Might be especially interesting for those who haven't tried gpu render yet.
(it's still processing, will be HD and beyond any time now..)

http://www.youtube.com/watch?v=bFfUQvG4_bk

Outstanding! What card or cards?

Rayek
02-10-2019, 04:24 PM
Isn't Eevee a bit like Marmoset Toolbag though? It's sort of an OpenGL/DirectX render engine? So it's literally *real-time*, no?

I rendered 1920x1080 with Octane in Modo at 8 seconds.
I tried 4K for fun; 38 seconds.
http://www.olahaldor.net/annet/screendump/octane_twoarealights.JPG

I recorded a short clip too. Might be especially interesting for those who haven't tried gpu render yet.
(it's still processing, will be HD and beyond any time now..)

http://www.youtube.com/watch?v=bFfUQvG4_bk

Eevee is not quite comparable to a game render engine. It uses other techniques which improve the overall quality, and it uses sampling and "stuff". It renders with high-quality anti-aliasing, and even uses a denoiser. It's somewhere in between in regards to rendering.

Good job on that Octane render, btw!

I am now playing with Godot to get the lighting and quality right, but it takes more manual setup. Completely different approach with GI probes and such. But it is truly real-time, which makes sense because it is literally a game engine.

@ThomasCheng Unreal would be a good comparison too. It is amazing what we can achieve with all these render technologies.

Rayek
02-10-2019, 05:03 PM
Cycles with combined GPU + CPU (GTX 1080 + an aging i7 920).

128 samples with denoiser default settings. 35.62 seconds. Not half bad. But I do wonder about the usefulness of this test, aside from the obvious GPU versus CPU render speed differences.

@olahaldor Octane seems to use more VRAM than Cycles for this scene (1331MB versus 99.10MB). Is this correct? Seems strange to me, seeing this is a simple scene.

@Brent3D Your Lightwave Octane render seems to use 576MB VRAM. Also quite a lot compared, but less than Olahaldor's Modo Octane render. Anyway, just curious.

http://www.upl.co/uploads/cyclesbrent1549842629.jpg

brent3d
02-10-2019, 06:47 PM
Cycles with combined GPU + CPU (GTX 1080 + an aging i7 920).

@Brent3D Your Lightwave Octane render seems to use 576MB VRAM. Also quite a lot compared, but less than Olahaldor's Modo Octane render. Anyway, just curious.


Excellent work! Not sure on the VRAM differences, checking it out.

OlaHaldor
02-11-2019, 01:09 AM
Outstanding! What card or cards?

1 x EVGA RTX 2080 Ti. No overclock or anything.
(seeing this I *really* want to get another one if I get some jobs to pay for it ;D)



Eevee is not quite comparable to a game render engine. It uses other techniques which improve the overall quality, and it uses sampling and "stuff". It renders with high-quality anti-aliasing, and even uses a denoiser. It's somewhere in between in regards to rendering.

Ah ok :) Sounds very interesting.
While you can't do a ton of AA samples etc on the fly in Marmoset, you can when you save an image (or image sequence). I didn't make a directional light in my example, but that'd make the shapes read better by adding a hard shadow.

(marmoset really is my go-to app for quick and dirty renders. So fast to set up something that looks good enough)


I can't really tell why Octane used more VRAM here. I've seldom thought about VRAM usage when I render. I only care about it if I get an error. ;D

rustythe1
02-11-2019, 03:08 AM
If we are comparing Eevee, we should also bring in some Unreal with the bridge. Anyone want to try?

I use Unreal all the time now. I just rendered a 2-minute animation of an entire school fly-through with millions of polys and background elements, full dynamic lighting with no baking and around 80 lights, and full exterior grounds with dynamic vegetation, rendered at 4K cinematic in 15 mins. The Unreal bridge is the best thing to happen to Lightwave in a long while (I even animated the camera sequences in Lightwave and got a one-click instant Unreal movie!)
And as regards comparing GPU and CPU, it's pointless, as there are so many arguments for and against each, and then on the CPU side there are massive differences between CPUs themselves. As I've argued on other threads, even the way the Intels handle the FP values makes a huge difference; my current CPU renders 5 times faster than my old 5960, I rendered the original scene in 32 seconds (that's almost 4 times as fast as Matt's time), and the optimized scene was under 10 seconds. So if you want to compare CPU to GPU you first need to compare all CPUs to each other - especially if you're comparing a CPU to a top-end GPU like a --80ti, should you not, say, compare an 8-core system to that of a standard single 20-1050gtx?
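
For anyone who wants to put numbers on that "compare all CPUs to each other first" point, here is a minimal Python sketch (an editor's illustration only, not from the original post; it uses the default-scene times reported earlier in this thread and arbitrarily picks Matt's 1 min 57 s as the baseline):

# Normalize the default-scene render times posted in this thread so CPUs
# can be compared to each other before anyone compares them to a GPU.
times_s = {
    "Matt (home PC)": 117,                  # 1 min 57 s
    "Imageshoppe (TR 1950X @ 4.0 GHz)": 65,
    "OlaHaldor (i7-5960X @ 4.1 GHz)": 189,  # 3 min 9 s
    "OFF (2x Xeon E5-2692)": 107,           # 1 min 47 s
    "roboman (i7-3770)": 373,               # 6 min 13 s
    "MarcusM (4-core i5)": 350,             # 5 min 50 s
    "3dslider": 548,                        # 9 min 8 s
    "rustythe1": 32,
}

baseline = times_s["Matt (home PC)"]
for name, t in sorted(times_s.items(), key=lambda kv: kv[1]):
    print(f"{name:34s} {t:4d} s  ({baseline / t:4.1f}x vs baseline)")

Swap in your own time to see where your CPU lands before reaching for GPU comparisons.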

brent3d
02-11-2019, 07:45 AM
1 x EVGA RTX 2080 Ti. No overclock or anything.
(seeing this I *really* want to get another one if I get some jobs to pay for it ;D)



That's insanely fast on one card! If anyone wants to know why you would need to render that fast with Path Tracing, Brute Force, or Direct Lighting, think VR rendering and animation at 4K and 8K; it's here, and that truly is a GPU market.

brent3d
02-11-2019, 08:00 AM
I use Unreal all the time now. I just rendered a 2-minute animation of an entire school fly-through with millions of polys and background elements, full dynamic lighting with no baking and around 80 lights, and full exterior grounds with dynamic vegetation, rendered at 4K cinematic in 15 mins. The Unreal bridge is the best thing to happen to Lightwave in a long while (I even animated the camera sequences in Lightwave and got a one-click instant Unreal movie!)
And as regards comparing GPU and CPU, it's pointless, as there are so many arguments for and against each, and then on the CPU side there are massive differences between CPUs themselves. As I've argued on other threads, even the way the Intels handle the FP values makes a huge difference; my current CPU renders 5 times faster than my old 5960, I rendered the original scene in 32 seconds (that's almost 4 times as fast as Matt's time), and the optimized scene was under 10 seconds. So if you want to compare CPU to GPU you first need to compare all CPUs to each other - especially if you're comparing a CPU to a top-end GPU like a --80ti, should you not, say, compare an 8-core system to that of a standard single 20-1050gtx?

You are ignoring what's going on in the market. When was the last time you saw Autodesk marketing Mental Ray? What was their latest big company purchase in regards to rendering? What rendering software was just integrated into Maya, and soon 3ds Max? What did that software's developers announce 10 months ago they had achieved with the renderer, although they were six years behind other companies? What is Vray, and what does Vray now support and why? What does Nvidia's RTX stand for, and what 3D applications have announced upcoming support for it? What branches of the government does Nvidia contract with?
There are no fruits, only a basket.

brent3d
02-11-2019, 08:04 AM
1 x EVGA RTX 2080 Ti. No overclock or anything.
(seeing this I *really* want to get another one if I get some jobs to pay for it ;D)




Ah ok :) Sounds very interesting.
While you can't do a ton of AA samples etc on the fly in Marmoset, you can when you save an image (or image sequence). I didn't make a directional light in my example, but that'd make the shapes read better by adding a hard shadow.

(marmoset really is my go-to app for quick and dirty renders. So fast to set up something that looks good enough)


I can't really tell why Octane used more VRAM here. I've seldom thought about VRAM usage when I render. I only care about it if I get an error. ;D

For clarity, and still in beta:
https://youtu.be/2hGSDD9-Tkc

thomascheng
02-11-2019, 08:41 AM
I'd love to see Newtek add GPU support too, but I think it might be better for them to just optimize the current CPU renderer first. I think there's still a lot more optimization that can occur. The GPU side would require a lot of work, looking at how long it took Vray and Arnold to implement GPU rendering. The resources might be better spent on supporting AMD to implement ProRender in LW, with some additional code to get it to match the LW renderer as much as possible. Who knows, maybe the future will be PCIe CPUs to compete with GPUs.

brent3d
02-11-2019, 09:16 AM
I'd love to see Newtek add GPU support too, but I think it might be better for them to just optimize the current CPU renderer first. I think there's still a lot more optimization that can occur. The GPU side would require a lot of work, looking at how long it took Vray and Arnold to implement GPU rendering. The resources might be better spent on supporting AMD to implement ProRender in LW, with some additional code to get it to match the LW renderer as much as possible. Who knows, maybe the future will be PCIe CPUs to compete with GPUs.

Arnold "had" to go GPU, watch their talks on the issue. They knew they were 6 years behind the competition. The Autodesk partnership funded it which gave them direct access to Nvidia with Optix. (Optix is more of a selling point for users since Arnold got into GPU late in my opinion).
Native GPU/CPU supported Path-Traced PBR is now quickly becoming the norm, the short-term rendering future is RTX, realtime Path Tracing at 4k-8k, and 4k-8k VR. If Lightwave's 2019's renderer is the closest to an Octane Direct Lighting render setting then what are you optimizing to, oblivion?

Arnold GPU tech talk:
https://youtu.be/bqnhhoA99As

raymondtrace
02-11-2019, 09:34 AM
You are ignoring what's going on in the market...

Is there a chance that you were ignoring what's going on in the post to which you were replying?

Rayek
02-11-2019, 10:12 AM
I agree. The C4D developers realized they had to add GPU render tech to their software ASAP to remain relevant and competitive. I think their situation was comparable to Lightwave: an aging CPU based render engine while all the competition was already, or were going, GPU.

Instead of reinventing the wheel, Maxon decided to integrate ProRender, which was/is a smart move. It would have been an even smarter move for Lightwave's development: a mature combined GPU/CPU renderer out of the box, and fewer headaches over how to divide and strategize development resources. Existing assets could have been easily re-used (existing material libraries), and so on.

When Lightwave 2018 was released, I did not understand the lack of GPU rendering in the new render engine. Aside from first-release bugs and rough edges, I thought it was a great quality engine - but the competition offers far more flexibility with BOTH CPU and GPU support, and offers the same quality and beyond. 2018's new renderer was years behind the competition the moment it was released.

The Newtek team could have focused more on other long outstanding improvements, instead of starting a CPU renderer from scratch that would be years behind the competition when released in 2018. I still can't wrap my head around it. In particular seeing the development behind (almost) real-time GPU based rendering based on game render tech. Newtek could have really made a smashing comeback if the new render engine would have been a CPU/GPU hybrid one. Such a lost opportunity.

Still, water under the bridge. The only way forward now is to add GPU rendering to Lightwave, preferably combined CPU/GPU, to keep being relevant in the semi-long term. Which is a VERY specialized coding job, and developers in this field are very hard to hire and keep hold of. So I see Newtek's Lightwave team going forward without GPU support because of the time and effort invested in developing their new CPU-only renderer, even to the detriment of long-term survival (sunk cost fallacy). Many Lightwave users will have to invest in an alternative GPU renderer (Octane) to remain competitive in their work, and based on what I read here, many already have. Which marginalizes the built-in new 2018/19 render engine.

Modo is sort-of in the same boat, but can at least boast being one of the best modelers in the market.

Having said all this, I might be proven wrong, and the LW dev team is going to surprise us all with GPU support next year, or the year after that. The question is whether it will matter anymore to anyone at that time.

Matt
02-11-2019, 10:29 AM
Changed the camera samples to 3 and lowered the GI rays to 16. Render came out really clean and nice.

I love the speed of Octane as well as the look, but it's a pain in the *** to surface stuff in it. I mostly do interior stills, so my workflow is generally to work all day on the scenes and then load them into RenderQ and go home. :) So native is just fine by me, and 2019 is faster and cleaner than 2018, so I can't complain. If a GPU renderer becomes available that works with LW shaders etc., I'll check it out for sure.

What's the render time when you don't change the scene?

brent3d
02-11-2019, 10:32 AM
Is there a chance that you were ignoring what's going on in the post to which you were replying?

No, I was addressing the CPU argument, not the use of Unreal or anything like that.

brent3d
02-11-2019, 10:37 AM
I agree. The C4D developers realized they had to add GPU render tech to their software ASAP to remain relevant and competitive. I think their situation was comparable to Lightwave: an aging CPU based render engine while all the competition was already, or were going, GPU.

Instead of reinventing the wheel, Maxon decided to integrate ProRender, which was/is a smart move. It would have been an even smarter move for Lightwave's development: a mature combined GPU/CPU renderer out of the box, and fewer headaches over how to divide and strategize development resources. Existing assets could have been easily re-used (existing material libraries), and so on.

When Lightwave 2018 was released, I did not understand the lack of GPU rendering in the new render engine. Aside from first-release bugs and rough edges, I thought it was a great quality engine - but the competition offers far more flexibility with BOTH CPU and GPU support, and offers the same quality and beyond. 2018's new renderer was years behind the competition the moment it was released.

The Newtek team could have focused more on other long outstanding improvements, instead of starting a CPU renderer from scratch that would be years behind the competition when released in 2018. I still can't wrap my head around it. In particular seeing the development behind (almost) real-time GPU based rendering based on game render tech. Newtek could have really made a smashing comeback if the new render engine would have been a CPU/GPU hybrid one. Such a lost opportunity.

Still, water under the bridge. The only way forward now is to add GPU rendering to Lightwave, preferably combined CPU/GPU, to keep being relevant in the semi-long term. Which is a VERY specialized coding job, and developers in this field are very hard to hire and keep hold of. So I see Newtek's Lightwave team going forward without GPU support because of the time and effort invested in developing their new CPU-only renderer, even to the detriment of long-term survival (sunk cost fallacy). Many Lightwave users will have to invest in an alternative GPU renderer (Octane) to remain competitive in their work, and based on what I read here, many already have. Which marginalizes the built-in new 2018/19 render engine.

Modo is sort-of in the same boat, but can at least boast being one of the best modelers in the market.

Having said all this, I might be proven wrong, and the LW dev team is going to surprise us all with GPU support next year, or the year after that. The question is whether it will matter anymore to anyone at that time.

Well said and directly on target.

raymondtrace
02-11-2019, 11:26 AM
No, I was addressing the CPU argument...

I beg to differ that you were addressing anything by launching into a quiz game. :)

Yes, GPU rendering is a thing and big brand-name companies are incorporating it. What's going on in the market is that there are separate renderers that 3D animation apps utilize. Even AD couldn't excel at this internally and had to purchase a renderer from outside. Isn't LW part of this trend of external rendering options by continuing to integrate with Kray, Octane and UE? Isn't LW part of this trend by leaning into PBR with its native renderer... and inherently making it easier to move materials to external renderers? Is Russell really ignoring the market by positioning himself like everyone else (being able to render via CPU and GPU)?

I recently watched a video of a guy that wanted to animate in Softimage since college and did not get a license until the time Softimage was dragged toward the grave by AD. All of us live in ignorance of what will happen tomorrow. Let's just enjoy the ride of discovery and use the tools that work for us.

brent3d
02-11-2019, 11:30 AM
I beg to differ that you were addressing anything by launching into a quiz game. :)

Yes, GPU rendering is a thing and big brand-name companies are incorporating it. What's going on in the market is that there are separate renderers that 3D animation apps utilize. Even AD couldn't excel at this internally and had to purchase a renderer from outside. Isn't LW part of this trend of external rendering options by continuing to integrate with Kray, Octane and UE? Isn't LW part of this trend by leaning into PBR with its native renderer... and inherently making it easier to move materials to external renderers? Is Russell really ignoring the market by positioning himself like everyone else (being able to render via CPU and GPU)?

I recently watched a video of a guy that wanted to animate in Softimage since college and did not get a license until the time Softimage was dragged toward the grave by AD. All of us live in ignorance of what will happen tomorrow. Let's just enjoy the ride of discovery and use the tools that work for us.

Rayek's post below says it best:

"I agree. The C4D developers realized they had to add GPU render tech to their software ASAP to remain relevant and competitive. I think their situation was comparable to Lightwave: an aging CPU based render engine while all the competition was already, or were going, GPU.

Instead of reinventing the wheel, Maxon decided to integrate ProRender, which was/is a smart move. It would have been an even smarter move for Lightwave's development: a mature combined GPU/CPU renderer out of the box, and fewer headaches over how to divide and strategize development resources. Existing assets could have been easily re-used (existing material libraries), and so on.

When Lightwave 2018 was released, I did not understand the lack of GPU rendering in the new render engine. Aside from first-release bugs and rough edges, I thought it was a great quality engine - but the competition offers far more flexibility with BOTH CPU and GPU support, and offers the same quality and beyond. 2018's new renderer was years behind the competition the moment it was released.

The Newtek team could have focused more on other long outstanding improvements, instead of starting a CPU renderer from scratch that would be years behind the competition when released in 2018. I still can't wrap my head around it. In particular seeing the development behind (almost) real-time GPU based rendering based on game render tech. Newtek could have really made a smashing comeback if the new render engine would have been a CPU/GPU hybrid one. Such a lost opportunity.

Still, water under the bridge. The only way forward now is to add GPU rendering to Lightwave, preferably combined CPU/GPU, to keep being relevant in the semi-long term. Which is a VERY specialized coding job, and developers in this field are very hard to hire and keep hold of. So I see Newtek's Lightwave team going forward without GPU support because of the time and effort invested in developing their new CPU-only renderer, even to the detriment of long-term survival (sunk cost fallacy). Many Lightwave users will have to invest in an alternative GPU renderer (Octane) to remain competitive in their work, and based on what I read here, many already have. Which marginalizes the built-in new 2018/19 render engine.

Modo is sort-of in the same boat, but can at least boast being one of the best modelers in the market.

Having said all this, I might be proven wrong, and the LW dev team is going to surprise us all with GPU support next year, or the year after that. The question is whether it will matter anymore to anyone at that time."

thomascheng
02-11-2019, 12:01 PM
So what is the solution? At this point, it would be hard for Newtek to abandon CPU rendering, and I doubt they have the resources to do both. Should they not even bother to optimize the CPU renderer?

brent3d
02-11-2019, 12:08 PM
So what is the solution? At this point, it would be hard for Newtek to abandon CPU rendering, and I doubt they have the resources to do both.

The solution is so clearly stated in Rayek's post; re-read it.

raymondtrace
02-11-2019, 12:28 PM
The solution is so clearly stated in Rayek's post; re-read it.

Don't you want to post it ONE MORE TIME? :)

While Rayek makes some good points, even Rayek acknowledged "I did not understand the lack of GPU rendering" and "I might be proven wrong".

You seem to be ignoring that LW is moving forward with PBR rendering in 2018 & 2019. This movement may be too slow for you but you've got to get off the rant that it isn't moving forward.


...The Newtek team could have focused more on other long outstanding improvements, instead of starting a CPU renderer from scratch that would be years behind the competition when released in 2018. I still can't wrap my head around it. In particular seeing the development behind (almost) real-time GPU based rendering based on game render tech. Newtek could have really made a smashing comeback if the new render engine would have been a CPU/GPU hybrid one. Such a lost opportunity...

That new CPU renderer aligns better with standard PBR workflows. It is silly to dismiss this step forward. LW2019 introduces the bridge for UE. Clearly, NewTek is not losing the opportunity to take advantage of free game render tech.

brent3d
02-11-2019, 12:37 PM
Don't you want to post it ONE MORE TIME? :)

While Rayek makes some good points, even Rayek acknowledged "I did not understand the lack of GPU rendering" and "I might be proven wrong".

You seem to be ignoring that LW is moving forward with PBR rendering in 2018 & 2019. This movement may be too slow for you but you've got to get off the rant that it isn't moving forward.



That new CPU renderer aligns better with standard PBR workflows. It is silly to dismiss this step forward. LW2019 introduces the bridge for UE. Clearly, NewTek is not losing the opportunity to take advantage of free game render tech.

No one has yet ranted in this series of posts, regardless of their stance, so why say that?
I never said or suggested that Lightwave was not moving forward, for I do not know their roadmap; do you?
I have pointed out where everyone else is going or has gone, and the possible consequences of not following.

Marander
02-11-2019, 12:47 PM
So what is the solution? At this point, it would be hard for Newtek to abandon CPU rendering, and I doubt they have the resources to do both.

They should stick with the LW2018 / 2019 CPU render engine in my opinion. There is much more important stuff to do.

Not everyone can or wants to render on GPU. CPU render engines run fine on any machine, including ultrabooks.

I own several CPU and GPU based render engine licenses but I prefer running long render jobs on CPU in LightWave or other CPU based render engines. They can run for many days at 100% utilization and I don't need to worry about killing my GPUs. Water cooling is not my thing and GPUs are not as durable as CPUs in constant full utilization. I use the GPUs only for lookdev (but VPR is great too in that regard), quick renders and simulations, not for animations. One reason I prefer doing stuff on CPU engines is that I can continue my 3D stuff even when I travel. Best of both worlds are hybrid render engines of course.

The LW render quality and VPR are great in most parts and render times are much better now in 2019 with the Nvidia denoiser.

Currently my most fun render engine is Cycles 4D with its newest (early access) update; it's incredible how fast it runs on hybrid CPU / GPU (but only in CUDA mode, OpenCL is half as fast). The feature set, stability, quality, Cinema integration and its node editor are incredibly good. It's worth watching the Insydium videos to get an idea.

It could have been a good move in my opinion to integrate Cycles in LightWave, but it's a lot of work to do a seamless integration.

ProRender is not my thing; never used it, except for some tests. I still prefer Physical Render (which is also PBR-based), Vray or LightWave over it. All of them offer great features and quality with render times that are acceptable for me.

It took MAXON a lot of work to seamlessly integrate ProRender, and it's still not completely finished. The Cycles integration took Insydium several years to get to this level of quality.

Therefore I prefer if NewTek continues with the CPU render engine and works on other important parts of the software.

And not to forget, most GPU render engines are node-locked, online licensed or rental only.

brent3d
02-11-2019, 12:58 PM
They should stick with the LW2018 / 2019 CPU render engine in my opinion. There is much more important stuff to do.

Not everyone can or wants to render on GPU. CPU render engines run fine on any machine, including ultrabooks.

I own several CPU and GPU based render engine licenses but I prefer running long render jobs on CPU in LightWave or other CPU based render engines. They can run for many days at 100% utilization and I don't need to worry about killing my GPUs. Water cooling is not my thing and GPUs are not as durable as CPUs in constant full utilization. I use the GPUs only for lookdev (but VPR is great too in that regard), quick renders and simulations, not for animations. One reason I prefer doing stuff on CPU engines is that I can continue my 3D stuff even when I travel. Best of both worlds are hybrid render engines of course.

The LW render quality and VPR are great in most parts and render times are much better now in 2019 with the Nvidia denoiser.

Currently my most fun render engine is Cycles 4D with its newest update; it's incredible how fast it runs on CPU/GPU. The feature set, quality, integration and node editor are incredibly good. It's worth watching the Insydium videos to get an idea.

It could have been a good move in my opinion to integrate Cycles in LightWave, but it's a lot of work to do a seamless integration.

ProRender is not my thing; never used it, except for some tests. I still prefer Physical Render (which is also PBR-based), Vray or LightWave over it. All of them offer great features and quality with render times that are acceptable for me.

It took MAXON a lot of work to seamlessly integrate ProRender, and it's still not completely finished. The Cycles integration took Insydium several years to get to this level of quality.

Therefore I prefer if NewTek continues with the CPU render engine and works on other important parts of the software.

Good position, but if Newtek stays CPU it then continues to force users to pay extra for 3rd-party solutions, in a world where all the primary 3D suites are or will be fully integrated with path tracing and GPU/CPU support. That's like a store refusing to invest in the popular new up-and-coming brand item and allowing its customers to go next door to buy it.

TheLexx
02-11-2019, 01:05 PM
Here's a crazy convoluted scenario - import a fully animated LW scene into Cinema 4D and render in Redshift just to see what happens. I don't think anyone would ever do that, but is it possible in theory?

Rayek
02-11-2019, 01:06 PM
That new CPU renderer aligns better with standard PBR workflows. It is silly to dismiss this step forward. LW2019 introduces the bridge for UE. Clearly, NewTek is not losing the opportunity to take advantage of free game render tech.

Definitely. The Unreal bridge is a great stride forward, and makes Lightwave a good proposition to game devs and realtime archviz/etcetera, and I expect the next version to have some real improvements in the modeling department because of this.

As for the current CPU-only renderer? Don't know. It's a good CPU render engine, but to move forward it will need GPU support as well, because why would it attract new users (or keep existing ones) if they can use a GPU/CPU hybrid-based engine that delivers far faster results and better quality?

That said, I hope they are working on GPU support. The other option would be to completely drop the new 2018/19 render engine and adopt ProRender or Cycles, for example. It sounds insane, but sunk cost fallacies are all around us, and almost no one seems to learn from them.

BUT! It is possible the LW devs are on the ball, and adding GPU support as we speak. I haven't checked, but I'd imagine there are GPU libraries which may make that possible in a relatively "easy" way - or at least doable.

brent3d
02-11-2019, 01:09 PM
Definitely. The Unreal bridge is a great stride forward, and makes Lightwave a good proposition to game devs and realtime archviz/etcetera, and I expect the next version to have some real improvements in the modeling department because of this.

As for the current CPU-only renderer? Don't know. It's a good CPU render engine, but to move forward it will need GPU support as well, because why would it attract new users (or keep existing ones) if they can use a GPU/CPU hybrid-based engine that delivers far faster results and better quality?

That said, I hope they are working on GPU support. The other option would be to completely drop the new 2018/19 render engine and adopt ProRender or Cycles, for example. It sounds insane, but sunk cost fallacies are all around us, and almost no one seems to learn from them.

BUT! It is possible the LW devs are on the ball, and adding GPU support as we speak. I haven't checked, but I'd imagine there are GPU libraries which may make that possible in a relatively "easy" way - or at least doable.

Yup to all the above.

thomascheng
02-11-2019, 01:12 PM
I see more of a statement than a solution. With limited resources, it is still the same problem.

"it would be hard for Newtek to abandon CPU rendering and I doubt they have the resource to do both."

brent3d
02-11-2019, 01:19 PM
I see more of a statement than a solution. With limited resources, it is still the same problem.

"it would be hard for Newtek to abandon CPU rendering and I doubt they have the resource to do both."

"I see more of a statement than a solution". These posts have been very illuminating on both sides of the debate, if you can't see the solution though, as clearly as it's been stated, then maybe this isn't the conversation for you, it's not mandatory to partake.

Rayek
02-11-2019, 01:20 PM
Currently my most fun render engine is Cycles 4D with its newest (early access) update; it's incredible how fast it runs on hybrid CPU / GPU (but only in CUDA mode, OpenCL is half as fast). The feature set, stability, quality, Cinema integration and its node editor are incredibly good. It's worth watching the Insydium videos to get an idea.

It could have been a good move in my opinion to integrate Cycles in LightWave, but it's a lot of work to do a seamless integration.



This would have been my choice too, actually. Cycles has a very permissive license, is an excellent renderer that is actively developed, is proven in various production scenarios, is PBR, and would have suited Lightwave's nodal system perfectly well. They could have focused on a perfect GUI integration.

You are correct, though: it took the Cycles 4D developer quite some time to get it up to the current level. Although he was working all by himself.

Could have, would have, might have...

Marander
02-11-2019, 01:22 PM
Here's a crazy convoluted scenario - import a fully animated LW scene into Cinema 4D and render in Redshift just to see what happens. I don't think anyone would ever do that, but is it possible in theory?

If it's a LW2015 scene, it can be loaded natively in Cinema 4D (the LW2018 format is not supported in other applications), or you can use FBX. I do exchange scenes or objects between these programs; it's no problem.

But no render engine just produces beautiful images without the correct settings.

You would have to tweak all materials, lights / camera / render settings in order to make it look right if it's not just a clay render.

Edit: Nodal animation would need to be baked down beforehand, of course.

thomascheng
02-11-2019, 01:28 PM
"I see more of a statement than a solution". These posts have been very illuminating on both sides of the debate, if you can't see the solution though, as clearly as it's been stated, then maybe this isn't the conversation for you, it's not mandatory to partake.

No need to be condescending. Your solution is very simplistic, is what I'm saying. We all know it isn't that easy to make happen.

brent3d
02-11-2019, 01:41 PM
No need to be condescending. Your solution is very simplistic, is what I'm saying. We all know it isn't that easy to make happen.

Now who's being condescending?
Any dramatic change will take effort. Not one company mentioned in this thread as an example of doing this said it was easy... and I sure didn't say it was, but that doesn't mean the change is not necessary.

Marander
02-11-2019, 01:44 PM
This would have been my choice too, actually. Cycles has a very permissive license, is an excellent renderer that is actively developed, is proven in various production scenarios, is PBR, and would have suited Lightwave's nodal system perfectly well. They could have focused on a perfect GUI integration.
...
Could have, would have, might have...

Yep, correct, that would have been great and maybe even attracted Blender users.

It would have fit much better than ProRender because Cycles already offers hair shading, fire, smoke, sub-poly displacement, nodal shading and hybrid CPU / GPU rendering, with CUDA and OpenCL support.

Would have!!!

Rayek
02-11-2019, 01:53 PM
What I feel should be on their list next is getting the viewports of Modeler and Layout up to par with a PBR real-time based one, comparable to Unreal, Godot Engine, or (dare I say) Eevee.

It would make sense given the new Unreal bridge. And it would allow this PBR viewport tech to be used in the regular render pipeline (similar to Eevee). This would be a preferred course to sail, in my opinion. It would be pretty future-proof, with new GPUs now introducing raytracing and tech that was usually reserved for offline render engines.

brent3d
02-11-2019, 01:59 PM
Yep, correct, that would have been great and maybe even attracted Blender users.

It would have fit much better than ProRender because Cycles already offers hair shading, fire, smoke, sub-poly displacement, nodal shading and hybrid CPU / GPU rendering, with CUDA and OpenCL support.

Would have!!!

Yes, spot on.

brent3d
02-11-2019, 02:01 PM
What I feel should be on their list next is getting the viewports of Modeler and Layout up to par with a PBR real-time based one, comparable to Unreal, Godot Engine, or (dare I say) Eevee.

It would make sense given the new Unreal bridge. And it would allow this PBR viewport tech to be used in the regular render pipeline (similar to Eevee). This would be a preferred course to sail, in my opinion. It would be pretty future-proof, with new GPUs now introducing raytracing and tech that was usually reserved for offline render engines.

Dare.

thomascheng
02-11-2019, 02:15 PM
Now who's being condescending?
Any dramatic change will take effort. Not one company mentioned in this thread as an example of doing this said it was easy... and I sure didn't say it was, but that doesn't mean the change is not necessary.

Well, I apologize. It wasn't meant to be condescending. I just wanted to hear a deeper explanation on how Newtek can make that happen. Obviously, something needs to be sacrificed/delayed in development to put resources in this area.

raymondtrace
02-11-2019, 02:26 PM
Now who's being condescending?...

Well... you. :)


...I'm spoiled now by Octane/Blender's full integration...

You are ignoring what's going on in the market.

The solution is so clearly stated in Rayek's post; re-read it.

...if you can't see the solution, though, as clearly as it's been stated, then maybe this isn't the conversation for you; it's not mandatory to partake.

It appears that the premise of your "render challenge" is...


...Because of Autodesk's purchase of Arnold, path-traced GPU rendering is quickly becoming the standard. Where is Lightwave 2019 in all this?

Autodesk did not sufficiently develop path-traced GPU rendering. Therefore they had to buy the technology. Why would you challenge/expect NewTek to develop something even Autodesk did not perfect on their own? Why is this "debate" (there is no debate...nobody has denied the value of GPU rendering for particular applications) so important to you? Why would you think NewTek and its real customers are unaware of the topic? Is it possible that people choose CPU rendering for the valid reasons already posted in this thread? Maybe you should re-read it.

It's great that you found a 3D platform that works for you. You don't need to exert so much energy to sell GPL software. :)

brent3d
02-11-2019, 03:19 PM
Well... you. :)


Nice try, but I will refer you to Rayek's post since he said it best.

"I agree. The C4D developers realized they had to add GPU render tech to their software ASAP to remain relevant and competitive. I think their situation was comparable to Lightwave: an aging CPU based render engine while all the competition was already, or were going, GPU.

Instead of reinventing the wheel, Maxon decided to integrate ProRender, which was/is a smart move. It would have been an even smarter move for Lightwave's development: a mature combined GPU/CPU renderer out of the box, and fewer headaches over how to divide and strategize development resources. Existing assets could have been easily re-used (existing material libraries), and so on.

When Lightwave 2018 was released, I did not understand the lack of GPU rendering in the new render engine. Aside from first-release bugs and rough edges, I thought it was a great quality engine - but the competition offers far more flexibility with BOTH CPU and GPU support, and offers the same quality and beyond. 2018's new renderer was years behind the competition the moment it was released.

The Newtek team could have focused more on other long outstanding improvements, instead of starting a CPU renderer from scratch that would be years behind the competition when released in 2018. I still can't wrap my head around it. In particular seeing the development behind (almost) real-time GPU based rendering based on game render tech. Newtek could have really made a smashing comeback if the new render engine would have been a CPU/GPU hybrid one. Such a lost opportunity.

Still, water under the bridge. The only way forward now is to add GPU rendering to Lightwave, preferably combined CPU/GPU to keep being relevant in the semi-long term. Which is a VERY specialized coding job, and developers in this field are very hard to hire and keep hold of. So I see Newtek's Lightwave team going forward without GPU support because of the invested time and effort to develop their new CPU only renderer, even to the detriment of long-term survival (sunken cost fallacy). Many Lightwave users will have to invest in an alternative GPU renderer (Octane) to remain competitive in their work, and based on what I read here, many already have. Which marginalizes the built-in new 2018 / 19 render engine.

Modo is sort-of in the same boat, but can at least boast being one of the best modelers in the market.

Having said all this, I might be proven wrong, and the LW dev team is going to surprise us all with GPU support next year, or the year after that. The question is whether it will matter anymore to anyone at that time."

hrgiger
02-11-2019, 03:25 PM
Modo is sort-of in the same boat, but can at least boast being one of the best modelers in the market.




Pro Render is still in beta for Modo. The next beta should be available soon.

SBowie
02-11-2019, 03:34 PM
You know, this thread seemed to start out simple: 'How fast are your renders in LW2019?' It seems clearer now that this was a subterfuge to try to make a whole different point. :(

Honestly, I really don't care if anyone wants to regurgitate (or update) the endless 'gpu versus cpu' threads that have been hosted here, but at least let's be honest about it rather than try to sneak it in under the radar. Really, why not just be straight up about it in the first place? I think that's what gets up people's noses.

brent3d
02-11-2019, 03:34 PM
Well, I apologize. It wasn't meant to be condescending. I just wanted to hear a deeper explanation on how Newtek can make that happen. Obviously, something needs to be sacrificed/delayed in development to put resources in this area.

Cool beans; that's why I've been referring back to Rayek's earlier post, which said it best IMO. Only the devs know their roadmap and the challenges around this topic, though.

brent3d
02-11-2019, 03:41 PM
You know, this thread seemed to start out simple: 'How fast are your renders in LW2019?' It seems clearer now that this was a subterfuge to try to make a whole different point. :(

Honestly, I really don't care if anyone wants to regurgitate (or update) the endless 'gpu versus cpu' threads that have been hosted here, but at least let's be honest about it rather than try to sneak it in under the radar. Really, why not just be straight up about it in the first place? I think that's what gets up people's noses.

This has been a very good thread: a cool challenge with some surprising results, and a lot of great ideas and information shared (respectfully). The focus has been on Lightwave and has never strayed off topic, since we were dealing with CPU outcomes vs GPU. Many participated in the dialogue in good spirit, so what's the problem again?

SBowie
02-11-2019, 04:02 PM
... so what's the problem again?

The fact that there clearly seems to have been an underlying agenda which was not manifest in the OP. Bait and switch always ticks people off, and is completely unnecessary.

Chris S. (Fez)
02-11-2019, 04:05 PM
This seems to be a Lightwave CPU-shaming thread in the guise of "education" and "render challenge"...complete with Blender and Modo ads.

Blender users can't help themselves here or in any other forum.

brent3d
02-11-2019, 04:20 PM
The fact that there clearly seems to have been an underlying agenda which was not manifest in the OP. Bait and switch always ticks people off, and is completely unnecessary.

Too funny; everything you just stated did not occur in this great thread, but apparently we must be debating and discussing something we aren't supposed to. So kill the thread, nothing to see here, folks.

SBowie
02-11-2019, 06:42 PM
Too funny; everything you just stated did not occur in this great thread...

Sure, "Fake News" ... just keep telling yourself that. There are plenty of pro-GPU threads here, so clearly even though you're trying to convince me that the 'OT' in this one is just accidental, the fact is either way it doesn't mean anyone is debating something that isn't allowed. But I'm not the only one here who thinks this was a ploy. So the funny thing is that you could have just started a pro-GPU thread (as many others have done before) and no-one would have called you out for doing so ... and people would have made the same points pro and con - without thinking you were trying to be clever.

brent3d
02-11-2019, 06:57 PM
Sure, "Fake News" ... just keep telling yourself that. There are plenty of pro-GPU threads here, so clearly even though you're trying to convince me that the 'OT' in this one is just accidental, the fact is either way it doesn't mean anyone is debating something that isn't allowed. But I'm not the only one here who thinks this was a ploy. So the funny thing is that you could have just started a pro-GPU thread (as many others have done before) and no-one would have called you out for doing so ... and people would have made the same points pro and con - without thinking you were trying to be clever.

Wow, what they say about these forums is true. Good luck with that, but a great thanks to all the users who contributed their time, thoughts and ideas to the challenge and dialogue. Very good people.

Pepper Potts
02-11-2019, 08:03 PM
Sure, "Fake News" ... just keep telling yourself that. There are plenty of pro-GPU threads here, so clearly even though you're trying to convince me that the 'OT' in this one is just accidental, the fact is either way it doesn't mean anyone is debating something that isn't allowed. But I'm not the only one here who thinks this was a ploy. So the funny thing is that you could have just started a pro-GPU thread (as many others have done before) and no-one would have called you out for doing so ... and people would have made the same points pro and con - without thinking you were trying to be clever.

I'm not sure if some of the statements that you are making are actually fair to say, and I am referring to both of the statements that were made to brent3D. I have been a user of LW for years and I recently began including Blender in my pipeline. I am also a reader (not an active "commenter") on the LW forums. Like many others, I use them as a way to learn from multiple people and see the different opinions out there. The recent brent3D forum posts have been some of the most enlightening I've seen over the past few months. These posts have shown me how best to utilize LW with Blender and vice versa. It was not so long ago that LW users were on these forums asking ANYONE from Lightwave what was going on (for months, mind you) and not one response was given. So trying to shut down someone who seems to be genuinely starting a debate, not a fight, about both the pros and cons of the current LW seems crazy to me. He has pretty much given you an entire list of features that you may want to add. And I'm sure Newtek would much rather its forum users remember the productive debates and possible hopeful features that may come out of them instead of the rudeness of its forum moderators towards those users.

I've seen brent3d's work and I've noticed how long he has been a LW user. I suggest LW be careful about how you treat your users. At this rate I truly believe you will lose great users who would promote great work with this great software, simply because you are more worried about a forum debate than about using those debates to make your software better. No hate... just a thought.

raymondtrace
02-11-2019, 11:09 PM
...So to try to shut down someone who seems to be genuinely starting a debate, not a fight, about both the pros and cons of the current LW seems crazy to me...

It is indeed crazy, because there is no debate. There's no confusion about the benefits and limitations of GPU, either by NT or most of its customers. Those that want to render via GPU in LW can do so with external renderers, just as Brent is already doing with Octane for both LW and Blender. While NT does not yet offer GPU rendering natively, they've been working to make it easier to interact with external GPU renderers with each release.

I valued the information in this thread until some unnecessary intellectual posturing in post #48. Couple that with interactions observed in other threads and on other social media... and I can understand Steve's POV.

Photogram
02-11-2019, 11:14 PM
Changed the camera samples to 3 and lowered the GI rays to 16. The render came out really clean and nice.

My render time is 39.5 seconds with these settings.
Dual Xeon E5-2670 @ 2.66 GHz

Marander
02-11-2019, 11:31 PM
Render time 36.2 seconds on an aging i7 6C/12T.

My next workstation will most likely be something like a 32- to 64-core Threadripper, so these render times look promising!

I'm impressed with the LW2019.1 + NVidia Denoiser speed.

Without the denoiser the image would be awfully noisy, but that is a game changer for me.

I also rendered the scene in Cinema 4D's Physical Render and in V-Ray on CPU; it's not possible to reach these render times and quality using GI without a denoiser. V-Ray has one built in, but it sucks in my opinion.

It is only a clay render, so things can look different when using textures, reflections, refractions, transparency, subsurface, GGX/Beckmann shaders etc., but nevertheless I'm impressed.

Rayek
02-12-2019, 12:40 AM
Optimized scene from #24 https://forums.newtek.com/showthread.php?159156-Brent-s-Lightwave-2019-Render-Challenge&p=1564456&viewfull=1#post1564456

1m30s on my ancient i7 [email protected] and Lightwave 2019.

Quite impressive. When I tested 2018's rendering, I wasn't impressed by the render times needed to arrive at noiseless renders, but 2019 has matured and works well now.

OnlineRender
02-12-2019, 04:27 AM
i7-5930K - default scene: 3 min 35 sec
Optimized: 1 min 12 sec
Low end ("but passable"): 36 sec
-------------------------------------------
Octane: 24 sec
-------------------------------------------
GarageFarm: 2 sec


comparing GPU vs CPU is like having sex with a bathing suit on, pointless!

in the next 10 years most things will probably be done in the cloud, and I have yet to see the argument acknowledge that CPUs are actually getting pretty damn fast.
GPU rendering brings a whole new set of problems, initial startup costs for example... a decent PSU, a decent case or rack...

each to their own

TheLexx
02-12-2019, 04:53 AM
comparing GPU vs CPU is like having sex with a bathing suit on, pointless!

in the next 10 years most things will probably be done on the cloud
Including the sex, probably between Daz characters on Genesis 37 while we tap in wirelessly. Still, I guess we shouldn't knock it till we've at least tried it once.... :D

Qexit
02-12-2019, 05:38 AM
comparing GPU vs CPU is like having sex with a bathing suit on, pointless!

in the next 10 years most things will probably be done on the cloud, I have yet to see the argument that CPU's are actually getting pretty damn fast.
GPU rendering brings a whole new set of problems, initial startup costs for example ... decent PSU , decent case or rack...

each to their own

Very true about the costs. My LW PC is reliable but not exactly state-of-the-art. It has a 3GB Quadro K4000 graphics card that isn't really suitable for GPU rendering. It only has 24GB of DDR3 RAM and a pair of Xeon E5620 CPUs running at 2.4 GHz, so it can run LW2019 quite happily but doesn't let me get anything done in a hurry. I am hoping to replace the whole thing at some point this year, as it is now 5 years old, but with a budget that wouldn't cover any of the current top-spec gear. High-end graphics cards and GPU renderer licenses are simply not on my shopping list as I cannot afford them. I only have a K4000 now because I picked it up for a ridiculously low price as a refurbished unit four years ago. Nice gear is nice to have... but not everyone has access to it :)

OlaHaldor
02-12-2019, 06:08 AM
Here's a crazy convoluted scenario - import a fully animated LW scene into Cinema 4D and render in Redshift, just to see what happens. I don't think anyone would ever do that, but it is possible in theory?

When I studied 3D a few years ago we had to use Maya for modeling, rigging, and animation. But I hated the look of Mental Ray, so I moved everything to LightWave and rendered with Octane. ;D
Anyone could do that.



Re the "ads for Modo"... It's not that I wanted to use Modo for the test render with Octane, so it wasn't meant as an ad. I stated I cannot open the file and render with Octane in LW2019 trial. Nor could I export.. Whether I had used Octane in LW or Modo really doesn't matter. It'd still be Octane. Sorry if I stepped on some toes by mentioning the enemy...

TheLexx
02-12-2019, 07:05 AM
When I studied 3D a few years ago we had to use Maya for model, rig, animation. But I hated the look of Mental Ray, so I moved everything to LightWave and rendered with Octane. ;D
Anyone could do that.

I guess I was just curious about the speed differences between Octane and Redshift, with the thought that LW could perhaps also access Redshift (can't let C4D have all the fun!). Interesting that in theory LW will render in RS. Good that LW has three GPU options, with one being free.

If Marander does manage a test of LW native with a 64 Core Threadripper, that would be interesting too.

:)

mummyman
02-12-2019, 07:18 AM
I'm slowly doing this, going from LW to Maya to render in Redshift. Redshift is amazingly fast for me, and very VERY similar to LW's new renderer for me to understand. Being able to bake out instances using OD Tools is fantastic. Sadly... I don't have much to show for a comparison test. Loving this thread. Hopefully I can do some tests down the road. But something that's still slightly grainy and crawly in LW with a 1 min 45 sec render can be pretty damn clean in about 30 seconds in Redshift. To me it might be worth working in LW and converting to FBX / baking to use RS. Sorry for going off-topic.

beverins
02-12-2019, 07:52 AM
Just as a note about the CPU vs GPU thing, especially as related to Arnold...

Arnold is still mostly CPU bound - we use it at our school here, and while the GPU helps with iterating the lighting in the viewport... it completely fails with mayabatch.exe / render.exe on a render farm.

Also, one note about OptiX - Nvidia is working on it, but in its current state the denoising actually has TWO problems: in addition to not being temporally aware, it also makes the final image unsuitable for compositing, even if you can live with the shimmering denoise. V-Ray is much the same.

Thanks for the scene - I'll try it on some varied hardware when I get a chance :-)

Cageman
02-12-2019, 08:27 AM
LightWave 2019 CPU + Houdini Mantra are great if you have a big farm like we have. I mean, the potential to have over 400 rendernodes overnight (usually it is more around 200 mark) turns things around quite quickly. There is no GPU solution for us that would be cost-effective to replace that farm.

1) Octane has a very shady licensing model that I do not like from a studio perspective.
2) Redshift cost 500-600 USD / license + you have to get some fairly expensive hardware to go with each license.
3) Out of Core features, if used extensively (large complex scenes with high-res textures, millions of instances, and 10-20 fully deforming characters with hair etc.), have a tendency to make GPU rendering go a tad slower than its original potential, and in those cases higher-end CPUs start to catch up.

The LightWave 2019 upgrade was $395 / seat with pretty much unlimited render nodes.

So, those are some of the reasons I see CPUs being winners for our situation.
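
For what it's worth, here is a rough back-of-the-envelope sketch of that licensing math in Python. It is illustrative only: the node and seat counts below are hypothetical placeholders, while the per-unit prices are the ones quoted in this thread.

# Back-of-the-envelope farm cost comparison using the per-unit figures quoted in this thread.
# The node and seat counts are hypothetical; adjust them to your own situation.
nodes = 200                  # hypothetical number of overnight render nodes
artist_seats = 20            # hypothetical number of artist workstations
redshift_license = 550       # USD, roughly the "500-600 USD / license" quoted above
gpu_per_node = 600           # USD, a US-market 2080 price quoted elsewhere in this thread
lightwave_seat = 395         # USD per seat, with render nodes effectively free

gpu_route = nodes * (redshift_license + gpu_per_node)
cpu_route = artist_seats * lightwave_seat   # existing CPU workstations double as the farm

print(f"GPU route: ${gpu_route:,}  vs  CPU route: ${cpu_route:,}")
# With hardware that is already on the floor, the CPU route is by far the smaller new spend.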

rustythe1
02-12-2019, 09:15 AM
LightWave 2019 CPU + Houdini Mantra are great if you have a big farm like we have. I mean, the potential to have over 400 rendernodes overnight (usually it is more around 200 mark) turns things around quite quickly. There is no GPU solution for us that would be cost-effective to replace that farm.

1) Octane has a very shady licensing model that I do not like from a studio perspective.
2) Redshift cost 500-600 USD / license + you have to get some fairly expensive hardware to go with each license.
3) Out of Core features if used extensively (large complex scenes with high res textures, millions of instances, and 10-20 fully deforming characters with hair etc) has a tendency to make GPU rendering go a tad bit slower than its original potential, and in those cases, higher end CPUs starts to catch up.

LightWave 2019 upgrade was $395 / seat with pretty much unlimited rendernodes.

So, those are some of the reasons I see CPUs being winners for our situation.

And this was exactly one of my points: the cost is even worse for us in Europe. In the States I believe you can pick up a 2080 for around $600, if I am to believe forum posts, but here in the UK they can set you back nearly £1,600 - that's over $2,000. Intel chips seem to go the other way: I only paid just over £1,200 for my 7980 (I think in the States they are $2,000 or more). Don't quote me on anything, it's just hearsay, but GPUs are well overpriced in Europe compared to the States and the East, and then, like you say, there is the software cost itself.

brent3d
02-12-2019, 10:23 AM
It is indeed crazy, because there is no debate. There's no confusion about the benefits and limitations of GPU, either by NT or most of its customers. Those that want to render via GPU in LW can do so with external renderers, just as Brent is already doing with Octane for both LW and Blender. While NT does not yet offer GPU rendering natively, they've been working to make it easier to interact with external GPU renderers with each release.

I valued the information in this thread until some unnecessary intellectual posturing on post #48. Couple that with interactions observed in other threads and in other social media ...and I can understand Steve's POV.

If you aren't interested or don't agree with the subject of this thread then simply don't take part in it; no one is forcing you to read.
No need to try to slander me, since anyone with a mouse (or trackball) can easily find my training videos and see my opinions and perspectives on LW, Modo, and Blender over the years on my YouTube channel, and I'm in the Octane/Blender group on FB as well. On my YouTube channel, if you scroll way down, you will see that I uploaded, for free, some of the Lightwave lectures and demonstrations I created for the classes I taught at Howard University when I ran their 3D Art and Animation program; we had a full Lightwave lab and a 20-machine ScreamerNet farm that I ordered, set up, and arranged. I'm glad to say I taught hundreds of students Lightwave during those years, which I'm sure added to Newtek's revenue and hopefully reputation at that time. I just had to add these bits since I've always argued for Lightwave: as Cinematic Director at Firaxis Games during Civilization III production, as a professor, and as an Art Director for mobile games. But hey, maybe I don't like Lightwave and maybe I'm just a troll trying to create division on the Lightwave forum, go figure. Remember, you don't have to read any of this; just take a bite of the cookie and everything will be as right as rain.

I would like to personally thank Rayek for his post #53, which I believe caps the discussion, OlaHaldor for his unique perspective coming from the Modo side, and the countless users who participated in this exchange with their thoughts and ideas; it felt like the old days.

Thank you for mentioning my post #48! When users can answer these important questions for themselves they will have a clearer understanding of where things are going in the short term. But once again, no one is forcing anyone to read these questions or to answer them, and they are not meant as a snarky response or any form of attack on one's intelligence.

brent3d's post #48:
"When is the last time you've seen Autodesk marketing Mental Ray?
What was their latest big company purchase in regards to rendering?
What rendering software was just integrated into Maya, and soon 3ds Max?
What did that software's developers announce 10 months ago they had achieved with the renderer, although they were six years behind other companies?
What is V-Ray, and what does V-Ray now support and why?
What does Nvidia's RTX stand for, and which 3D packages have announced upcoming support for it?
What branches of the government does Nvidia contract with?
There are no fruits, only a basket."

My Youtube Channel:
https://www.youtube.com/user/alleyne3d/videos

SBowie
02-12-2019, 10:24 AM
I'm not sure if some of the statements that you are making are actually fair to say.

No offense taken, but let's review what I wrote:


There are plenty of pro-GPU threads here: Clearly true (I assume something true is also 'fair')
(so no-one) is debating something that isn't allowed: True; see above.
I'm not the only one here who thinks this was a ploy: Also true.
you could have just started a pro-GPU thread (as many others have done before) and no-one would have called you out for doing so: True again.

So I'm not seeing it, sorry. Any valuable information in this thread would have accrued just as easily in a straight-up gpu-versus-cpu thread without the bait and switch approach.

SBowie
02-12-2019, 10:28 AM
Thank you for mentioning my post #48! When users can answer these important questions for themselves they will have a clearer understanding of where things are going in the short term.

And if you had started with that, rather than using a call for render times as a way to slide into what you really wanted to discuss, I seriously doubt anyone would have criticized.

brent3d
02-12-2019, 10:34 AM
And if you had started with that, rather than using a call for render times as a way to slide into what you really wanted to discuss, I seriously doubt anyone would have criticized.

If you don't like the challenge, don't participate; if you don't like the dialogue, then don't read it. So unless I broke the forum rules, please keep your trolling, snarky opinions to yourself. Hard to believe you read my post and still take time out to respond to me like this, too funny.

rcallicotte
02-12-2019, 10:43 AM
Brent, I've been watching you post all over the place (Facebook, here, etc.) and notice you only have 338 posts here... so I find your response and attitude and arrogance revolting.

brent3d
02-12-2019, 10:49 AM
Brent, I've been watching you post all over the place (Facebook, here, etc.) and notice you only have 338 posts here...so, I find your response and attitude and arrogance is revolting.

Posts on Facebook? Oh, when I schooled trolls last week after they tried to troll my video post... hmm. Once again with the slander, since everyone knows I rarely post my opinion anywhere except when I make a training video... too funny and nice try... troll.

OnlineRender
02-12-2019, 11:01 AM
If you don't like the challenge don't participate, if you don't like the dialogue then don't read it. So unless I broke the forum rules please keep your trolling snarky opinions to yourself. Hard to believe you read my post and still take your time out to respond to me like this, to funny.

https://media.giphy.com/media/tDpnUNp2Zo1Xy/giphy.gif

raymondtrace
02-12-2019, 11:13 AM
If you aren't interested or don't agree with the subject of this thread...

What exactly is the subject of this thread? And what slander do you perceive?

I only entered this thread to question your dismissal of Russell, who clearly indicated he's got his bases covered with CPU and GPU rendering, just like you. I appreciate you have history and experience. So does Russell. There's no need to swing your bat around.

Your repeated copy/paste of entire posts, assuming others are not capable of reading or scrolling back up the page, is insulting.


I would like to personally thank Rayek for his post #53, which I believe caps the discussion...

If you truly read Rayek's post, you may observe more humility about uncertainty and fewer dismissive absolutes.

Take a breath. I'm not your enemy. I'm just questioning your approach to a subject that most everyone here already understands.

brent3d
02-12-2019, 11:13 AM
https://media.giphy.com/media/tDpnUNp2Zo1Xy/giphy.gif

Who cares what you think of me defending my position; contribute to this thread's topic instead, because it looks like you just want to troll.

SBowie
02-12-2019, 11:14 AM
If you don't like the challenge, don't participate; if you don't like the dialogue, then don't read it. So unless I broke the forum rules, please keep your trolling, snarky opinions to yourself. Hard to believe you read my post and still take time out to respond to me like this, too funny.

Hard to believe you don't actually get the point, but that's as may be. Learn, don't learn, all the same to me.

brent3d
02-12-2019, 11:15 AM
What exactly is the subject of this thread? And what slander do you perceive?

I only entered this thread to question your dismissal of Russell, who clearly indicated he's got his bases covered with CPU and GPU rendering, just like you. I appreciate you have history and experience. So does Russell. There's no need to swing your bat around.

Your repeated copy/paste of entire posts, assuming others are not capable of reading or scrolling back up the page, is insulting.



If you truly read Rayek's post, you may observe more humility of uncertainty and less dismissive absolutes.

Take a breath. I'm not your enemy. I'm just questioning your approach to a subject that most everyone here already understands.

Oh shut up, you are not on topic, but you are definitely trolling. The trolls have been unleashed...lol

SBowie
02-12-2019, 11:16 AM
Oh shut up, you are not on topic, but you are definitely trolling. The trolls have been unleashed...lol

And with that, and the abandonment of civility, off we go to moderated posting. This was really unnecessary.

Bill Carey
02-12-2019, 11:16 AM
The rest of us appreciate your steady hand and apparently limitless patience Mr. Bowie.

brent3d
02-12-2019, 11:17 AM
Hard to believe you don't actually get the point, but that's as may be. Learn, don't learn, all the same to me.

Hard to believe you actually work for Newtek.

SBowie
02-12-2019, 11:19 AM
I can hardly believe it myself sometimes. (Pinch) Ouch ... yep, still here, after all these years. :)

TheLexx
02-12-2019, 11:24 AM
Brent, elsewhere (I stress not here) you were subjected to some pretty vile trolling which could have come straight out of Mein Kampf, and I can definitely understand you feeling "got at" about that (because you actually were being got at). I think something residual may be coming over here as a result, which I never saw once on your wonderful YouTube channel. Maybe things look different in writing here - if we were all sat round a massive table, maybe we would all perceive each other a little better (though I admit to hiding under that table right now). :)

Ryan Roye
02-12-2019, 11:58 AM
The reason why the sentiment of "comparing apples to oranges" is accurate is because it really is comparing two very different systems optimized for two very different tasks. It'd be like judging a fish on its ability to climb trees.

Try rendering volumetrics in Octane... you'll find that Lightwave produces results much more quickly and with a much higher level of control. It'd be an inefficient use of my time to try and explain why the comparison between LW and Octane isn't just "this is faster so use program X over Y"; I can only ask that you do your research and try to determine why myself and others would all come to the same conclusion.

I say this as someone who uses Octane in my workflow as well. It's a great renderer for scenes that require good GI, not so great for any non-photoreal stuff like visual effects or styled imagery.

SBowie
02-12-2019, 12:00 PM
OK, listen up peeps ... movin' on. Despite the questionable genesis of this thread, the main conversation in it is not particularly objectionable, and I don't think anyone really has a problem with it. Apart from the moderation already imposed (regarding which I refer anyone who is interested to the moderation policy thread), I don't see any reason it can't continue - without unhelpful and irrelevant personal commentary. That which does not fit this model will simply be moderated without further ado.

brent3d
02-12-2019, 12:13 PM
The reason why the sentiment of "comparing apples to oranges" is accurate is because it really is comparing two very different systems optimized for two very different tasks. It'd like judging a fish on its ability to climb trees.

Try rendering volumetrics in Octane... you'll find that Lightwave produces results much more quickly and with a much higher level of control. It'd be an inefficient use of my time to try and explain why the comparison between LW and Octane isn't just "this is faster so use program X over Y", I can only ask that you do your research and try to determine why myself and others would all come to the same conclusion.

I say this as someone who uses Octane in my workflow as well. It's a great renderer for scenes that require good GI, not so great for any non-photoreal stuff like visual effects or styled imagery.

Yes, I've liked being able to manipulate looks and renders within the standard raytrace engine. Path tracing is for photoreal, or close to it.
I don't agree on the Octane volumetric stuff, though; see my example below:

https://youtu.be/l2-fXCJiX6o

MarcusM
02-12-2019, 12:14 PM
Show us how fast your CPU really is in Lightwave 2019 and post your video results in this brute force render comparison. The test scene is in the video description. Let the games begin!

https://youtu.be/jx0q1CGP4vg

144061

I personally like render challenges. Users can then play with their own settings and learn more from others.

If you do more in the future, I have a request: try to use better, more optimized render settings, so as not to teach bad habits, especially before making YouTube videos which can reach a wide range of recipients.

You could also use your own models. You know, to show off something self-made.

Cageman
02-12-2019, 12:19 PM
No, addressing the CPU argument, not the use of Unreal or anything like that.

No, you didn't. You simply _ignored_ the fact that CPUs are taking another big step forward in computing power. No one here in this thread is saying GPUs aren't faster for rendering... but you seem to deny that CPUs are making some great forward strides in this area.

Cageman
02-12-2019, 12:32 PM
If you aren't interested or don't agree with the subject of this thread then simply don't take part in it, no one is forcing you to read.

So, if I say that 400 CPUs will be faster than anything you can throw together as a single person (machines with GPUs etc.), then I should not take part in this thread?

I mean... I work at a Ubisoft studio that has more than 400 pretty good workstations. I would be mad to do anything less than have those in a render farm overnight, even if I can only get 200 of them... instead I should go GPU... is that what you are suggesting?

Nicolas Jordan
02-12-2019, 02:27 PM
No, you didn't. You simply _ignored_ facts that CPUs are taking another big step forward in computing power. No one here, in this thread, are saying GPUs aren't faster for rendering... but you seem to deny the fact that CPUs are doing some great forward movements in this area.

I totally agree CPUs are a very good route to go for many, depending on your needs. I'm sure we will see CPUs make even larger steps forward in the next couple of years. I give AMD all the credit for this, since they are the ones who started it with the Threadripper. Intel has made some big strides as well, but only as a reaction in an attempt to remain competitive. If the Threadripper had not come to market, GPU-based rendering would look more attractive than ever.

3dslider
02-12-2019, 02:51 PM
Yes, I've liked being to manipulate looks and renders within the standard raytrace engine. PathTracing is for Photoreal or close to it.
Don't agree though on the Octane volumetric stuff though see my example below:

https://youtu.be/l2-fXCJiX6o

Cool video :)

But I don't know; honestly, and no offense, I prefer LW's volumetrics, maybe just because they render more "polished". Otherwise it would need a closer comparison of both.

Cageman
02-12-2019, 03:10 PM
I totally agree CPUs are very good route to go for many depending on your needs. I'm sure we will see CPUs make even larger steps forward in the next couple years. I give AMD all the credit for this since they are the ones who started it with the Threadripper. Intel has made some big strides as well but only as a reaction in an attempt to remain competitive. If the Threadripper had not come to market GPU based rendering would look more attractive than ever.

AMD are really innovating now... their Threadripper series is only getting started, and still they manage to keep the costs quite low compared to Intel. I am very impressed, indeed.

Cageman
02-12-2019, 03:13 PM
I personally like render challenges. Then users can play with own settings and learn more from others.

If you will do more in future I have request. Try do better render settings, more optimized, to not teach bad habits. Especially before make youtube videos whitch can go to wide range of recipients

You could also use your own models. You know, to brag something self made.

I have to agree. The LW scene was one of the sloppiest and most un-optimized scenes I've seen in years, especially when LW2019 has OptiX.

Cageman
02-12-2019, 03:22 PM
Also... this... multitasking that doesn't hog your computer. In this video, he is encoding a queue of videos while working with Maya+Arnold and recording a video of it.

If this had been Octane or Redshift, your computer would have been hogged quite badly unless you went into the task manager and adjusted process priorities...

https://www.youtube.com/watch?v=ceihAHySwdw&t=

lardbros
02-12-2019, 03:34 PM
There are a few inconsistencies I've read in your posts, which kind of show a lack of understanding, or something.

Autodesk don't market Mental Ray because Nvidia stopped developing it. Instead, Nvidia are focusing on Iray and their AI stuff.

Also... Arnold using Nvidia OptiX means nothing at all. The Arnold renderer has its own built-in denoiser called Noice. Do you know that LW 2019 has the OptiX denoiser built in too??
It just seems like you're skipping the comments which are valid but which you disagree with, yet the comments which back up your view/opinion are all okay.
It would have been nicer if you'd spent a bit more time weighing up the benefits of GPU versus CPU and vice versa. Believe it or not, there are benefits to using CPU over GPU too, you know.

hrgiger
02-12-2019, 03:38 PM
So, if I say that 400 CPUs will be faster than anything you can throw together as a single person (machines with GPUs etc), then, I should not take part in this thread?

I mean... I work at a Ubisoft studio that have more than 400 pretty good Workstations. I would be mad if I would do anything less than have those in a renderfarm overnight, even if I only can get 200 of them... instead I should go GPU... is that what you are suggesting?

I'm not sure that a 400-CPU render farm has much to do with your average user, so I'm not sure why the comparison.

But on that subject, you mentioned earlier that it wouldn't be cost effective to replace the render farm with a GPU solution. It's only not cost effective because you already have a system in place that is likely working. But it would be more cost effective than, say, replacing the whole farm with a newer CPU farm, and you certainly wouldn't need anywhere near 400 GPUs to match or even beat the CPU network.

You also mentioned that CPUs are making notable advancements. They are, but they're not keeping up with Moore's law; not even the newest Ryzens are. GPUs, meanwhile, are advancing even faster. I have one of the newer Threadripper chips, but I only bought it for more all-around performance in my apps. The future is GPU.

3dslider
02-12-2019, 03:52 PM
The future is GPU.

Not sure, but GPU is quite fast; the problem is that my Nvidia GTX 780 is a bit old. When I compare in Blender, the GPU renders as a single tile/thread while the CPU uses more threads and works fast too. We can't deny that the CPU still has a future, as well as the GPU.

cresshead
02-12-2019, 04:07 PM
Hi, I don't post much here, but I'll add my general input on the CPU speed test conversation.

I saw some render results data for a renderer that can use CPU or GPU, and in those results an AMD 16-core Threadripper (32 threads) was on par with an Nvidia 1080 or 1080 Ti, depending on the scene rendered.
So for me that's a good 'ballpark' for where each is at right now.

I found this video interesting on how both the CPU and the GPU development curves will flatten out soon; the gains will be TINY.
Once a CPU reaches 64 cores the curve flattens out.
Once a GPU reaches 128 RT cores... not much is going to get better either, even with 4,000 RT cores...

Long story short, it might be multiple CPUs again... either in one PC or on a network,
or multiple GPU cards hung off the main workstation.

So... the future needs another shift in the next 5 years.



https://www.youtube.com/watch?v=eJBOU23L720

cresshead
02-12-2019, 04:10 PM
Not sure but GPU is quite fast but the problem on my nvidia gtx 780 is a bit old when i compare on Blender it renders one thread and in CPU core makes more threads that work fast too. We can not deny the CPU core still has a future as well GPU.

Going a bit off topic here, but on GPU, tile size impacts render times quite a bit... don't use CPU render tile sizes like 48x48... go much bigger, like 256x256, when using GPU.
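
As a minimal sketch of that tip, assuming Blender Cycles with the 2.7x/2.8x-era Python API (where tile sizes are still exposed, and a CUDA/OpenCL compute device is already enabled in the user preferences; later Blender versions removed manual tiles):

import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'   # render with the GPU rather than the CPU
scene.render.tile_x = 256     # large tiles suit GPU rendering
scene.render.tile_y = 256
# For CPU rendering, smaller tiles such as 32x32 or 48x48 usually perform better.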

Cageman
02-12-2019, 04:24 PM
I'm not sure that a 400 cpu renderfarm has much to do with your average user, so not sure why the comparison.

The very simple reason is that most rendering, today, is not done on GPUs when it comes to VFX or pre-rendered Cinematics.

Do you honestly believe that WETA will remove their 40k-CPU park just because Brent had a "vision"? :D

EDIT: The problem Brent has is that he totally lacks awareness of what is really going on around him. Big VFX will not move away from CPU anytime soon.

It is important to show him that his view is based only on his own needs, yet he then tries to tell everyone else that that is the truth for all needs. Which it isn't. That is all I wanted to add to this thread.

hrgiger
02-12-2019, 04:36 PM
The very simple reason is that most rendering, today, is not done on GPUs when it comes to VFX or pre-rendered Cinematics.

Do you honestly believe that WETA will remove their 40k CPU park just because Brent had a "vision"?. :D

No, of course, nothing to do with Brent. But don't think today's existing pipelines reflect tomorrow's decisions. You're going to see more systems at the high end leveraging both CPU and GPU together.

Nicolas Jordan
02-12-2019, 04:43 PM
hi, i don't post much here but i'll add my general input on the cpu speed test conversation.

I saw some render results data for a renderer that can use cpu or gpu and in those results a AMD 16core threadripper (32 thread) as on par with a nvidia 1080 or a 1080ti depending on the scene rendered out.
so for me that's a good 'ballpark' on where each is at right now.

I found this video interesting on how both the cpu and the gpu dev curves will flatten out soon, the gains will be TINY.
Once CPU reaches 64 cores the curve flattens out.
Once the GPU reaches 128 RT cores ...not much gonna get better either with 4000 RT cores...

long story short might be multiple cpu's again...either in 1 pc or on a network
or multiple gpu cards hung off the main workstation.

so... the future needs another shift in the next 5 years.



https://www.youtube.com/watch?v=eJBOU23L720

I guess that would mean that my 2990WX would have similar processing power to 2 x 1080 cards.

Cageman
02-12-2019, 04:43 PM
hrgiger: I edited my message while you were writing your response. The section I added was about Brent himself and how I view his standpoint.

cresshead
02-12-2019, 04:51 PM
I guess that would mean that my 2990WX would have similar processing power to 2 x 1080 cards.

Yeah, your 32-core AMD (64 threads) should be similar to two 1080 or 1080 Ti cards; pretty good result.

3dslider
02-12-2019, 04:52 PM
I guess that would mean that my 2990WX would have similar processing power to 2 x 1080 cards.

I hear the RTX cards are fast even with real-time ray tracing; would that mean an RTX is faster than 64 threads???

3dslider
02-12-2019, 04:57 PM
yeh, your 32 core AMD (64threads) should be similar to two 1080 or 1080ti cards, pretty good result.

I suspected as much, as I said in my post above.

PS: thank you for your tip about tile size for GPU ;)

hrgiger
02-12-2019, 05:10 PM
yeh, your 32 core AMD (64threads) should be similar to two 1080 or 1080ti cards, pretty good result.

Which almost works out, because the 32-core AMD is about 2.5 times more expensive than what I paid for my 1080 Ti ($700).
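
A quick sanity check of that price-per-throughput ballpark (illustrative only, using just the figures quoted in this exchange and cresshead's rough equivalence of one 32-core Threadripper to two 1080 Ti cards):

gpu_price = 700                 # USD, hrgiger's 1080 Ti
cpu_price = 2.5 * gpu_price     # ~1750 USD for the 32-core Threadripper
two_gpus = 2 * gpu_price        # ~1400 USD for two 1080 Ti cards
print(f"CPU route: ${cpu_price:.0f} vs two-GPU route: ${two_gpus:.0f}")
# Similar throughput either way, with the GPU pair roughly 20% cheaper at these prices.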

probiner
02-12-2019, 05:59 PM
I don't think things are literal when it comes to GPU and CPU; you also have to consider other parameters that impact workflows, for example biased vs unbiased sampling.

Octane's offering is at the opposite end of the spectrum from the old LW renderer: CPU biased vs GPU unbiased. Octane was a radical improvement for visualization and animation users because of the raw speed and the reliable GI on animated geometry.

If you move to film, though, or heavy scenes with FX, you might go for different types of renderers, like Arnold, RenderMan and Mantra.

As I understand it, LW's new render engine is aimed at Arnold-style practices. And that's great, because the overall experience of Arnold users is: short setup time for the user, let the machine chew on the renders, get back amazing image sequences with few problems. It won't be as fast as GPU, but it's reassuring. This may not serve many people in visualization who have to spit out a lot of work fast, but it's certainly an upgrade for animation and FX people.

Then you also have Redshift and classic V-Ray which, while on GPU and CPU respectively, have similar sampling settings and will demand a lot of tweaking and scene optimization from the user in order to balance quality and render times.

But hey... I rarely do renders... So, looking forward to others' insight.


As for the usual clique suppressing outsiders... Most can't make a properly articulated video or share some insight with fair diction, or without gasping for air. So you do you, and let them have their own circle-jerk in their ebony tower.

I love crossover discussions, and your videos have some useful nuggets and food for thought even though I'm not an Octane or full-fledged Blender user. They are part of something people used to foster more: discovery through dialogue and effort put into the presentation of ideas, even if only to be corrected...

Cheers

safarifx
02-12-2019, 06:41 PM
more CLICK BAIT

http://neotek-laboratories.de/wp-content/uploads/2019/02/command_web.jpg

https://en.wikipedia.org/wiki/Clickbait

snip safx

safarifx
02-12-2019, 06:53 PM
close up CLICK BAIT

http://neotek-laboratories.de/wp-content/uploads/2019/02/command3_web.jpg

https://en.wikipedia.org/wiki/Clickbait

snip safx

safarifx
02-12-2019, 07:13 PM
very close up CLICK BAIT

http://neotek-laboratories.de/wp-content/uploads/2019/02/command4_web.jpg

https://en.wikipedia.org/wiki/Clickbait

snip safx

safarifx
02-12-2019, 08:28 PM
brent,


It is better to learn a software package first and then make videos.

Your comparisons have nothing to do with industry practice.
I do not mind that you are a fanboy, but if you try to set the Blender and Lightwave communities against each other with unfair means and unfair comparisons, I will destroy you and your CLICK BAIT.

https://en.wikipedia.org/wiki/Clickbait

Leave something like that to the professionals.
You would do better to learn LightWave.
That will bring you more knowledge, and you will not steal the community's precious time with nonsense and click bait videos.
So: delete yourself from YouTube and do not waste your time. You do not become a good artist by talking. If you want to talk, take up the profession of entertainer.

So first become a master, then talk about mastery. Not the other way around.


Thank you very much.

http://neotek-laboratories.de/wp-content/uploads/2019/02/command5_web.jpg

snip safx

SBowie
02-12-2019, 08:35 PM
I'd like to rein in the personal comments, please. Debate the subject all you wish.

safarifx
02-12-2019, 09:03 PM
I'd like to rein in the personal comments, please. Debate the subject all you wish.


I have said everything there is to say. (from my point of view)

snip safx

Nicolas Jordan
02-12-2019, 10:42 PM
I hear even RTX work fast with realtime raytracing, it would mean of speed from RTX make more fast than 64 threads ???

I think you're referring to the real-time rendering it can perform in video games, if the game is specifically designed to take advantage of that tech in the RTX card. That's different from using an RTX card for rendering in Octane.

Marander
02-12-2019, 11:10 PM
I'd like to rein in the personal comments, please. Debate the subject all you wish.

Ah but these horrible and racist click bait images and personal comments from safarifx are not moderated. They come from a different corner, I see.

Chris S. (Fez)
02-12-2019, 11:11 PM
Relax Safarifx. No need for posts like that.

Brent, I use Max. Arnold is presently CPU only.

You admit your bias in so many words: "To me 2015 "is" Lightwave, this other stuff they are trying to sell just doesn't seem to make sense to me."

I still believe you started the thread, consciously or unconsciously, to reaffirm your bias and as an exercise in "educating" people who are satisfied if not genuinely enthusiastic with the CPU rendering in LW.

Why would someone reasonably suspect you of bias and an agenda beyond a mere "render challenge"?

1) You posted a hopelessly poorly optimized Lightwave scene for comparison. When someone mentioned rather reasonably that it makes little sense NOT to optimize a render for LW CPU then you accused him of trolling and being a "sociopath". Savagely attacking a user for rather meekly questioning the fairness of your "challenge" seemed to confirm you were intentionally setting up LW to fail.

2) You did not respond in any meaningful way to Matt, Ryan and other professionals making the case that the "challenge" was biased to begin with.

3) You said you posted the challenge to get the community to "start talking about GPU Rendering" and accused the LW community of "ignoring what is going on in the market". Where have you been? The topic has been discussed endlessly in this forum. Nobody has said "the world is still CPU". Nobody has said that GPU rendering would not be welcome in Lightwave ASAP. Nobody.

4) You broke the forum rules multiple times:

Here are the forum rules: "The following are not considered professional or civil discourse and thus are not allowed: Promotional messages and material for competing products"

Your comments are clearly promoting competing products:

"Native GPU rendering is the way to go, I'm spoiled now by Octane/Blender's full integration."

"I had a project once that actually required a hard edge shadow/graphic look so back to LW's rayracer I went...lol.
EVEE will be a game changer since it's actually in a 3D application and not just a game editor."

"For clarity and still in BETA:" Here you post a youtube link promoting competition.

"Dare."

If you want to discuss LW and Octane then that is fine on this forum. Promoting other software on this forum is a clear breach of rules.

Given all the above I still believe it is reasonable to suspect your motives. But if you sincerely posted this "challenge" in good faith then I apologize :beerchug:

- - - Updated - - -


Ah but these horrible and racist click bait images and personal comments from safarifx are not moderated. They come from a different corner, I see.

Agreed. Get rid of them

Marander
02-12-2019, 11:14 PM
brent,


It is better to first learn a software and then make videos.

Your comparisons have nothing to do with industry practice.
I do not mind that you are a fanboy, but if you try to blaze the blender and lightwave community against each other with unfair means and unfair comparsions I will destroy you and your CLICK BAIT.

https://en.wikipedia.org/wiki/Clickbait

Leave something like that to the professionals.
You prefer to learn LightWave.
This will bring you more knowledge and you will not steal the community's precious time with nonsens and click bait videos.
so. delete yourself on youtube and do not waste your time. You do not become a good artist when you talk. if you want to talk take up the profession of the entertainer.

So first become master. then talk about the championship. not the other way around.

Thank you very much.

snip safx

Wow, these words from someone whose videos in "English" are absolutely horrible to watch or listen to and provide no value to the viewer. At least not to me, whereas I learned some things from Brent's well-done videos.

I agree with Chris S. (Fez): some posts seem biased, but overall I learned some things from this thread and it fueled my interest in the LW2019 render engine.

Rayek
02-12-2019, 11:54 PM
Ah but these horrible and racist click bait images and personal comments from safarifx are not moderated. They come from a different corner, I see.

Agreed, ill-conceived stereotypical imagery, and aggressive as well. Please remove.

rustythe1
02-13-2019, 12:39 AM
yeh, your 32 core AMD (64threads) should be similar to two 1080 or 1080ti cards, pretty good result.

Ahhh, at last someone confirms it in simple numbers. Seeing as my 7980XE renders twice as fast as the Threadripper in a lot of rendering software, that makes mine equivalent to having almost a 4-card rig, meaning my CPU renderer would work out at over a quarter of the price of building a quad-card rig!

Tobian
02-13-2019, 12:46 AM
Ivory tower just passing through...

But you know, don't upset him or he'll make a whining video 3 times as long as any of his tutorials. Oh, too late! I guess he must have rendered that one in an un-optimised Lightwave scene: try making your bitching videos in Blender next time!

safarifx
02-13-2019, 01:07 AM
Ah but these horrible and racist click bait images and personal comments from safarifx are not moderated. They come from a different corner, I see.


Behind the scenes: this is an example scene for lipsync from the LW 8 content.
(The scene is from 2007.)

I loaded the scene in LW 2019, converted it in 20 minutes to LWO3, and converted it to Octane PBR (including surfaces, lighting, posing and rendering).

Do we want to try, for example, Blender with a 12-year-old scene? Would that not make another great CLICK BAIT video full of nonsense?

Your answer shows me that you do not know the scene or the character (who is, by the way, an aborigine).

Of course that's stupid, because you have made Brent into an aborigine.

He is not.

Really not.

But if a person makes another person into an aborigine, even though he is not one, what is he?

Exactly!

A racist!

So you should apologize to Brent.


Thank you.

OlaHaldor
02-13-2019, 01:34 AM
For me, and a few others I know, GPU rendering is the only way to go if we want to keep our freelance gigs. Customers require faster turnarounds. It's a lot cheaper for me to upgrade a single GPU than it is to build a farm with X number of nodes that could compete in speed. My wife would not approve of that in our tiny 54 sq m flat. :devil: Perhaps adding a second GPU down the road wouldn't require more space around my workstation; it would just take up a slot inside and more or less double the render speed.


At the studio where I work it's a different story though. It's all about Arnold. Occasionally Mantra for specific things such as smoke sims. And it's all about rendering on the farm or the cloud. All CPU.

To be fair, we tested Redshift on a smaller-than-average scene for a movie a couple of years ago. We're talking thousands of 4K textures to populate hundreds or thousands of UDIMs for characters, sets, displaced landscape, water etc. Out-of-core had just been announced as a feature and... well... it rendered the frames relatively fast, but loading and flushing the cache took way more time than Arnold. It could have helped to split the scene into fg, mg, bg, or perhaps even more passes, such as separating out the characters too.

The conclusion, though, was that Arnold handles this scale best. And I'm sure the LW native renderer, if we were to try something like this, would have been better in that situation.


In my mind, GPU is best for small-scale production; CPU is better for large scale. :)

Skywatcher_NT
02-13-2019, 01:40 AM
I understand your frustration Safarifx ;)

https://www.youtube.com/watch?v=DZ3a41fyi5o
https://www.youtube.com/watch?v=8kpmBfwwfZY
etc...

safarifx
02-13-2019, 01:54 AM
I understand your frustration Safarifx ;)

https://www.youtube.com/watch?v=DZ3a41fyi5o
https://www.youtube.com/watch?v=8kpmBfwwfZY
etc...


I have no time to watch videos.
The new features of LightWave 2019 are much more interesting. :)

hrgiger
02-13-2019, 01:54 AM
Did the original character from the LW 8 content have gold teeth, Rene? Or was that just a personal touch in how you respond to a black LightWave user? I think you should leave the depiction of indigenous peoples to those better qualified to speak on their behalf. The fact that it was once part of the LW 8 content is irrelevant. Classy as always, Rene.

TheLexx
02-13-2019, 02:44 AM
Deleted.

Tobian
02-13-2019, 03:33 AM
How about you give the moderator time to wake up? He's a human, not a f***ing Cylon; he's not awake 24/7.

TheLexx
02-13-2019, 03:44 AM
Deleted #164 fair point. :)

Tobian
02-13-2019, 03:49 AM
How about you stop telling the mod how to do his job? How about we just delete this whole useless thread?

TheLexx
02-13-2019, 03:57 AM
Deleted with apologies to Tobian.

cresshead
02-13-2019, 04:14 AM
How about being nice to each other? It's not a body-count-based game where you try to rack up corpses with nasty replies.
Post your render time results...

TheLexx
02-13-2019, 04:18 AM
You're right. :)

erikals
02-13-2019, 05:18 AM
The head of NewTek LightWave said they would add GPU support for rendering.
That was 10-15 years ago.

-------

LW2019 is a nice update

-------

GPU support is harder to implement, which is why it wasn't available in LW2018.
The main focus of LW2018 was the major mesh engine update; there was little time for other stuff.

-------

Sure, I hope to see GPU support in the near future. Basically everyone is jumping on that ship, and for good reason.

Tobian
02-13-2019, 06:08 AM
head of NewTek LightWave said they would add GPU support for rendering.
10-15 years ago.
-snip-



And this is why LW is getting ever more tight-lipped: because if the janitor's cat's neighbor says something about there maybe, possibly, being a feature... people copy and paste it into every thread and hold it over their heads like the sword of Damocles, as a feature which "they promised"...

SBowie
02-13-2019, 06:15 AM
Ah but these horrible and racist click bait images and personal comments from safarifx are not moderated. They come from a different corner, I see.

I haven't looked at the videos, or really even contemplated the images in the posts. Frankly, I'm not even sure what those posts are meant to be getting at - either too subtle for me or outside my personal frame of reference. If you think they are objectionable, instead of jumping to conclusions about moderation, do the right thing - report them, and your complaint will be given serious consideration.

erikals
02-13-2019, 06:24 AM
I'm simply saying GPU support can't possibly be that hard to add, if NT wanted it some time ago.

But it is a wish, so I'll end it with that. Over and out.

--------

Brent, don't get upset by critique. Relevant or not, we all get that from time to time.

SBowie
02-13-2019, 06:35 AM
OK, first, yes - it's true. I sleep at night (Texas time), so you can - temporarily - get away with a fair bit of tomfoolery overnight. But many who have posted here are correct. This thread was started under a false premise, although it didn't need to be. Nevertheless, setting that aside, there's nothing much untoward about the topic, which (as others have noted) is one that turns up periodically, unaccompanied by anything more than the usual array of diverse and occasionally informative views. Still, some on this or that side of the issue have chosen to express themselves about others in ways that are not welcome in this forum. Peacemakers have tried to make peace; ranters have ranted; those who don't take any $%^& from anyone have yet again failed to realize that, being mere mortals, they don't know everything and thus should consider differing views respectfully; and the mature majority, I expect, have just sighed wearily.

I'm tired of this. It's over. I'm going back to my coffee. If someone wants to start another clean cpu-versus-gpu thread, feel free. If someone feels some aspect of that thread is out of line, report it rather than rant. Such reports are considered and appropriate moderation is applied if warranted, at the discretion of moderators.