Page 1 of 12
Results 1 to 15 of 174

Thread: Brent's Lightwave 2019 Render Challenge

  1. #1
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345

    Brent's Lightwave 2019 Render Challenge

    Show us how fast your CPU really is in Lightwave 2019 and post your video results in this brute force render comparison. The test scene is in the video description. Let the games begin!

    https://youtu.be/jx0q1CGP4vg

[Attachment: LightwaveRenderChallenge.jpg, 243.1 KB]

  2. #2
    Valiant NewTeKnight Matt's Avatar
    Join Date
    Feb 2003
    Location
    San Antonio, Texas, USA
    Posts
    13,055
    I wouldn't compare to a GPU renderer. Apples and oranges.

    That said, my home PC renders this scene in: 1min 57s
    UI / UX Designer @ NewTek
    __________________________________________________
    www.pixsim.co.uk : LightWave Video Tutorials & Tools


  3. #3
    Quote Originally Posted by Matt View Post
    I wouldn't compare to a GPU renderer. Apples and oranges.

    That said, my home PC renders this scene in: 1min 57s
    I agree. I've finally come to the conclusion that there isn't really any good way to compare GPU to CPU rendering; it's mostly pointless.
    Threadripper 2990WX, X399 MSI MEG Creation, 64GB 2400Mhz RAM, GTX 1070 Ti 8GB

    https://www.dynamicrenderings.com/

  4. #4
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by Matt View Post
    I wouldn't compare to a GPU renderer. Apples and oranges.

    That said, my home PC renders this scene in: 1min 57s
    Of course you can and thanks for posting your speed.

  5. #5
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by Nicolas Jordan View Post
    I agree. I've finally come to the conclusion that there isn't really any good way to compare GPU to CPU rendering; it's mostly pointless.
    I just did. Lightwave 2019 is in the same market as Maya, 3ds Max, C4D, and Blender, so all features and render capabilities can be compared and contrasted, which is why Autodesk bought Arnold: to keep up with Octane, Cycles, and now Eevee. Why don't you ask NewTek to make Lightwave 2019's VPR GPU-renderable? Because of Autodesk's purchase of Arnold, path-traced GPU rendering is quickly becoming the standard; where is Lightwave 2019 in all this?

  6. #6
    Valiant NewTeKnight Matt's Avatar
    Join Date
    Feb 2003
    Location
    San Antonio, Texas, USA
    Posts
    13,055
    Quote Originally Posted by brent3d View Post
    Of course you can and thanks for posting your speed.
    Not if you're comparing their speeds as a point.
    UI / UX Designer @ NewTek
    __________________________________________________
    www.pixsim.co.uk : LightWave Video Tutorials & Tools


  7. #7
    Valiant NewTeKnight Matt's Avatar
    Join Date
    Feb 2003
    Location
    San Antonio, Texas, USA
    Posts
    13,055
    Quote Originally Posted by brent3d View Post
    I just did. Lightwave 2019 is in the same market as Maya, 3ds Max, C4D, and Blender, so all features and render capabilities can be compared and contrasted, which is why Autodesk bought Arnold: to keep up with Octane, Cycles, and now Eevee. Why don't you ask NewTek to make Lightwave 2019's VPR GPU-renderable? Because of Autodesk's purchase of Arnold, path-traced GPU rendering is quickly becoming the standard; where is Lightwave 2019 in all this?
    So compare the CPU versions of their renderers to each other; otherwise it's a completely useless comparison. It's the equivalent of comparing how fast a bicycle is versus a Lamborghini and justifying it by saying "but they're both modes of transport".
    UI / UX Designer @ NewTek
    __________________________________________________
    www.pixsim.co.uk : LightWave Video Tutorials & Tools


  8. #8
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by Matt View Post
    Not if you're comparing their speeds as a point.
    Why wouldn't speed be compared? The choice of what hardware is utilized for rendering is a choice made by the owners/developers, and those choices and their results can be compared. Quality and speed are always factors in rendering ability, so how about finding ways to make VPR faster without sacrificing quality... hmm, maybe put it on a GPU instead of a CPU, just a thought.

  9. #9
    Registered User
    Join Date
    Mar 2016
    Location
    Oxford, UK
    Posts
    793
    I don't think there is any intention here to see how Octane "humiliates" LW's native CPU renderer. It is good to compare speeds because I would (in theory anyway) rather spend more to harness the LW renderer with a small farm than subscribe to Octane, and consequently take a certain time hit, but the test will give some sort of idea of what to expect. Matt, I also appreciate your time being posted; if the thread is allowed to roll, the results will be interesting. It is something I have always wondered about.


  10. #10
    Quote Originally Posted by TheLexx View Post
    It is good to compare speeds because I would (in theory anyway) rather spend more to harness LW renderer with a small farm than subscribe to Octane, and consequently take a certain time hit, but the test will give some sort of idea what to expect. Matt, I also appreciate your time being posted - if the thread is allowed to roll, the results will be interesting. It is something I have always wondered.

    LW 2019: 65 seconds on my Threadripper 1950X, overclocked on all cores to 4.0 GHz. But the GPU denoise step is not usable for animation, right? And the Octane denoise one is? That's a big deal, and without denoise it's a crazy-town mess. For grins, I'll set up that shot as a turntable and see exactly what the GPU denoise looks like in animation.

    Interestingly, I've sort of reluctantly come to the opposite conclusion, as I'm not overall very happy with the LW 2018 or 2019 render speeds compared to the various "cheats" I use in LW 2015 to get through heavy and long animation workloads, and I'm not sure that my current small renderfarm approach is as turnkey, energy-efficient, and fast as a multiple-GPU install in my current system would be. In the past I've been very reluctant to embrace GPU rendering due to the various issues and limitations of what you can do compared with LW native rendering. But gosh, LW is now SLOW... despite everything I've tried and following the rendering wisdom of RebelHill.

    This sort of reminds me of when we moved from the 5.x cycle to 6.0 and there was the huge slowdown due to floating-point rendering. At the time (and for a few years after), I HATED the 6.x series and would revert to 5.6 for the speed increase. It took a long time (and several hardware iterations) to appreciate the value of those higher-dynamic-range images...

    Regards,

  11. #11
    Super Member OlaHaldor's Avatar
    Join Date
    May 2008
    Location
    Norway
    Posts
    998
    I downloaded LW 2019 demo to try this one.
    i7 5960x overclocked to 4.1 GHz.
    It's got 8 cores/16 threads.

    Took 3min 9 seconds total.
    I have Octane, but only for Modo at the moment, so I can't test it since I can't export the scene from the demo. But I'm sure it would be *fast* on the 2080 Ti.
    3D Generalist
    Threadripper 2920x, 128GB, RTX 2080 Ti, Windows 10

  12. #12
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by Imageshoppe View Post
    LW 2019: 65 seconds on my Threadripper 1950X, overclocked on all cores to 4.0 GHz. But the GPU denoise step is not usable for animation, right? And the Octane denoise one is? That's a big deal, and without denoise it's a crazy-town mess. For grins, I'll set up that shot as a turntable and see exactly what the GPU denoise looks like in animation.

    Interestingly, I've sort of reluctantly come to the opposite conclusion, as I'm not overall very happy with the LW 2018 or 2019 render speeds compared to the various "cheats" I use in LW 2015 to get through heavy and long animation workloads, and I'm not sure that my current small renderfarm approach is as turnkey, energy-efficient, and fast as a multiple-GPU install in my current system would be. In the past I've been very reluctant to embrace GPU rendering due to the various issues and limitations of what you can do compared with LW native rendering. But gosh, LW is now SLOW... despite everything I've tried and following the rendering wisdom of RebelHill.

    This sort of reminds me of when we moved from the 5.x cycle to 6.0 and there was the huge slowdown due to floating-point rendering. At the time (and for a few years after), I HATED the 6.x series and would revert to 5.6 for the speed increase. It took a long time (and several hardware iterations) to appreciate the value of those higher-dynamic-range images...

    Regards,
    Brilliant post! It does remind me of the 5.x to 6.0 jump as well. Having used PBR a lot, I find it almost demands GPU power to render, and now that out-of-core rendering is making system memory usable by the GPU, GPU rendering is becoming more practical.

  13. #13
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by OlaHaldor View Post
    I downloaded LW 2019 demo to try this one.
    i7 5960x overclocked to 4.1 GHz.
    It's got 8 cores/16 threads.

    Took 3min 9 seconds total.
    I have Octane, but only for Modo at the moment, so I can't test it since I can't export the scene from the demo. But I'm sure it would be *fast* on the 2080 Ti.
    On a 2080 Ti!? I should've said there was a GPU speed limit, but yes, you would drop the scene in around, hmm, 5 sec or less... lol, and that's the point. Thanks for posting your CPU findings, excellent!

  14. #14
    47 seconds - I'm good with that.
    i7-8086K @ 4GHz 6 Core
    Tim Parsons
    Technical Designer
    Sauder Woodworking Co.

    http://www.sauder.com

  15. #15
    ..since 4.0 and beyond brent3d's Avatar
    Join Date
    Mar 2005
    Location
    Maryland
    Posts
    345
    Quote Originally Posted by Tim Parsons View Post
    47 seconds - I'm good with that.
    i7-8086K @ 4GHz 6 Core
    Second-fastest time I know of. Still the default scene settings?
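
    For anyone following along, the CPU times posted so far in the thread can be lined up with a quick script. A sketch only: the hardware labels are my own shorthand for the rigs described in each post, and the seconds are converted from the times as posted.

    ```python
    # Render times posted so far for the LW 2019 challenge scene,
    # converted to seconds and ranked against the fastest result.
    times = {
        "i7-8086K @ 4 GHz (6c)": 47,             # Tim Parsons
        "Threadripper 1950X @ 4.0 GHz": 65,      # Imageshoppe
        "Matt's home PC": 117,                   # 1 min 57 s
        "i7-5960X @ 4.1 GHz (8c/16t)": 189,      # OlaHaldor, 3 min 9 s
    }

    fastest = min(times.values())
    for rig, secs in sorted(times.items(), key=lambda kv: kv[1]):
        # Ratio of 1.00 marks the current leader.
        print(f"{rig:30s} {secs:4d} s  ({secs / fastest:.2f}x the fastest)")
    ```

    Same caveat as the rest of the thread: these are all CPU numbers, so they say nothing about how a GPU renderer like Octane would place.
    
    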

