
Thread: Are there any benefits to using a workstation gfx card in LightWave 2018/19

  1. #1
    Registered User
    Join Date
    Feb 2019
    Location
    Lancashire, England
    Posts
    24

Are there any benefits to using a workstation gfx card in LightWave 2018/19

I just wanted to know if there are any benefits to using a workstation gfx card in LightWave 2019 as opposed to a gaming gfx card, with regard to both Modeler and Layout. At the moment I have two gfx cards: an Nvidia Titan X 12GB and an AMD WX 5100 8GB. If there are big gains to using the WX 5100 in either Modeler OR Layout, then I would be happy to put the WX 5100 in instead of the Titan X (Maxwell, not Pascal).

I have been trying to find benchmarks on the internet to figure out which is the best card, and am wondering whether the WX 5100, even though it isn't as powerful as a Titan X, might be much better optimised for OpenGL in Layout and Modeler. Any feedback welcome.

  2. #2
    ex-LightWave documentation BeeVee's Avatar
    Join Date
    Feb 2003
    Location
    Pessac
    Posts
    5,256
In terms of OpenGL performance, there really isn't much difference. Where you'll benefit is if you use a GPU render engine. For the time being, there is only Octane, which requires an NVIDIA GPU and that counts out the AMD card.
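Since Octane only runs on NVIDIA hardware, a quick way to see which NVIDIA GPUs a machine exposes is `nvidia-smi -L`. A small sketch that parses its output (the helper names here are mine, not part of any LightWave or Octane API):

```python
import re
import shutil
import subprocess

def parse_nvidia_smi_list(output):
    """Parse `nvidia-smi -L` output into a list of GPU names.

    Each line looks like: 'GPU 0: GeForce GTX TITAN X (UUID: GPU-...)'
    """
    gpus = []
    for line in output.splitlines():
        m = re.match(r"GPU \d+: (.+?) \(UUID:", line)
        if m:
            gpus.append(m.group(1))
    return gpus

def list_nvidia_gpus():
    """Return NVIDIA GPU names, or an empty list if nvidia-smi is absent."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver/tools installed, so no Octane either
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return parse_nvidia_smi_list(out.stdout)

# Demo on canned output, so it works even on a machine without an NVIDIA card:
sample = ("GPU 0: GeForce GTX TITAN X (UUID: GPU-1234)\n"
          "GPU 1: Quadro K620 (UUID: GPU-5678)")
print(parse_nvidia_smi_list(sample))  # ['GeForce GTX TITAN X', 'Quadro K620']
```

If the list comes back empty, the machine has no usable NVIDIA GPU and Octane is off the table regardless of what other cards are installed.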

    B
    Ben Vost
    LightWave 3D Docs wiki
    AMD Threadripper 1950X, Windows 10 Pro 64-bit, 32GB RAM, nVidia GeForce GTX 1050Ti (4GB and 768 CUDA cores) and GTX 1080 (8GB and 2560 CUDA cores) driver version 456.71, 2x4K 23.5" monitors
    Dell Server, Windows 10 Pro, Intel Xeon E3-1220 @3.10 GHz, 8 GB RAM, Quadro K620
    Laptop with Intel i7, nVidia Quadro 2000Mw/ 2GB (377.83 and 192 CUDA cores), Windows 10 Professional 64-bit, 8GB RAM
    Mac Mini 2.26 GHz Core 2 Duo, 4 GB RAM, 10.10.3

  3. #3
    Registered User
    Join Date
    Aug 2017
    Location
    Slo
    Posts
    151
LightWave 2018 and later uses more and more Nvidia-only features; the latest addition is GPU de-noising. You will be disappointed with AMD in LW, as I was. In LW 2015 and earlier you could benefit greatly from a Quadro in OpenGL, but now with 2018 a consumer GPU is as fast in OpenGL as a Quadro.

  4. #4
I have never seen any noticeable benefit from "Quadro" graphics cards in Layout or Modeler displays.
    Eddy Marillat - PIXYM
    WS : MB ASUS X299-Pro/SE - i9 7980XE 2,6ghz 18C/36T - 32GB Ram- 2 GTX 1080 TI - Win 10 64
    GPU Net Rig : MB Biostar - Celeron 2,8 ghz 2C/2T - 8GB Ram - 2 RTX 2080 TI - Win 7 64

  5. #5
    Registered User
    Join Date
    May 2012
    Location
    Virginia
    Posts
    580
If you buy the newest Quadro cards and use a GPU renderer like Octane, there is a huge benefit. Of course, it costs what you might expect to pay for a low-end used car.

BUT if you have the bucks, the performance pretty much speaks for itself, as you can see in the performance chart here: Quadro RTX 8000 specs

You could also consider an astronomically priced Quadro card like the GV100, which uses the Volta architecture.

The RTX 8000 and GP100 are about the same price, although they use different architectures (the RTX 8000 is Turing and the GP100 is Pascal).

If you are really rolling in the dough, though, all three cards (RTX 8000, GV100, and GP100) support NVLink, which allows them to share not only their GPU power but also their VRAM.

Check that here: NVIDIA NVLINK

    Two of those beasts connected with two NVLINK bridges would be monstrously powerful.

    And, of course, pretty expensive.

  6. #6
For the price of this RTX 8000 Quadro card, you can buy ten RTX 2080 Tis that will be almost 8 times faster at GPU rendering than one RTX 8000, I think…
    Eddy Marillat - PIXYM

  7. #7
    Registered User
    Join Date
    May 2012
    Location
    Virginia
    Posts
    580
    Quote Originally Posted by pixym View Post
For the price of this RTX 8000 Quadro card, you can buy ten RTX 2080 Tis that will be almost 8 times faster at GPU rendering than one RTX 8000, I think…
Not exactly; the average price of an RTX 2080 Ti is $1,200 and the average price of an RTX 8000 is $5,600, so you could buy four RTX 2080 Ti cards for the price of one RTX 8000.

Also, the RTX 2080 Ti would only give you 11GB of GDDR6 maximum for rendering, regardless of how many cards you had.

The RTX 8000 provides 48GB of GDDR6 on a single card. If you used NVLink between two RTX 8000 cards, it would essentially give you 96GB* of GDDR6 for rendering.
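The price and memory arithmetic here can be sketched in a few lines (prices are the thread's estimates, not official figures):

```python
# Assumed street prices from this thread, in USD:
PRICE_2080TI = 1200    # RTX 2080 Ti, 11 GB GDDR6
PRICE_RTX8000 = 5600   # Quadro RTX 8000, 48 GB GDDR6

# How many 2080 Tis fit in one RTX 8000 budget?
cards_per_8000 = PRICE_RTX8000 // PRICE_2080TI
print(cards_per_8000)  # 4

# Without NVLink memory pooling, the scene must fit on each card,
# so effective VRAM stays at one card's worth no matter how many you add.
vram_2080ti_rig_gb = 11

# Two RTX 8000s with NVLink pooling their memory:
vram_dual_8000_nvlink_gb = 48 + 48
print(vram_2080ti_rig_gb, vram_dual_8000_nvlink_gb)  # 11 96
```

The point being: the 2080 Ti rig wins on raw compute per dollar, while the NVLink-pooled RTX 8000 pair wins on the maximum scene size you can fit in VRAM.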

    I'm not sure what the trade-off would be between dual RTX 8000 cards in full NVLINK and 8 RTX 2080 cards. That would be a very interesting test.

    *corrected because ma brain didn't want to add this morning
    Last edited by RPSchmidt; 02-22-2019 at 08:05 AM.

  8. #8
Ah! I thought the RTX 8000's average cost was $9,900.
    Eddy Marillat - PIXYM

  9. #9
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,285
Not natively, but some third-party physics plug-ins for LightWave (like Turbulence, Deep Rising FX, and HurleyWorks' UP) get a significant performance boost from a compatible graphics card.
