
Thread: GPU in Lightwave question

  1. #31
    Super Member OlaHaldor's Avatar
    Join Date
    May 2008
    Location
    Norway
    Posts
    998
    You make it sound like the professional line of GPUs magically makes a big difference in performance in Modeler. Is that in fact so? I can see why Quadro is a big thing with CAD and other solids modeling tools, but I have yet to see Quadro enhance performance in Modeler (or other polygon modeling apps like Maya and so on) so significantly that it is worth the premium price.

    We have had several Quadros at the studio where I work, from low end to the most powerful ones. For a while the entire studio ran on the cheapest Quadros, and then switched to GTX 1070s for reasons beyond Maya alone, such as the Substance tools. The cost savings were substantial compared to going for equally powerful Quadros.

    I rushed through the article you linked, Free4Ever, and there's nothing in there that says "AMD is better than Nvidia". It's more like "use these settings", which anyone remotely interested in accurate color reproduction should know about anyway.
    Last edited by OlaHaldor; 08-02-2019 at 04:06 PM.
    3D Generalist
    Threadripper 2920x, 128GB, RTX 2080 Ti, Windows 10

  2. #32
    Registered User
    Join Date
    May 2003
    Location
    Europe
    Posts
    148
    Quadro has a few unlocked functions that speed up CAD modelers a lot; worth it for large companies (and it comes with more stable drivers), not worth it for indies, since it costs 2-3x as much. No idea whether LW specifically uses those functions. Someone with a Quadro needs to test a benchmark scene and compare with a GeForce.

    The point of the article is that the AMD driver makes a mistake that is easy to understand and correct, while the Nvidia driver mistake is hard to detect, so most Nvidia cards run with the wrong settings on TVs. Some driver updates reset the settings too.
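    For anyone wondering what "wrong settings on TVs" actually means in practice: it is the full-range (0-255) vs limited/TV-range (16-235) RGB output issue. Below is a minimal Python sketch of that standard mapping, just to illustrate the effect; the function names are made up for this example and are not from any driver API.

        # Full-range vs limited-range RGB, the setting the linked article is about.
        # If the driver sends limited range (16-235) to a display expecting full
        # range (0-255), blacks turn grey and whites are dulled; the reverse
        # clips shadows and highlights. Helper names are invented for this sketch.

        def full_to_limited(v: int) -> int:
            """Map a full-range 8-bit value (0-255) to limited/TV range (16-235)."""
            return round(16 + v * (235 - 16) / 255)

        def limited_to_full(v: int) -> int:
            """Map a limited/TV-range value (16-235) back to full range (0-255)."""
            return min(255, max(0, round((v - 16) * 255 / (235 - 16))))

        if __name__ == "__main__":
            for v in (0, 128, 255):
                print(f"full {v:3d} -> limited {full_to_limited(v):3d} -> back {limited_to_full(full_to_limited(v)):3d}")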

  3. #33
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    120
    Quote Originally Posted by Hail View Post
    Sounds like you need a new TV or a pair of glasses at the very least.
    ...wouldn't the fact that I can immediately spot all the visual differences between the card manufacturers bear witness that I do NOT need glasses?

    The fact that the two produce a different image is well documented, and what might be considered better, I reckon, might also be based on what one is used to. I'm not gonna bother getting into that discussion.
    - Ignorance is bliss...

  4. #34
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    120
    Quote Originally Posted by OlaHaldor View Post
    You make it sound like the professional line of GPUs magically makes a big difference in performance in Modeler. Is that in fact so? I can see why Quadro is a big thing with CAD and other solids modeling tools, but I have yet to see Quadro enhance performance in Modeler (or other polygon modeling apps like Maya and so on) so significantly that it is worth the premium price.

    We have had several Quadros at the studio where I work, from low end to the most powerful ones. For a while the entire studio ran on the cheapest Quadros, and then switched to GTX 1070s for reasons beyond Maya alone, such as the Substance tools. The cost savings were substantial compared to going for equally powerful Quadros.

    I rushed through the article you linked, Free4Ever, and there's nothing in there that says "AMD is better than Nvidia". It's more like "use these settings", which anyone remotely interested in accurate color reproduction should know about anyway.
    As I mentioned, I know all about the various settings issues and how to calibrate both the settings and the screens themselves to get the best result. And when all is said and done, 2D and video quality is, in my view, just better with AMD cards.

    But this brings me back to my original question, which I posed in my CPU thread (and boy am I waiting for the 3950X):

    Can anyone confirm that workstation cards will give better LW Modeler performance?

    I also do a bit of gaming, yes, but at far lower resolutions than the standard that cards are built for these days, which means that if I get a juicy workstation card I can still run games at decent performance, knowing how much muscle the latest generation of cards has, both workstation and gaming. But I am not going to get a workstation card if it does not provide a significant performance upgrade in LW Modeler over a gaming card.

    Hell, I might keep the RTX 2080 anyway; it is so massively overpowered that when I lock it to 100 FPS (yes people, I game on a 100 Hz CRT screen; superior colours and contrast, no response time issues) the cooling fans barely bother to spin up at all.

    So this is the feedback I am looking for: working on a 20M-polygon model in Modeler, with enough textures that it takes 20 GB of system memory to open, will a workstation card improve response times significantly?

    Thanks in advance for any relevant feedback.

    (Long post, though I've had a couple of drinks)
    - Ignorance is bliss...

  5. #35
    Member
    Join Date
    Oct 2003
    Location
    Orlando, FL
    Posts
    4,146
    Again, my info is outdated, but I will volunteer what I understand of workstation cards.

    They will enhance your viewing of a model and your ability to tumble or spin a large poly model. They will not enhance any modeler functions/computations because that math is being done on your CPU.

    I have been told repeatedly in these forums that Modeler is a single-threaded application.
    (I'm not sure a new multi-core CPU will help, but an overclocked one might.)
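    If you want to check that for yourself, here is a rough Python sketch using psutil (pip install psutil) that samples per-core load while you work in Modeler. The process name below is just a guess for this example, so adjust it for your install. If one core sits near 100% while the rest idle, the operation is effectively single-threaded and clock speed matters more than core count.

        # Sample per-core CPU load while interacting with an application to see
        # whether it is effectively single-threaded. Requires: pip install psutil
        import psutil

        PROCESS_NAME = "Modeler.exe"  # assumed name, adjust for your installation

        def find_process(name):
            for p in psutil.process_iter(["name"]):
                if p.info["name"] and p.info["name"].lower() == name.lower():
                    return p
            return None

        if __name__ == "__main__":
            proc = find_process(PROCESS_NAME)
            if proc is None:
                print(f"{PROCESS_NAME} not found; showing system-wide load only.")
            for _ in range(10):  # ten one-second samples; interact with the app meanwhile
                per_core = psutil.cpu_percent(interval=1.0, percpu=True)
                proc_load = f"{proc.cpu_percent():.1f}%" if proc else "n/a"
                print("cores:", " ".join(f"{c:5.1f}" for c in per_core), "| process:", proc_load)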

    Hope this helps.

  6. #36
    Registered User
    Join Date
    May 2003
    Location
    Europe
    Posts
    148
    If I hadn't seen it with my own eyes...

    Turns out LW 9.6 was part of the SPECviewperf 11 pro benchmark suite, which uses real-world apps for benching. Since the data is for the old LW 9.6 and the benchmarks ran on GPUs from two generations/six years ago, it may no longer be valid in 2019, but back in the day a Quadro could spin a 3D model 3x faster than a GeForce (due to driver "optimization").
    20 fps vs 60 fps on 2M poly models.
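    Put in frame-time terms (same numbers as above, nothing new measured here), the gap looks like this:

        # Frame-time view of the 2M-poly benchmark numbers quoted above.
        geforce_fps, quadro_fps = 20.0, 60.0
        print(f"GeForce: {1000 / geforce_fps:.1f} ms per frame")  # 50.0 ms
        print(f"Quadro:  {1000 / quadro_fps:.1f} ms per frame")   # 16.7 ms
        print(f"Speedup: {quadro_fps / geforce_fps:.1f}x")        # 3.0x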

    https://www.tomshardware.com/reviews...d,3493-12.html

    https://www.spec.org/gwpg/gpc.static/lightwave01.html

    /Edit, one more table, same results:
    https://www.anandtech.com/show/9214/...w-design-101/6
    Last edited by Free4Ever; 08-03-2019 at 03:04 PM.

  7. #37
    Registered User
    Join Date
    Jan 2004
    Location
    Stavanger, Norway
    Posts
    120
    That's interesting - thanks
    - Ignorance is bliss...
