
Thread: Best video card(s) for 3D applications (Lightwave)

  1. #1
    Registered User
    Join Date
    Feb 2006
    Location
    USA
    Posts
    56

    Best video card(s) for 3D applications (Lightwave)

    Hi all, I'm hoping to get some feedback on what type of video card to get for my computer. I have an i7 920, 6 GB of RAM, and an 850 W power supply on an Asus Rampage 2 E mobo. I want to invest in an ATI HD 5970 but am not quite sure whether all that GPU power would be utilized in 3D apps, specifically the latest version of Lightwave.

  2. #2
    geothefaust (Javis Jones: Night Crew)
    Join Date
    Aug 2005
    Location
    Oregon Territory
    Posts
    4,214
    LW CORE does indeed utilize much of the GPU when it calls for it. So, the better the video card, the better your performance in those key areas that call for it.


    As for LW 9.6, I don't notice much of a performance difference between my old GeForce 7600 GT and my newer GeForce GTX 275. There is a slight improvement in Layout, but in Modeler it is very minor.

  3. #3
    biliousfrog (borkalork BORKALORK!)
    Join Date
    Dec 2003
    Location
    Lowestoft, UK
    Posts
    2,488
    Currently the GPU is one of the least important parts of a 3D workstation unless realtime preview of extremely high-polygon scenes is important. My laptop's integrated Intel graphics copes with ZBrush and high-poly LW scenes almost as well as my workstation's 9800 GT and Quadro FX 1500. For those applications that can utilize the graphics card, the speed of the GPU is the most important factor, not memory capacity. I can guarantee that you will not see any performance increase from 256 MB to 2048 MB of memory in any CG application.

    Personally, I'd stick with Nvidia cards, as ATI's drivers are renowned for being flaky at best. Don't bother with SLI or dual-GPU cards, as only games can utilize both processors, and don't bother with Quadros unless you're using Maya and/or have money to burn...they're really not worth the 300% mark-up. Something like a GTX 260/275/285 will be great.

  4. #4
    biliousfrog (borkalork BORKALORK!)
    Join Date
    Dec 2003
    Location
    Lowestoft, UK
    Posts
    2,488
    If you're pushing the texture memory to its limit, it will offload to system memory, which is extremely fast via PCIe. You'll only notice a significant difference if you're expecting to run a lot of large textures through a scene at 30 fps whilst maxing out the system memory...which is unlikely with a modern system. It is also rare that you'd require hundreds of MBs of textures in an OpenGL display...even when loading normal and displacement maps.

    Of course, most cards have at least 512-1024 MB of dedicated RAM now, so it's moot, but memory capacity is certainly the least important factor when choosing a graphics card outside of gaming, where high FPS is important.

  5. #5
    AbnRanger (Registered User)
    Join Date
    Aug 2005
    Location
    Riverside CA
    Posts
    1,827
    I had been an ATI guy for a long time, until this past year. I bought a quad-core Dell desktop and immediately went out and got an ATI card to replace the stock one...looking for a "best bang for your buck" kind of card (a 4850 at the time). It was fine for most things, but I noticed I couldn't get Combustion (what I composite with) to work fully, especially the built-in particle system, which I like to use fairly often. I tried updating drivers and even sent a request to support (never got anything back). I had another issue or two and had to do a bunch of digging in the registry to rectify it. The average user should never have to do all that.
    I also wished 3DC had support for ATI Stream like it does for Nvidia's CUDA, but I knew that wasn't going to happen anytime soon. So, after a few months, I decided to try an Nvidia card. Bam...worked like a charm. Combustion works perfectly, and I haven't had any issues with Nvidia cards.

    I also recently built a new system from scratch...with overclocking in mind (from a 2.66 GHz quad-core to a 3.2 GHz OC), and between that and getting a GTX 275, I've noticed a huge difference in both 3ds Max and 3DC.

    Max 2010 has some new viewport goodies, with MR shaders (Arch & Design) giving a very close approximation of the rendered result, plus ambient occlusion, exposure, soft shadows, etc. On the GTS 250 (essentially an updated 9800 GTX), those features would bog down a lot...so they weren't very usable for me. That's no longer a problem with the 275. It has double the number of shader processors of the 250, so that may have a lot to do with it. I was pleasantly surprised at the difference...so I'd at least start with the 275. Even though the ATI 5000 series is ready for DirectX 11, that doesn't matter for CG programs at the current time...most are still using either OpenGL or DX9.

    I think Nvidia is due to counter ATI's 5000 series any time now, so you may want to hold off just a bit longer...especially if there is a substantial jump in performance.

    Personally, I've come to discover that ATI is a great choice for games, but Nvidia is better for CG applications...they seem to have more focus on this industry (evidenced by the Mental Images acquisition a while back)...unlike ATI.
    Last edited by AbnRanger; 12-21-2009 at 05:31 AM.

  6. #6
    Registered User
    Join Date
    Mar 2009
    Location
    Europe
    Posts
    985
    All I can say is that I agree with geo; I don't think the GPU makes that much difference in Modeler...at least I've seen practically no improvement (or very little) between my Quadro and the old *** ATIs at uni :S

  7. #7
    IMI (Pancakes!)
    Join Date
    Apr 2007
    Location
    Right Here
    Posts
    7,124
    Speaking of video memory, has anyone yet made an app that shows how much video memory your video card is actually using?
    Every now and then I search for such a thing and always come up empty.
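
    For what it's worth, on Nvidia cards the GL_NVX_gpu_memory_info OpenGL extension exposes exactly these counters (ATI exposes similar data through GL_ATI_meminfo). A minimal sketch in C, assuming an Nvidia driver that exports the extension and using GLUT purely to obtain a GL context:

        #include <stdio.h>
        #include <GL/glut.h>

        /* Enum values from the GL_NVX_gpu_memory_info spec; results are in KB. */
        #define GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX         0x9047
        #define GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049

        int main(int argc, char **argv)
        {
            GLint dedicated_kb = 0, available_kb = 0;

            glutInit(&argc, argv);
            glutCreateWindow("vram query");  /* only needed for a current GL context */

            /* A real app should first check that "GL_NVX_gpu_memory_info" appears
               in glGetString(GL_EXTENSIONS); on other drivers these queries set a
               GL error and the values stay 0. */
            glGetIntegerv(GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicated_kb);
            glGetIntegerv(GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &available_kb);

            printf("Dedicated VRAM: %d MB\n", dedicated_kb / 1024);
            printf("Free VRAM:      %d MB\n", available_kb / 1024);
            printf("In use:         %d MB\n", (dedicated_kb - available_kb) / 1024);
            return 0;
        }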

  8. #8
    Sekhar (Engineer/Entrepreneur)
    Join Date
    Sep 2005
    Location
    Pasadena, CA
    Posts
    2,124
    I recently switched from a low-end Quadro to an Nvidia GTX 275 and have been using it for a few weeks now. Like Neverko said, I saw big speedups in some apps and next to nothing in others. CG examples: a big improvement in After Effects previews (which turned out to be invaluable in a recent project), in 3DC, and to some extent in CORE (tried only a little on Q4R3 though). Obviously an incredible improvement in game performance, though I guess you don't care about that. Everything is generally snappier.

    Make sure of a couple of things, though, before you get a card. Space: these cards (like the 275) are really big, and some cases may not have room for them. Power supply: the newer cards can be real power hogs when you stress them, though at normal usage the requirements are minimal - I didn't upgrade my power supply for the 275.

  9. #9
    JonW (Super Member)
    Join Date
    Jul 2007
    Location
    Sydney Australia
    Posts
    2,235
    An important issue that Sekhar brought up.

    Look at both idle & load power consumption. You may be able to settle for a card that is a notch or two down from the top & save a lot of juice, especially if you are using the box for rendering & it's doing stacks of overnight rendering.
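
    Rough numbers as an illustration: if the lesser card draws 100 W less under load and the box renders 12 hours a night, that's about 0.1 kW x 12 h x 365 ≈ 440 kWh saved per year.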


    It would be great if someone could give some rough benchmarks on both Quadro & games cards in a few typical usage situations with LW.

    I think a lot would buy a Quadro if they knew it made a real difference, but no one wants to waste money, as these cards are expensive & the money could be better spent elsewhere.
    Procrastination, mankind's greatest labour saving device!

    W5580 x 2 24GB, Mac Mini, Spyder3Elite, Dulux 30gg 83/006 72/008 grey room,
    XeroxC2255, UPS EvolutionS 3kw+2xEXB

  10. #10
    Registered User
    Join Date
    Feb 2006
    Location
    USA
    Posts
    56
    Awesome pointers, thanks for the feedback all. I was leaning heavily towards the latest ATI HD 5970, but I think I'll look into the GTX 285 or GTX 295...oh wait, the 295 is dual-GPU, isn't it. I'll look to invest in the 285.

  11. #11
    Titus (Running at 29.97 fps)
    Join Date
    Feb 2003
    Location
    Mexico City
    Posts
    2,839
    I've been involved with another company on "realtime rendering" using MachStudio. This software is bundled with ATI's top-of-the-line card, and someone discovered it can run on any Nvidia card; it turns out the rendering is 2x-4x faster on Nvidia than on the ATI card.

  12. #12
    Registered User
    Join Date
    Feb 2006
    Location
    USA
    Posts
    56
    Quote Originally Posted by Titus
    I've been involved with another company on "realtime rendering" using MachStudio. This software is bundled with ATI's top-of-the-line card, and someone discovered it can run on any Nvidia card; it turns out the rendering is 2x-4x faster on Nvidia than on the ATI card.
    wow!

  13. #13
    biliousfrog (borkalork BORKALORK!)
    Join Date
    Dec 2003
    Location
    Lowestoft, UK
    Posts
    2,488
    Quote Originally Posted by JonW
    It would be great if someone could give some rough benchmarks on both Quadro & games cards in a few typical usage situations with LW.

    I think a lot would buy a Quadro if they knew it made a real difference, but no one wants to waste money, as these cards are expensive & the money could be better spent elsewhere.

    Search the tech section of CGTalk. A guy on there tested some of the high-end GeForce and Quadro cards with various 3D/CG apps, and the differences were negligible. Considering that the Quadro equivalents of the GTX 275 and 285 cost several times as much but the only difference is the driver...it seems a bit weird that they still sell any. Apparently Maya can be quite unpredictable with anything other than a 'workstation' card, but it seems to be the only app where there's a significant difference.

  14. #14
    Sekhar (Engineer/Entrepreneur)
    Join Date
    Sep 2005
    Location
    Pasadena, CA
    Posts
    2,124
    Quote Originally Posted by JonW
    It would be great if someone could give some rough benchmarks on both Quadro & games cards in a few typical usage situations with LW.

    I think a lot would buy a Quadro if they knew it made a real difference, but no one wants to waste money, as these cards are expensive & the money could be better spent elsewhere.
    I don't have any numbers with LW, but I did run Futuremark 3DMark06 (a pretty popular benchmark) on both cards to make an overall comparison.
    1. Quadro FX 550 - 842 3DMarks (not a typo!)
    2. GTX 275 - 11,705 3DMarks

    That's roughly a 14x speedup for me. The FX 550 is really low-end, but still - I never expected the gap to be this big.

    It's easy to look at regular programs that don't really take advantage of the card (OpenGL/CUDA/whatever) and assume there's no speedup, so I'd suggest checking out benchmarks relevant to your apps before making a decision.

  15. #15
    Andyjaggy (Registered User)
    Join Date
    Sep 2003
    Location
    Utah
    Posts
    4,327
    Quote Originally Posted by biliousfrog
    It is also rare that you'd require hundreds of MBs of textures in an OpenGL display...even when loading normal and displacement maps.
    Okay, that's just funny. Haha. I guess I'm pretty rare.

    Let's say I have 1 object with 4K maps for diffuse, specular, bump, and displacement. Then let's say I have 5 comparable objects in the scene...starting to add up, isn't it? And that's pretty reasonable stuff we're talking about.
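
    For scale, a back-of-envelope sketch of that scene in C, assuming uncompressed RGBA8 4K maps with full mip chains (bump and displacement maps are often single-channel, and textures are often compressed, so treat this as an upper bound):

        #include <stdio.h>

        int main(void)
        {
            const double bytes_per_map   = 4096.0 * 4096.0 * 4.0; /* one RGBA8 4K map: 64 MB */
            const double mip_overhead    = 4.0 / 3.0;             /* a full mip chain adds ~33% */
            const int    maps_per_object = 4;                     /* diffuse, specular, bump, displacement */
            const int    objects         = 6;                     /* the 1 object plus 5 comparable ones */

            double total_mb = bytes_per_map * mip_overhead * maps_per_object * objects
                            / (1024.0 * 1024.0);
            printf("Approximate texture memory: %.0f MB\n", total_mb); /* ~2048 MB */
            return 0;
        }

    So even allowing for compression, a scene like that can comfortably outgrow a 512 MB card.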
    Last edited by Andyjaggy; 12-21-2009 at 03:50 PM.
