PDA

View Full Version : Best video card(s) for 3D applications (Lightwave)



Manveer Dhillon
12-20-2009, 10:03 PM
Hi all, I'm hoping to get some feedback on what type of video card to get for my computer. I have an i7 920, 6GB RAM, and an 850W power supply on an Asus Rampage 2 E mobo. I want to invest in an ATI HD 5970 but am not quite sure whether all that GPU power would actually be utilized in 3D apps, specifically the latest version of Lightwave.

geothefaust
12-20-2009, 10:39 PM
LW CORE does indeed utilize the GPU when it calls for it, so the better the video card, the better your performance in those key areas.


As for LW 9.6, I don't notice much of a performance difference between my old GeForce 7600 GT and my newer GeForce GTX 275. There is a slight difference in Layout, but in Modeler it is minute.

biliousfrog
12-21-2009, 02:36 AM
Currently the GPU is one of the least important parts of a 3D workstation unless realtime preview of extremely high polygon scenes is important. My laptop's integrated Intel graphics copes with ZBrush and high-poly LW scenes almost as well as my workstation's 9800GT and Quadro FX1500. For those applications that can utilize the graphics card, the speed of the GPU is the most important factor, not memory capacity. I can guarantee that you will not see any performance increase going from 256MB to 2048MB of memory in any CG application.

Personally, I'd stick with Nvidia cards as ATI's drivers are renowned for being flaky at best. Don't bother with SLI or dual-GPU cards, as only games can utilize both processors, and don't bother with Quadros unless you're using Maya and/or have money to burn...they're really not worth the 300% mark-up. Something like a GTX 260/275/285 will be great.

biliousfrog
12-21-2009, 04:45 AM
If you're pushing the texture memory to its limit it will offload to system memory, which is extremely fast via PCIe. You'll only notice a significant difference if you're expecting to run a lot of large textures through a scene at 30fps whilst maxing out the system memory...which is unlikely with a modern system. It is also rare that you'd require hundreds of MB of textures in an OpenGL display...even if loading normal and displacement maps.

Of course, most cards have at least 512-1024MB of dedicated RAM now so it's moot, but memory capacity is certainly the least important factor in choosing a graphics card outside of gaming, where high FPS is important.

AbnRanger
12-21-2009, 05:27 AM
I had been an ATI guy for a long time, until this past year. I had bought a Quad Core Dell desktop and immediately went out and got an ATI card to replace the stock one...looking for a "best bang for your buck" kind of card (a 4850 at the time). It was fine for most things, but I couldn't get Combustion (what I composite with) to work fully - especially the built-in particle system, which I like to use fairly often. I tried updating drivers and even sent in a request to support (never got anything back). I had another issue or two, and had to do a bunch of digging in the registry to rectify it. The average user should never have to do all that.
I also wished 3DC had support for ATI streaming like they do for Nvidia's CUDA, but I knew that wasn't going to happen anytime soon. So, after a few months, I decided to try an NVidia card. Bam...worked like a charm. Combustion works perfectly and I haven't had any issues with NVidia cards.

I also recently built a new system, from scratch...with overclocking in mind (from a 2.66 Quadcore to 3.2 OC), and between that and getting a GTX 275, I've noticed a huge difference in both 3ds Max and 3DC.

Max 2010 has some new viewport goodies with MR shaders (Arch & Design) giving a very close approximation of the rendered result, plus Ambient Occlusion, Exposure, Soft Shadows, etc. On the GTS 250 (essentially an updated 9800 GTX), it would bog down a lot...so those features weren't very usable for me. That's no longer a problem with the 275. It has double the number of shader processors of the 250, so that may have a lot to do with it. I was pleasantly surprised at the difference...so I'd at least start with the 275. And even though the ATI 5000 series is ready for DirectX 11, that doesn't matter for CG programs at the current time...most are still using either OpenGL or DX9.

I think NVidia is due to counter ATI's 5000 series any time now, so you may want to hold off just a bit longer...especially if there is a substantial jump in performance.

Personally, I've come to discover that ATI is a great choice for games, but NVidia is better for CG applications...they seem to have more focus in this industry (evidenced by the Mental Images acquisition a while back)...unlike ATI.

sampei
12-21-2009, 06:57 AM
All I can say is that I agree with geo, I don't think the GPU makes that much difference in Modeler...at least I've seen practically no improvement (or very little) between my Quadro and the old *** ATIs at uni :S

IMI
12-21-2009, 08:38 AM
Speaking of video memory, has anyone yet made an app that shows how much video memory your video card is actually using?
Every now and then I search for such a thing and always come up empty.
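For what it's worth, on Nvidia cards something like the sketch below might do it - the nvidia-smi tool ships with the driver and can report used memory on reasonably recent driver versions. The parsing here is my own guess at the output shape, so treat it as a starting point rather than a polished app:

```python
import subprocess

def parse_used_mib(csv_cell):
    """Turn a CSV cell like '312 MiB' into an integer MiB count."""
    value, unit = csv_cell.strip().split()
    if unit != "MiB":
        raise ValueError(f"unexpected unit: {unit}")
    return int(value)

def query_used_vram():
    """Ask the driver for used video memory, one entry per GPU (needs nvidia-smi)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [parse_used_mib(line) for line in out.splitlines() if line.strip()]
```

On a single-card system query_used_vram() would return something like a one-element list of MiB in use - assuming the driver exposes the query at all.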

Sekhar
12-21-2009, 09:09 AM
I recently switched from a low end Quadro to nVidia GTX 275 and have been using it for a few weeks now. Like Neverko said, I saw big speedups in some apps and next to nothing in others. CG examples: big improvement in After Effects previews (turned out to be invaluable in a recent project), 3DC, and to some extent CORE (tried only a little on Q4R3 though). Obviously incredible improvement in game performance, though I guess you don't care about that. Everything is generally snappier.

Make sure of a couple of things though before you get a card. Space: these cards (like 275) are really big, some cases may not have the space; Power supply: the newer cards can be real power hogs when you stress them, though at normal usage the requirements are minimal - I didn't upgrade my PS for the 275.

JonW
12-21-2009, 12:07 PM
Important issue that Sekhar brought up.

Look at both idle & load power consumption. You may be able to settle for a card that is a notch or two down from the top & save a lot of juice - especially if you are using the box for rendering & it's doing stacks of overnight rendering.
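To put rough numbers on it (the wattages, render hours, and electricity price below are made-up assumptions for illustration, not measurements of any real card):

```python
# All numbers are assumptions: a hypothetical top-end card vs. one a
# notch down, in a box doing overnight renders most of the year.
HOURS_PER_NIGHT = 10
NIGHTS_PER_YEAR = 250
PRICE_PER_KWH = 0.15          # assumed electricity price per kWh

def yearly_cost(draw_watts):
    """Energy cost of a card drawing draw_watts through every render night."""
    kwh = draw_watts / 1000 * HOURS_PER_NIGHT * NIGHTS_PER_YEAR
    return kwh * PRICE_PER_KWH

top_card, smaller_card = 220, 150     # assumed draw in watts
saving = yearly_cost(top_card) - yearly_cost(smaller_card)
print(f"{saving:.2f} saved per year")   # prints "26.25 saved per year"
```

Not a fortune, but it's money for nothing if the smaller card is fast enough for your viewports - and during CPU-only rendering the GPU is mostly idling anyway.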


It would be great if someone could give some rough benchmarks on both Quadro & Games cards in a few typical usage situations with LW.

I think a lot of people would buy a Quadro if they knew it made a real difference, but no one wants to waste money as these cards are expensive & the money could be better spent elsewhere.

Manveer Dhillon
12-21-2009, 01:12 PM
Awesome pointers, thanks for the feedback all. I was leaning heavily towards the latest ATI HD 5970, but I think I'll look into the GTX 285 or GTX 295...oh wait, the 295 is dual GPU isn't it. I'll look to invest in the 285.

Titus
12-21-2009, 01:16 PM
I've been involved with another company on "realtime rendering" using MachStudio (http://www.studiogpu.com/). This software is bundled with a top-of-the-line ATI card, and someone discovered it can run on any Nvidia card - and it turns out the rendering is 2x-4x faster on Nvidia than on the ATI card.

Manveer Dhillon
12-21-2009, 01:44 PM
i've been involved with another company on "realtime rendering" using machstudio (http://www.studiogpu.com/). This software is bundled with the top of the line ati card, and someone discovered it can run on any nvidia card, and it happens the rendering is 2x-4x faster on nvidia than the ati card.

wow!

biliousfrog
12-21-2009, 02:33 PM
It would be great if someone could give some rough benchmarks on both Quadro & Games cards in a few typical usage situations with LW.

I think a lot of people would buy a Quadro if they knew it made a real difference, but no one wants to waste money as these cards are expensive & the money could be better spent elsewhere.


Search the tech section of CGTalk. A guy on there tested some of the high-end GeForce and Quadro cards with various 3D/CG apps and the differences were negligible. Considering that the Quadro equivalents of the GTX 275 and 285 are several times the cost but the only difference is the driver...it seems a bit weird that they still sell any. Apparently Maya can be quite unpredictable with anything other than a 'workstation' card, but it seems to be the only app where there's a significant difference.

Sekhar
12-21-2009, 03:41 PM
It would be great if someone could give some rough benchmarks on both Quadro & Games cards in a few typical usage situations with LW.

I think a lot of people would buy a Quadro if they knew it made a real difference, but no one wants to waste money as these cards are expensive & the money could be better spent elsewhere.

I don't have any numbers with LW, but I did run Futuremark 3DMark06 (a pretty popular benchmark) on both the cards to make an overall comparison.

Quadro FX550 - 842 3DMarks (not a typo!)
GTX 275 - 11,705 3DMarks

That's like a 14x speedup for me. :) The FX550 is really low end, but still - I never expected it to be this big.

It's easy to look at regular programs that don't really take advantage of the card (OGL/CUDA/whatever) and assume there's not a speedup, so I'd suggest checking out some benchmarks relating to your apps to make a decision.

Andyjaggy
12-21-2009, 03:48 PM
It is also rare that you'd require hundreds of MB of textures in an OpenGL display...even if loading normal and displacement maps.

Okay that's just funny. Haha. I guess I'm pretty rare.

Let's say I have 1 object with 4K maps for diffuse, specular, bump, and displacement. Then let's say I have 5 comparable objects in the scene...it starts to add up, doesn't it? And that's pretty reasonable stuff we're talking about.
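Rough arithmetic for that (assuming uncompressed 32-bit RGBA maps and ignoring mipmaps and any compression the driver might apply):

```python
# One uncompressed 32-bit RGBA map at 4K: 4096 x 4096 texels x 4 bytes each.
BYTES_PER_TEXEL = 4
map_bytes = 4096 * 4096 * BYTES_PER_TEXEL    # 64 MiB per map
maps_per_object = 4                          # diffuse, specular, bump, displacement
objects = 5

total_mib = map_bytes * maps_per_object * objects / 2**20
print(total_mib)  # 1280.0 MiB for 20 maps
```

And mipmaps would add roughly another third on top of that, so if anything the figure is conservative.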

JonW
12-21-2009, 04:42 PM
My situation is, say I have a 10 million poly scene and a few GB of textures; when it's loaded into Layout I have 1 viewport showing Textures, 1 showing Wireframes, & 2 showing Bounding Boxes. Windows Task Manager shows 4 GB of RAM usage.

If I use a GTX260 or GTX285 and there is, say, a 0.5 second delay in redrawing the viewport when I do something, would a Quadro be an improvement? That's the sort of situation I mean.


I don’t believe the viewports will show better quality textures using a Quadro, but I really have no idea! ???

Lightwave 9.6 only uses the CPU for rendering, so I will assume that a Quadro will be of no benefit there?


If someone could say, for example, "when I move stuff around in Layout the Quadro is quicker, textures look better, rendering is quicker", then I will get a Quadro. But I would rather buy another computer if its advantages outweigh the benefits of an expensive Quadro.

Manveer Dhillon
12-22-2009, 12:19 AM
Yeah, I'm dreading a delay in large poly scenes with heavy texturing...I'd like to manipulate a large scene and still have the viewport settings relatively high. At the moment I'm looking at cards with high GPU clock speeds.

biliousfrog
12-22-2009, 03:08 AM
Okay that's just funny. Haha. I guess I'm pretty rare.

Let's say I have 1 object, with 4K maps for diffuse, specular, bump, and displacement. Then let's say I have 5 comparable objects in the scene............ starting to add up isn't it, and that's pretty reasonable stuff we are talking about.

I use 4K textures occasionally but I don't need them displayed in OpenGL at 4K; my screen is only 2K, and that's before the software has made the preview window even smaller. I think there's a bit of confusion around what the texture memory on graphics cards is actually for: displaying textures in OpenGL, not storing textures for rendering.

So do you actually need 20 x 4K maps in your OpenGL display?...are your OpenGL texture display settings actually set to 4096x4096?

I've actually got mine set at 4k as I'm working with a satellite image which I've projected onto a plane and I need to keep zooming in on certain areas to line up geometry. I only need it that size in the viewport because I need to line things up accurately with the map.

biliousfrog
12-22-2009, 03:19 AM
I don't have any numbers with LW, but I did run Futuremark 3DMark06 (a pretty popular benchmark) on both the cards to make an overall comparison.

Quadro FX550 - 842 3DMarks (not a typo!)
GTX 275 - 11,705 3DMarks

That's like a 14x speedup for me. :) The FX550 is really low end, but still - I never expected it to be this big.

It's easy to look at regular programs that don't really take advantage of the card (OGL/CUDA/whatever) and assume there's not a speedup, so I'd suggest checking out some benchmarks relating to your apps to make a decision.

You're comparing apples and oranges. An equivalent Quadro to the GTX 275 is somewhere between an FX4600 ($1,999) and FX5800 ($3,499).

Andyjaggy
12-22-2009, 09:38 AM
I use 4K textures occasionally but I don't need them displayed in OpenGL at 4K; my screen is only 2K, and that's before the software has made the preview window even smaller. I think there's a bit of confusion around what the texture memory on graphics cards is actually for: displaying textures in OpenGL, not storing textures for rendering.

So do you actually need 20 x 4K maps in your OpenGL display?...are your OpenGL texture display settings actually set to 4096x4096?

I've actually got mine set at 4k as I'm working with a satellite image which I've projected onto a plane and I need to keep zooming in on certain areas to line up geometry. I only need it that size in the viewport because I need to line things up accurately with the map.

Actually yes, I do have my display settings set to 4096x4096, especially when I'm painting my textures in Modo, and 20 x 4K maps would only be about 5 objects loaded. But whoever needs more than 5 objects, jeez, that's like crazy talk.

I guess if you were never going to zoom into specific areas of the object and work on it you wouldn't need your OpenGL resolution that high, but I don't even know why we are arguing about this - I obviously need it, and you obviously don't. Different people have different needs.

Sekhar
12-22-2009, 09:38 AM
You're comparing apples and oranges. An equivalent Quadro to the GTX 275 is somewhere between an FX4600 ($1,999) and FX5800 ($3,499).

My point wasn't that GTX is faster than Quadro, but that two cards with such a huge difference in performance can appear similar when running apps that don't really take advantage of the GPU. If you ran LW 9.6 with FX550 and then with GTX 275 say, you'd have barely noticed the difference and easily concluded that GPU doesn't really make a difference; do the same with AE/3DC/CORE and you'd see the light.

biliousfrog
12-22-2009, 11:40 AM
Actually yes, I do have my display settings set to 4096x4096, especially when I'm painting my textures in Modo, and 20 x 4K maps would only be about 5 objects loaded. But whoever needs more than 5 objects, jeez, that's like crazy talk.

I guess if you were never going to zoom into specific areas of the object and work on it you wouldn't need your openGL resolution that high, but I don't even know why we are arguing about this, I obviously need it, and you obviously don't. Different people have different needs.

I'm not arguing about it, you obviously have a need for lots of high-resolution OpenGL textures. My original point was that texture memory is not as important as many are led to believe. Even with 20 x 4K textures you're looking at around 400-600MB?

My current scene is using 149 images (1161MB), I've got 512MB of video RAM and my scene plays back in realtime at a reasonable frame rate. I've even been moving it around on my girlfriend's 300 laptop to show the client without any problems, and it's only got integrated Intel graphics with 2GB of system memory.

As for how Modo handles things, I can't say as I've only played with the demo, but the way that texture memory is used is exactly the same. If the card runs out of memory it uses system memory which, unlike the old days of PCI and AGP, is almost instantaneous.

So, going back to my original point: don't worry about how much RAM the card has, go for the fastest GPU. If you don't need all the extra memory you're not wasting your money, and if you need more you'll just use the system RAM that's sitting idle.

Manveer Dhillon
12-22-2009, 11:54 AM
What kind of video card(s) do you use, Andyjaggy, to get that epic OpenGL performance?