Graphics card suggestions



Bonedaddio
11-19-2011, 06:54 AM
I'm looking at a new video card (that'll help with LW11).
See specs in my sig; I'm going to upgrade the PC, which is currently running a Radeon 5750, to something nVidia-based.
According to what I've read, I can take many of the newer nVidia cards (with 1 GB or more of GDDR5 memory and 96 or more CUDA cores) and unlock them for Mercury Playback Engine acceleration in Premiere Pro CS5. Great, that's good.
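
From what I've read, the unlock amounts to adding the card's name to the cuda_supported_cards.txt file in the Premiere Pro CS5 install folder. For anyone who wants to check what a card actually reports before buying, here's a minimal CUDA sketch that prints cores and memory (the cores-per-SM mapping is hard-coded for Fermi-era parts, so treat it as an assumption for anything newer):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Fermi-era parts: compute 2.0 has 32 CUDA cores per SM, 2.1 has 48.
        // (Assumed mapping; newer architectures differ.)
        int coresPerSM = (prop.major == 2 && prop.minor == 1) ? 48 : 32;
        printf("%s: ~%d CUDA cores, %.0f MB memory (compute %d.%d)\n",
               prop.name,
               prop.multiProcessorCount * coresPerSM,
               prop.totalGlobalMem / (1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}

Build it with "nvcc check_gpu.cu -o check_gpu" and run it on the machine in question.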

While I'm at it, should I be looking at, say, a Quadro 2000D or something similar to boost performance in LightWave? I don't really care much about gaming performance; I just want better preview and rendering times.

A high-end gaming card vs. the Quadro 2000D is close to a wash in terms of $$.

I have faith that someone here will have a suggestion.

Thanks,

Les

oobievision
11-19-2011, 08:58 AM
Well, the Quadro is an awesome card. If you have the money, I'd go with the Quadro 6000 with 6 GB GDDR5. But your motherboard chipset should really dictate which brand of graphics card you're heading toward: for example, if you have an ATI chipset on your MB, I would get an AMD FirePro or the latest Radeon card; if you have an nVidia chipset, then go with a Quadro or the GTX 590. Now, I'm not sure if LightWave can utilize CrossFire or SLI to boost GL performance, but I'm sure that's a question for the community. If it does, go with 2 or 3 video cards and SLI/CrossFire them instead of buying 1 big expensive one.

Bonedaddio
11-19-2011, 09:53 AM
I don't have the wallet for the 6000, but the Quadro 2000D at around $430 is attractive (vs. the GTX 580 at approx. $500, I think), but only if I'll get some real, useful advantage out of it.
My Core i7 system has an Intel chipset. And I was not aware that SLI or CrossFire would boost LightWave performance?!

Thanks,
Les

stiff paper
11-19-2011, 11:02 AM
I suspect that unless OpenGL performance has been improved in LW11 (I don't think it has, but it's best to ask somebody who has the pre-release, to be sure), you won't see any meaningful improvement with any card above the decent nVidia card you're already looking at.

(That might change with LW12, given that things seem to have dramatically improved, LW-development-wise.)

Wade
11-19-2011, 11:06 AM
I suspect that unless OpenGL performance has been improved in LW11 (I don't think it has, but it's best to ask somebody who has the pre-release, to be sure), you won't see any meaningful improvement with any card above the decent nVidia card you're already looking at.

(That might change with LW12, given that things seem to have dramatically improved, LW-development-wise.)

I think they said at the presentation that the OpenGL was sped up 3 to 4 times. So LW11 might very well make use of a decent nVidia card.

Bonedaddio
11-20-2011, 09:17 AM
So by decent nVidia card, are we talking about one of the Quadros, or just the consumer gaming nVidia varieties?
I have LW11, and I can't wait to get a better card for it...

All input is greatly appreciated! Anyone out there running a lower-end Quadro card? In my case, information about the higher-end cards, the 4000 on up, won't help me much.

Thanks,
Les

Bonedaddio
11-25-2011, 06:03 AM
Happy Black Friday!!

So... no one on the forum (except for the previous responders) has any info re: what graphics card to get for LW11? No one running a Quadro?
This seems to be an underwhelming response; even most carpenters will tell you which hammer and tape measure they use or like...

Les

JonW
11-25-2011, 06:32 AM
I'm running a 24" & 30" off a GTX280. I haven't done anything massive & it's fine. A GTX560 would be good & doesn't cost much.

Dexter2999
11-25-2011, 06:34 AM
In the past I got comparable results from my GeForce card and my Quadro card, and the GeForce was a third of the price.

Now, having said that, performance may change drastically with applications that use technology other than OpenGL. I don't really have much knowledge of these, but OpenCL, CUDA, or whatever may be used in compositing applications, and that is where you may find a drastic difference in performance.

But like I said, I don't really know. For strictly LW use, save some money and get a good GeForce card.

biliousfrog
11-25-2011, 06:55 AM
Aside from certain CAD applications and, arguably, Maya, you'll not see any advantage to using a Quadro. You'll certainly see a severe drop in performance when comparing a $500 Quadro with a $500 GeForce, because the chipset in the Quadro will be vastly inferior. If you want to go the Quadro route, you'd need one of the top-end models to compete with the $500 GeForce and, as Dexter2999 stated, you'll be paying at least 3x the price.

I have a Quadro card from a few years ago and it's really stable, but the performance was never anything to get excited about.

MUCUS
11-25-2011, 07:49 AM
Same experience here: if you are upgrading your graphics card for LightWave, save your money and go for a good GeForce! :)

Andy Meyer
11-25-2011, 08:06 AM
I recommend an nVidia 5xx gaming GPU.
I upgraded my old 285 to a 580 3 GB version. I did the upgrade for some late-night BF3 sessions... but it's a considerable difference in LW too.

Bonedaddio
11-25-2011, 10:56 AM
Wow, THANKS SO MUCH for the good, usable advice, all!!
I've searched these forums and googled quite a bit, and as near as I can tell you're all absolutely spot-on with your comments. So how is it that LW doesn't take advantage of "Pro" level graphics cards... that's a little odd, isn't it?

I found info on using GDDR5 non-Quadro nVidia cards for Adobe Mercury playback acceleration, so I'm pretty sure I'll end up going with the $250 GTX 560 Ti video card (or the 570, but it's $100 more). Really good price/performance, AFAICT.

ShadowMystic
11-25-2011, 11:47 AM
Good choice. I currently use a 560ti and am very happy with it for both Lightwave and gaming.

biliousfrog
11-25-2011, 12:15 PM
...So how is it that LW doesn't take advantage of "Pro" level graphics cards... that's a little odd, isn't it?


The 'Pro' tagline is very misleading. The cards themselves are barely any different between Quadro/GeForce and Radeon/FireGL. What you're paying the extra 66%+ for (more or less) is drivers, and those drivers only enable features that are hardly used outside of very high-end CAD applications. There used to be a third-party tool for enabling the 'special' features by soft-modding a 'gamer' card into a workstation card, but manufacturers got smart to that.

From a 3D application perspective, the features you're missing are almost redundant. Anti-aliasing has been available within LightWave's OpenGL for a few years now. Another feature was the ability to handle overlapping GUI windows better, which wasn't often needed outside of high-end applications... that also hasn't been an issue for several years. The only other thing is certified drivers, which are more stable, but not 66% more stable.

Rayek
11-25-2011, 03:35 PM
Not entirely true at this point. nVidia crippled the 4xx and 5xx lines of consumer cards in both OpenGL and CUDA performance. When the 480 came out, I replaced my GTX 280 and expected two or three times the OpenGL performance. Much to my surprise, OpenGL performance was four times worse than my old 280!

Other 3D users (Maya, Rhino, etc.) noticed this as well.

By now it has been established that the OpenGL performance of the 4xx and 5xx lines is abysmal because nVidia intentionally crippled the hardware. When I asked nVidia about the performance issues at the time, their support wrote me that this is 'expected behavior'.

So I switched to an ATI 5870, and have great OpenGL performance now. The price/performance ratio is best for the FirePro V7900, at only about $700-$800 - at least if you are interested in good OpenGL performance in 3D apps. It will not help you with CUDA (Mercury) support at all.

You will have to make a choice, based on your individual needs.

Even nVidia Quadro cards are crippled in CUDA performance. And the Tesla line does not include a video output at all.

So, as things stand, nVidia's line-up would be:

- 4xx/5xx consumer cards (crippled OpenGL / crippled CUDA)
- Quadro pro cards (fast OpenGL / crippled CUDA)
- Tesla (no video / full CUDA performance)

And ATI's:

- 6xxx consumer cards (good OpenGL / no CUDA, only OpenCL)
- FirePro (fast OpenGL / no CUDA)

So it's basically a no-win situation, unless you have money to burn on both a Quadro and a Tesla card to get both great CUDA and OpenGL performance. (Mind you, even the crippled CUDA performance in a 4xx/5xx card is quite impressive, as seen for example in Octane and Premiere/After Effects - my own experience.)

And for those who think a solution would be to combine a good FirePro with a 5xx card for CUDA: forget about it. nVidia made sure to block their drivers in any hybrid combo.

You may want to have a look at this review:
http://www.cgchannel.com/2011/10/review-professional-gpus-nvidia-vs-amd-2011/#comment-16967

The FirePro V7900 is the (marginal) winner in the LightWave test, though it must be said that LW's OpenGL performance is lackluster at best in comparison with the other 3D apps. But still - why buy a $4500 Quadro if you get better OpenGL performance on average with a V7900 at $800?

Cinebench scores can also help in figuring out the relative performance of video cards:

http://www.cbscores.com/

Although Cinema 4D has always favoured ATI a lot.

Notice that ATI cards on average score far better than nVidia. My 5870 scores 69 fps in OpenGL; the same system scored 45 fps with the GTX 480, and 49 fps with the GTX 280. You can imagine my surprise - at first I thought something was wrong with the hardware or drivers. And Blender's viewport practically ground to a halt.

For a direct comparison of OpenGL performance between consumer cards and Quadros, check out the chart at the bottom of this review:
http://www.notebookcheck.net/Review-AMD-Radeon-HD-6990M-Graphics-Card.60150.0.html

And this one: http://www.videocardbenchmark.net/video_lookup.php?gpu=FirePro+V7900

However, there might be some light at the end of this dark tunnel (at last). nVidia is coming out with a "new" line of cards that, LO AND BEHOLD, are great at BOTH OpenGL and CUDA! Gasp! Shock! (ironic undertone intended)

Behold! the mighty MAXIMUS:

http://www.nvidia.com/object/maximus.html

nVidia kills me.

Still, I hated being forced to give up on CUDA. I use Premiere quite a bit, but I have much better use for good OpenGL performance in my work. So I am running ATI at the moment, which works great, but I do miss CUDA support at times. It all depends on your specific requirements and preferences.

A good, affordable nVidia combo for decent OpenGL and CUDA performance would be a GTX 285 for your OpenGL (or an inexpensive Quadro) plus a GTX 580 for CUDA in one system.

Rayek
11-25-2011, 03:47 PM
PS: One more thing. Before I switched to Win7 I was using modded 5870 --> FirePro V8800 drivers. Although I lost hardware video acceleration and OpenCL, the OpenGL performance went through the roof compared to the consumer drivers. It certainly made for a very noticeable difference in viewport performance.

But again, less so in LightWave, and way more so in other 3D OpenGL applications.

Dexter2999
11-25-2011, 04:26 PM
why buy a $4500 Quadro if you get better OpenGL performance on average with a V7900 at $800?

In my case the Quadro card was purchased so the machine would be "certified" as meeting minimum requirements in order to receive Avid support. The first question they ask is "does your system meet minimum requirements?" If you say "no," that is where the help stops.

I was just looking for the requirements of other software. I see that NUKE requires/recommends a Quadro card. Fusion has no requirements listed.

xxiii
11-26-2011, 04:23 PM
I generally concur. I've traditionally been an ATI user, but I'm now using an nVidia GTX 480 (gamer card) with 1.5 GB. It works fine in LW, and generally seems to work better than my previous ATIs (most recently a 4870 and a 1600XT). My understanding is that nVidia's OpenGL implementation is sounder than ATI's, and this has been my experience so far. Unless you need the extra memory for working with huge poly counts, a gamer card should be fine.

I also understand that both ATI's and nVidia's pro cards are really almost completely identical to their gamer cards, with only driver differences, perhaps more memory, and maybe very minor hardware differences (mostly to make sure the driver isn't "fooled"). Since there are a number of OpenGL-based games, these cards still need to perform when playing those games, and the driver generally detects this. Both companies "cripple" their gamer cards when a non-game OpenGL application is detected. From the above comments, it sounds like nVidia is tightening up on this and ATI is loosening up.

I've generally been happy with my GTX 480. The drivers (including DirectX) seem more stable and reliable than my ATI experience of the last few years, and I say this as a (possibly former) ATI fan. I got a bit tired of reading "improved Crysis performance by .00182% in this release" while the OpenGL geometry error was still there in another program, again. Not that nVidia isn't also guilty of trying to get every last FPS out of games, because 182 FPS just isn't enough when, with only 50,000 more man-hours of driver development, we can get that up to 183 FPS. (But at least my non-game apps and non-popular games don't seem to be suffering the consequences like they were with ATI.)

I have some hope for nVidia, as their 3D Vision product used to work only in full-screen mode with the gamer cards ("you want 3D in a window? You need a Quadro."), but they relented and now it works in windowed mode. I'd love to be able to use 3D Vision with LightWave someday, but I suspect it only works in DirectX mode (at least on non-Quadros). (They probably figured that was enough of an incentive, and didn't need to force people to run Google Earth in full screen to encourage the pros to buy a Quadro.)

It looks kind of silly when I have to reach over the 3D Vision glasses and grab the red/blue ones to look at something in 3D in LightWave.

Hopper
11-26-2011, 04:59 PM
Good choice. I currently use a 560ti and am very happy with it for both Lightwave and gaming.
Ditto. I had a credit with EVGA and bought a second one for some sweet BF3 SLI action. Solid card, stable driver, low heat, great output, and cheap.

@Rayek...
The OpenGL issues are not really in play with the 5 series any more. The 3DMark and FurMark tests show nVidia's 5-series performance to be slightly better than ATI's 5k series since the new drivers came out, so that is no longer the case. But I think either way, LW performance is going to be more than what most are going to require with either choice.

kmacphail
11-26-2011, 07:29 PM
- Tesla (no video / full CUDA performance)


Some models of Tesla do in fact have a functioning DVI port. At work, some of our FX artists are evaluating a two-monitor setup, the first monitor driven by a Quadro and the second driven by a Tesla.

Cheers,

-Kevin

3dWannabe
01-11-2012, 06:59 PM
I recommend an nVidia 5xx gaming GPU.
I upgraded my old 285 to a 580 3 GB version. I did the upgrade for some late-night BF3 sessions... but it's a considerable difference in LW too.

How has it been going with the GTX-580?

I also bought a 3 GB 580 to replace a GTX 285, but I haven't removed it from the box due to some concerns from this thread:

http://forums.newtek.com/showpost.php?p=1210364&postcount=27

If you have noticed any improvements or degraded performance, please list which 3D app this affects.

Thanks!

Rayek
01-11-2012, 08:56 PM
@Rayek...
The OpenGL issues are not really in play with the 5 series any more. The 3DMark and FurMark tests show nVidia's 5-series performance to be slightly better than ATI's 5k series since the new drivers came out, so that is no longer the case. But I think either way, LW performance is going to be more than what most are going to require with either choice.

As far as I know and have read, the issue is still there in the Fermi 5xx series. It still affects double-sided polygon performance, and it really depends on whether the software makes use of certain OpenGL functions or not - when it does, performance is still 2-3 times as slow as the previous generation. And don't forget that the nVidia drivers (like the ATI drivers) check for certain app/game names and allow certain functions based on that. Basically, I do not trust those generic benchmarks anymore.

But here are some real-world numbers for a benchmark scene (a 393,226-vert object) in Blender:

i7 @ … | nVidia GTX @ … | 1920x1200
Solid mode, 5 subs applied: 23 fps
Textured mode, 5 subs applied: 37 fps

i7 @ … | ATI 5870 1 GB | 1920x1200
Solid mode, 5 subs applied: between 130 and 145 fps (too fast for the eyes to pin down a more exact number)
Textured mode, 5 subs applied: 117 fps

i7 2600K | GeForce 550 Ti | 1920x1200
Solid mode, 5 subs applied: ~50 fps (extrapolated; with the 5-sub modifier active it's a horrid ~20 fps)
Textured mode, 5 subs applied: ~50 fps (extrapolated; with the 5-sub modifier active it's a horrid ~33 fps)

(The last set is based on numbers benchmarked in edit mode, which gives results similar to working in solid mode with the modifier applied.)

I am sorry, but those numbers just do not make sense at all. The nVidia systems above should in theory outperform my system.

But again, the whole thing is quite confusing, because nVidia keeps its mouth shut tight about it, and there are conflicting experiences reported on the web. And Autodesk isn't telling either. So reliable reports are few and far between, and application-dependent - which complicates matters even more.

Anyway, I am planning to get a 7970 3 GB soon, and I will report my experience with that.

@Hopper: Agreed, most users aren't going to notice much. But for those who are affected, it is very frustrating.

Matt
01-12-2012, 02:30 AM
LightWave doesn't currently utilise many features on high-end cards; you will see some speed increase, but not enough to warrant spending a fortune.

You're best off buying a really fast processor.

The only advice I would give: go nVidia, avoid ATI. Oh, and EVGA cards are very good - my last two have been from them, never a problem, very well made.

JonW
01-12-2012, 02:44 AM
Thanks for clearing that up.

Should we be going for cores or speed?

I'm reasonably happy with my GTX 280, but what would be a good upgrade?

prometheus
01-13-2012, 03:03 AM
You might think twice and take a look into what the CUDA cores mean.

Recently Jascha (Jawset), author of the Turbulence fluids plugin for LightWave, added CUDA support, which means that you can choose either CPU or CUDA for the simulation.
In my case the simulations go way faster with CUDA, and a difference of many minutes can be seen.
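
To give a rough idea of why the core count matters there: a fluid sim mostly sweeps stencil updates over a grid, and on the GPU every cell gets its own thread. Here's a toy CUDA sketch of one diffusion step, just to illustrate the pattern - not Jawset's actual code:

#include <cstdio>
#include <cuda_runtime.h>

// One Jacobi-style diffusion step over an n x n grid - the kind of
// stencil a fluid solver evaluates over and over. Each thread updates
// one cell, so thousands of CUDA cores work the grid in parallel.
__global__ void diffuseStep(const float* in, float* out, int n) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= n - 1 || y >= n - 1) return;
    int i = y * n + x;
    // New value is the average of the four neighbours.
    out[i] = 0.25f * (in[i - 1] + in[i + 1] + in[i - n] + in[i + n]);
}

int main() {
    const int n = 512;
    const size_t bytes = n * n * sizeof(float);
    float *a, *b;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMemset(a, 0, bytes);

    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    for (int step = 0; step < 100; ++step) {
        diffuseStep<<<grid, block>>>(a, b, n);
        float* tmp = a; a = b; b = tmp;  // ping-pong the buffers
    }
    cudaDeviceSynchronize();
    printf("ran 100 diffusion steps on a %dx%d grid\n", n, n);
    cudaFree(a);
    cudaFree(b);
    return 0;
}

A card with hundreds of CUDA cores runs all those per-cell updates in parallel, which is where the minutes of difference come from.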

Also, the Bullet engine core which LightWave now uses for its dynamics engine is supposed to get CUDA support, as far as I know; I don't know if and when that would be implemented in LightWave 11.

CUDA also helps with encoding animations and a lot of other stuff.
So go for as many CUDA cores as possible, perhaps, unless you have an equally fast CPU.
Edit: I have never considered ATI cards - I was warned off them several years ago, and I constantly see people complaining about issues.

Michael