View Full Version : NVIDIA GTX 260 coming soon with 1792MB Memory



Ernest
03-14-2009, 04:48 PM
Obviously, higher-density memory chips are the key to the larger memory size. Actually, the 4M x 32-bit x 8-bank K4J10324QD memory chip isn't anything new; it just hasn't been fully utilized for various technical reasons. But now manufacturers can build cards with this much memory without much trouble. With 16 chips, 2GB of memory is also practical in the near future.

http://en.expreview.com/2009/03/14/exclusive-galaxy-non-reference-gtx260-vga-card-with-1792mb-memory.html
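
For anyone checking the math, the density works out like this (just a quick sketch of the arithmetic, assuming the 4M x 32-bit x 8-bank organization quoted above):

/* Quick sanity check of the numbers above (illustrative only, just arithmetic). */
#include <stdio.h>

int main(void)
{
    /* Chip organization as quoted: 4M addresses x 32 bits x 8 banks */
    const long long bits_per_chip = 4LL * 1024 * 1024 * 32 * 8;
    const long long mb_per_chip   = bits_per_chip / 8 / (1024 * 1024);   /* 128 MB */

    printf("Per chip : %lld MB\n", mb_per_chip);
    printf("14 chips : %lld MB\n", 14 * mb_per_chip);   /* 1792 MB, as on the Galaxy card */
    printf("16 chips : %lld MB\n", 16 * mb_per_chip);   /* 2048 MB = 2 GB */
    return 0;
}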

mattclary
03-15-2009, 08:06 AM
There had better be a VERY small price increase over the standard model. I am not at all impressed by the benchmarks.

AbnRanger
03-15-2009, 08:46 AM
I may have to go over to the Nvidia camp after all. I've been an ATI guy for a long time, and I recently bought an ATI 4850 (rated the best "bang for your buck" card by several sites). For most things it works fine, but on Vista 64 I can't seem to get it to cooperate with Combustion much.

AbnRanger
03-16-2009, 05:33 AM
I was just reading some reviews of workstation cards in last month's 3D World and at Tom's Hardware Guide. It seems there is a pretty sizable performance gap between them and their consumer-card equivalents after all.
One of the issues is that OpenGL performance is pitiful on consumer cards comparatively. It seems they are "price-gouging," but driver development, software certification, and support are what they use to justify charging 5-20 times as much. It just seems like a rip-off to charge that large a premium for different drivers.

mattclary
03-16-2009, 09:55 AM
One of the issues is that OpenGL performance is pitiful on consumer cards comparatively.

Only if the software makes use of the additional API calls that the pro cards provide.

Current LightWave will be no faster on a Quadro than on a consumer card, but surely you know this; you aren't exactly new around here...

The jury hasn't even been seated for CORE yet though.

AbnRanger
03-17-2009, 03:17 AM
Only if the software makes use of the additional API calls that the pro cards provide.

Current LightWave will be no faster on a Quadro than on a consumer card, but surely you know this; you aren't exactly new around here...

The jury hasn't even been seated for CORE yet though.

Actually, I wasn't aware of that in regard to LW. The other major 3D applications definitely do benefit... in 3ds Max tests the workstation card was twice as fast in OpenGL as its consumer counterpart (3ds Max is actually more dedicated to DirectX), and Maya was 6 times faster!
I always thought the benefit was negligible, but these latest tests seem to indicate otherwise... and I don't know of any softmods for these newer-generation cards (googled and couldn't find anything).

mattclary
03-17-2009, 07:55 AM
Actually, I wasn't aware of that in regard to LW. The other major 3D applications definitely do benefit... in 3ds Max tests the workstation card was twice as fast in OpenGL as its consumer counterpart (3ds Max is actually more dedicated to DirectX), and Maya was 6 times faster!
I always thought the benefit was negligible, but these latest tests seem to indicate otherwise... and I don't know of any softmods for these newer-generation cards (googled and couldn't find anything).

Ask around here. Many people have used Quadros and found no improvement with LightWave. For some apps I don't disagree that there is a gain, just not for LightWave. A program has to be written to take advantage of those cards; that's why games suck on them. Game cards are good at games, pro cards are good at 3D apps (other than LightWave and, I would bet, Modo). LightWave (still) relies mainly on the CPU for its display. I suspect this is because of LightWave's roots as an affordable app. NewTek probably figured few people using LightWave would be dropping several grand (back in the day) on a pro video card.

Lightwolf
03-17-2009, 08:19 AM
Only if the software makes use of the additional API calls that the pro cards provide.
There aren't any (unless you need to control the extra hardware, such as the SDI out present on some boards, but that's unrelated to OpenGL).
However, the drivers are optimized for a different usage pattern (i.e. multiple viewports as opposed to a single viewport), and there are small hardware differences (anti-aliased lines tend to be a lot faster on Quadros).
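
For the curious, the anti-aliased line path in question is just the usual legacy-GL setup below; a rough sketch (the function name is made up), and nothing in it is Quadro-specific, the difference is purely in how the driver and hardware handle it:

/* Standard legacy-OpenGL anti-aliased line setup; nothing here is
   Quadro-specific, the speed difference comes from the driver/hardware.
   Assumes a current GL context created elsewhere. */
#include <GL/gl.h>

void enable_antialiased_lines(void)
{
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glLineWidth(1.0f);
}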

Cheers,
Mike

kfinla
03-17-2009, 08:57 AM
Like Mike said: Quadros (pro cards) break the screen up into layers and refresh only what has changed, i.e. just the viewport you're working in, for fast performance, since only a fraction of the screen is being redrawn. Gaming cards refresh the entire screen all the time, which is good for games, whereas Quadros get bogged down with all the chaotic stuff changing all over the screen at once. Sadly, nobody seems to want to write some sort of hybrid behavior that changes based on what is being displayed; I assume because it would eliminate product lines.
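
Very roughly, the difference boils down to something like the dirty-flag pattern below; this is only a toy sketch (the draw call is hypothetical), not how any actual driver works:

/* Toy illustration of the two refresh strategies; purely illustrative. */
#include <stdbool.h>

#define NUM_VIEWPORTS 4

static bool viewport_dirty[NUM_VIEWPORTS];   /* set when a view's contents change */

void redraw_pro_style(void)                  /* redraw only what changed */
{
    for (int i = 0; i < NUM_VIEWPORTS; ++i) {
        if (viewport_dirty[i]) {
            /* draw_viewport(i);   hypothetical draw call */
            viewport_dirty[i] = false;
        }
    }
}

void redraw_game_style(void)                 /* redraw everything, every frame */
{
    for (int i = 0; i < NUM_VIEWPORTS; ++i) {
        /* draw_viewport(i);   hypothetical draw call */
        viewport_dirty[i] = false;
    }
}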

mattclary
03-17-2009, 11:02 AM
There aren't any (unless you need to control the extra hardware, such as the SDI out present on some boards, but that's unrelated to OpenGL).
However, the drivers are optimized for a different usage pattern (i.e. multiple viewports as opposed to a single viewport), and there are small hardware differences (anti-aliased lines tend to be a lot faster on Quadros).

Cheers,
Mike

How is it LightWave is not accelerated in any appreciable manner by pro cards?

I can't find the source now, but I read a while back that pro cards provide OpenGL api hooks that consumer cards do not. Part of OpenGL spec, just not implemented by the drivers on consumer cards. Maya and the "big boys" make use of those additional API calls, whereas LightWave does not.

Or so I heard.

Lightwolf
03-17-2009, 11:26 AM
How is it LightWave is not accelerated in any appreciable manner by pro cards?
Probably just "bad" coding - bad as in not updated for more modern pipelines.


I can't find the source now, but I read a while back that pro cards provide OpenGL api hooks that consumer cards do not. Part of OpenGL spec, just not implemented by the drivers on consumer cards.
I just had a look:
http://developer.download.nvidia.com/opengl/specs/nvOpenGLspecs.pdf
The only extra OpenGL hooks for Quadros are for the video output and genlocking/syncing multiple cards.

However, that doesn't mean the actual functions are implemented the same way. There is (relatively) little hardware difference and a big driver difference.
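
If you want to see what a given driver actually reports about itself, dumping the GL strings is a quick sanity check; a minimal sketch, assuming a current GL context already exists:

/* Print what the installed driver reports: vendor, renderer, version and
   the extensions it exposes. Assumes a current OpenGL context created
   elsewhere. */
#include <stdio.h>
#include <GL/gl.h>

void print_gl_driver_info(void)
{
    printf("Vendor     : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer   : %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version    : %s\n", (const char *)glGetString(GL_VERSION));
    printf("Extensions : %s\n", (const char *)glGetString(GL_EXTENSIONS));
}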

Cheers,
Mike

mattclary
03-17-2009, 12:04 PM
Some more info I found. Interesting read even if it is a little old.

http://www.nvidia.com/object/quadro_geforce.html

sbrandt
03-24-2009, 04:18 PM
I'd get one, but then I'd have to move next to Grand Coulee Dam.