PDA

View Full Version : Simple Video Card Comparison Test



Julian Johnson
09-07-2003, 10:32 PM
Arthur wrote:

perhaps you can post that test scene and results under its own topic. It would better alert other users, and provide a poor man's benchmark for people to generate results, especially with these new G5s. I know it's not officially a benchmark, but if people were to download this scene and post their screen resolution, Mac model and configuration, it would answer a lot of different questions for a lot of people. Thanks again!!

I haven't seen either the latest nVidia GF4xx or ATi 8xxx/9xxx cards in a Mac, but after trying, many months ago, to establish a test with Mike Breeden at Accelerate Your Mac that would accurately reflect a card's real-world performance in Lightwave, Mike concluded that for most of the time Lightwave OGL performance was CPU-bound. Or, at least, the calculations required to feed the cards were not capable of saturating the cards' capacities.

If you do some simple tests - orbiting two spheres around one another and measuring the frame rates in Layout - you can see that this supposition might be true. Here's an example from my DP800 with 1.5GB of RAM under 10.2.4:

ATi Rage 128 16MB

Poly Count @10000: 21fps
Poly Count @20000: 11fps
Poly Count @80000: 3.5fps

GeForce 3

Poly Count @10000: 36fps
Poly Count @20000: 14fps
Poly Count @80000: 4fps

What you normally see in Layout is a kind of 'threshold' above which the cards revert to relying on the CPU to derive the display. On my machine, with these two cards, that's at about the 20,000-poly mark. Above that point all the cards I've tried tend to perform about the same. Whilst you keep below that threshold you can see and feel a genuine difference in performance in Layout.
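For anyone who wants to poke at the same idea outside of Layout, here's a rough standalone sketch - it has nothing to do with the actual test scene, it's just plain GLUT drawing two orbiting spheres and printing the frame rate once a second, so the slice counts and the file name are purely illustrative, and it assumes you have the OpenGL/GLUT frameworks to build against. Raise the tessellation from the command line and you should see the same sort of ceiling where extra polygons stop costing the card anything and start costing the CPU:

/*
 * Hypothetical sketch (not the LightWave scene): orbit two spheres and
 * print frames per second for a given tessellation, to see roughly where
 * the frame rate stops depending on the card and becomes CPU-bound.
 *
 * Build on Mac OS X of this era:
 *   gcc ogltest.c -framework GLUT -framework OpenGL -o ogltest
 */
#include <stdio.h>
#include <stdlib.h>
#include <GLUT/glut.h>              /* use <GL/glut.h> on other platforms */

static int   slices   = 100;        /* two spheres of ~slices*slices quads each */
static int   frames   = 0;
static int   lastTime = 0;
static float angle    = 0.0f;

static void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)w / (h ? h : 1), 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

static void display(void)
{
    int now;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -6.0f);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);      /* spin the pair about the Y axis */

    glPushMatrix();
    glTranslatef(-1.5f, 0.0f, 0.0f);
    glutSolidSphere(1.0, slices, slices);
    glPopMatrix();

    glPushMatrix();
    glTranslatef(1.5f, 0.0f, 0.0f);
    glutSolidSphere(1.0, slices, slices);
    glPopMatrix();

    glutSwapBuffers();
    frames++;

    /* report the frame rate roughly once a second */
    now = glutGet(GLUT_ELAPSED_TIME);
    if (now - lastTime >= 1000) {
        printf("%d slices: %.1f fps\n", slices, frames * 1000.0 / (now - lastTime));
        frames = 0;
        lastTime = now;
    }
}

static void idle(void)
{
    angle += 0.5f;
    if (angle >= 360.0f) angle -= 360.0f;
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    if (argc > 1) slices = atoi(argv[1]);    /* e.g. ./ogltest 140 */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("sphere orbit fps test");
    glEnable(GL_DEPTH_TEST);
    glutReshapeFunc(reshape);
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}

The idea is just to watch where the fps curve flattens out as you push the slice count up; the flat part is where the card choice stops mattering much.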

This may have changed with the latest generation of ATi and nVidia cards (possibly the threshold has moved). It would be interesting to do some tests in Layout, but I'd suspect, as they're still game cards, that they would continue to exhibit the same kind of behaviour - being honed towards rapid, low-polygon screen refresh for games.

Compounding that issue, there seem to have been greater constraints on the throughput of data from the video card to the display than on equivalent PCs - mlinde has suggested this may well be a firmware issue with all recent cards - which inhibited a fast flow of data from the card.

Would love to see results for different CPUs/cards to see if this behaviour is consistent and whether the G5, with its huge bus/CPU advantage, will fix this.

The test scene is here, if anyone's interested:

http://www.exch.demon.co.uk/ogltests.sit

It's 1.2MB. Viewport set to Textured Shaded Solid. None of the display options (e.g. faster OGL highlights etc.) seem to have much effect on the frame rates. Layout resolution is set to the minimum it can go to. My screen resolution was 1600x1200 on the GF3 and 1280x1024 on the ATi. Neither Layout nor screen resolution seems to make much difference, though.

As for Modeler, well, that's a complete mystery. With both my cards, the threshold rule seems to apply: anything over 15,000 polys starts to slow things down to the point where it's hard to distinguish between the two cards. Zooming in on a 100,000-poly object takes the same time (about 5 seconds) to refresh all viewports with both cards.

Of course, all these tests are done within the limitations of my machine setup and specifications. It may be very different on other CPU/card combos.

Julian

Beamtracer
09-08-2003, 02:58 AM
This is very interesting, Julian. It explains why people have reported not noticing any difference when they upgrade their graphics cards.

Ge4-ce
09-08-2003, 01:11 PM
So, in other words..

the 167 MHz motherboard speed on previous machines was the bottleneck?

Then we should see some awesome speed improvements with our 1 GHz motherboards now!

Triple G
09-09-2003, 05:23 PM
Originally posted by Ge4-ce
So, in other words..

the 167 MHz motherboard speed on previous machines was the bottleneck?

Then we should see some awesome speed improvements with our 1 GHz motherboards now!


Well, that's good news if you've got the money for a new G5. If not, it seems like the only way to get better OGL performance out of a G4 is to upgrade the processor...which kinda stinks. :(