
View Full Version : Kite tries G5



kite
07-01-2003, 11:32 AM
I had a test run with the new dual 2 GHz Apple G5 today at the inLife store in Stockholm.
The machine gave me the impression of being well designed (as always with Apple) and had a VERY low noise level.
I think I will have to take some cash from my kitesurf-equipment account and save up for one of these when they arrive in the shops in August :D
It ran OS 10.2.7 and had both Lightwave 7.5c and Maya installed.
I had a go at the bench files in LW and tried the tracer radiosity scene, and it was done in 550 sec. That was with what I presume are the default settings, but it could not find the .avi exporter so I set it to save an RGB instead. I did not check how many threads it used.
At the end of the rendering (pass 5 of 5) some people could not keep their hands away (can't blame them :)) and someone started the CPU monitor and ran the cursor over the Dock a few times and such.
I don't think that affected the render time much, but the CPU monitor made it look like the machine used only one CPU at a time with LW? I have never had a dual system so I don't know how to set that up (yet :) )
Guess it's not much of a benchmark, as I assume LW is not optimised for 64-bit systems yet.

One thing is for sure, it's a hell of a lot faster than my Pismo :D

Ade
07-01-2003, 12:01 PM
This test compared to a dual G4 and a PC, please... let's get an idea..

tallscot
07-01-2003, 12:55 PM
550 seconds? That would make it slower than the dual 1.42 GHz G4.

Here's hoping your post is bogus! :)

mlinde
07-01-2003, 01:10 PM
When I ran Tracer-Radiosity on my dual 800 MHz G4 it took 1066 seconds.

mlinde
07-01-2003, 01:10 PM
Originally posted by tallscot
550 seconds? That would make it slower than the dual 1.42 GHz G4.

Here's hoping your post is bogus! :)

Where's your reference? Did you run it on a 1.42? I'm just curious...

tallscot
07-01-2003, 01:25 PM
Chris' Lightwave Benchmarks

www.blanos.com

Even with your score, the G5 is very slow at 550 seconds, considering a dual 2 GHz G5 has 2.5 times the MHz of your dual 800, and the G5 is supposed to be much faster than the G4 at the same MHz.

panini
07-01-2003, 02:41 PM
So, that would make it slightly more than twice as slow as a dual Xeon PC (270 seconds).

2 minutes slower than the top-of-the-line single Pentium 4 CPU, and almost twice as slow as a dual Athlon setup.

Scary to think how much slower it would be than a dual Opteron setup.

kite
07-01-2003, 03:30 PM
It did show rendering in progress during the benchmark, but that might be normal?
I think it's quite fast, if you consider it's not a 64-bit app and it most likely used just one CPU.
And as I don't know how many threads were used, that might have had some effect on the render time as well.
At Chris' site they all use more than one thread on the duals, right?

tallscot
07-01-2003, 03:34 PM
So assuming it only used a single processor, that means it's still slower than the dual 3.2 GHz Xeon, according to the numbers on Chris' Lightwave Benchmarks.

mbaldwin
07-01-2003, 03:38 PM
Multithreading definitely needs to be enabled (in the Render Options panel) to take advantage of the two CPUs.

curious too.

Johnny
07-01-2003, 03:39 PM
I'm really hoping that there are a handful of really good reasons for what appears to be a weak performance on that test.

but if this is accurate, I imagine the folks at Intel and AMD laughing until they pee.

sheesh!

J

munky
07-01-2003, 03:47 PM
hey kite, I'm gonna get a g5 but I still bought a sweet little RRD humansize to go with my X2's

There's rendertime and then there's HANG TIME

regards


paul

kite
07-01-2003, 04:16 PM
That's the way to do it, munky!
I was saving for an 18m G-arc to go with my 16m F-arc. Note: was :D

toby
07-01-2003, 06:27 PM
I see two of you assume this benchmark is accurate -

"show render in progress" puts a big hit on render times, and that test uses both processors evenly when more than 1 thread is chosen.

Your bias is showin' fellas

DaveW
07-01-2003, 07:09 PM
Show render in progress does not put a big hit on render times. A few seconds maybe, but when you're talking 550 seconds that's not huge. Turning it off and enabling multiple threads should put it closer in speed to the Xeons. The fastest Xeon time for Tracer Radiosity was 201 seconds. That's still quite a bit faster than the G5, even if you account for multiple threads and show render in progress, and that was with a dual 2.8 GHz Xeon, although I find that score a little suspect compared to the other 2.8 GHz Xeon scores. If you throw that score out, the next fastest is a dual 3.06 GHz Xeon at 214 seconds.

Kite, is it possible you can get back in there and run the benchmark again with 8 threads and show rendering progress turned off? I'd really like to get a more accurate render time.

tumblemonster
07-01-2003, 09:09 PM
Man, I've got the same dilemma! New Kite, or New machine? I just got home from getting skunked at the lake today. The wind report was awesome, I was gonna go big! And then *poof!*, got there, sat there, nothing. The wind never topped 6 knots.

Maybe tomorrow!

-tm

kite
07-02-2003, 03:21 AM
Good to see there's a bit of kiting going on around here too! :)


There is only one G5 machine in Sweden as far as I know, and I don't think it will be around Stockholm in the near future.
http://www.inlife.se
It's there right now, but I have wind to take care of today ;)
It goes down to Malmö (the southern edge of Sweden) in a few days. Someone go there and try again :D

My calc goes like this:
As the app is not 64-bit, and a 64-bit system is supposed to chew two 32-bit ops at a time(?), might we cut the time down by 50% to 275 sec?

And if the machine used just one CPU, might we assume it's correct to cut another 50% to 137 sec?

And with an optimised OS for this too, and a decent amount of RAM (this machine only had 1.5 GB; don't forget the max is 8 GB, and that is only the maximum as long as there are no bigger modules, we're talking A LOT more RAM once module sizes go up, virtually no limit in the hardware), we might cut off some more, right?
Are we talking a minute plus/minus a fart???
:)
Edit: spelling, and the calc was a bit too optimistic.
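Kite's back-of-the-envelope calc above can be written out as plain arithmetic. This is a sketch only: both halving steps are optimistic assumptions, as later replies in this thread point out.

```python
# Sketch of the estimate above. Both halvings are assumptions, not
# measurements: 64-bit mode does not double 32-bit throughput, and a
# second CPU rarely gives a clean 2x.
measured = 550.0               # seconds, Tracer-Radiosity on the demo G5

after_64bit = measured / 2     # hypothetical "two 32-bit ops at once"
after_dual = after_64bit / 2   # hypothetical perfect dual-CPU scaling

print(after_64bit, after_dual)  # 275.0 137.5
```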

Karl Hansson
07-02-2003, 05:31 AM
This is interesting, though it would be more interesting if someone could do a more in-depth test.
It would be nice if someone at this show, or visiting it, could do another test on these G5s. I assume these machines are being previewed in other countries as well, so I hope someone takes the initiative and does a thorough test with LW. Oh, and it is probably a good idea to bring the LW CD and your dongle in case they don't have LW installed.

DaveW
07-02-2003, 11:41 AM
Originally posted by kite

There is only one G5 machine in Sweden as far as I know, and I don't think it will be around Stockholm in the near future.
http://www.inlife.se
It's there right now, but I have wind to take care of today ;)
It goes down to Malmö (the southern edge of Sweden) in a few days. Someone go there and try again :D


That's too bad. We're getting some at work so I guess I'll have to wait until then. I hate waiting :)




My calc goes like this:
As the app is not 64-bit, and a 64-bit system is supposed to chew two 32-bit ops at a time(?), might we cut the time down by 50% to 275 sec?

Doesn't work like that at all. In fact, Arnie (a former LW programmer) seemed to think LW wouldn't get much benefit from being 64-bit. If you just converted it all to 64-bit code it would run about as fast as the 32-bit code but require a lot more RAM.



And if the machine used just one CPU, might we assume it's correct to cut another 50% to 137 sec?


That really depends on the scene. Sometimes you'll see that big of a difference but most of the time you're not going to get exactly double the performance. Some things just can't be multithreaded, some things can be but aren't yet.



And with an optimised OS for this too, and a decent amount of RAM (this machine only had 1.5 GB; don't forget the max is 8 GB, and that is only the maximum as long as there are no bigger modules, we're talking A LOT more RAM once module sizes go up, virtually no limit in the hardware), we might cut off some more, right?
Are we talking a minute plus/minus a fart???


An optimized OS will help to some degree, but extra RAM depends on the scene. A lot of the stuff I render doesn't use more than 500-800 MB, so having 8 GB wouldn't make a bit of difference for a lot of my work. Obviously some people can really make use of all 8 GB, but I doubt most users will come close.

Altivec could help speed things up. If I remember correctly, the reason LW's renderer doesn't make use of Altivec is that Altivec isn't capable of double-precision floating-point calcs. SSE2 is, and that's why it runs so well on P4/Xeon systems. Although I heard somewhere that the new version of Altivec in the G5s can do the floating-point stuff necessary for 3D rendering; if that's true, then maybe the new G5s will take the speed crown from the Xeons.

Karl Hansson
07-02-2003, 12:12 PM
I did the same test on my PMG4 450 (single)... eh... well, the score is:
3088 seconds!!! My god, my computer is slow. ;)

Lynx3d
07-02-2003, 02:19 PM
I thought LW does use Altivec; if my memory serves me correctly there were some benchmarks on the NewTek site to demonstrate the performance improvements on both P4 and G4 platforms with 7.0b... whatever.

About the 64-bit = twice-the-speed idea: like DaveW said, you cannot expect a huge improvement from 64-bit. 64-bit floating-point numbers have been calculated at the same speed as 32-bit floats for ages (on both x86 and PowerPC CPUs), and 64-bit integers and memory addresses basically just need more bandwidth if you use them extensively. And the vector units (Altivec) don't have new features either.

But there may be quite a bit of potential in new G5-optimized compilers (no 2x speedup there either...)

And a dual-CPU speedup of at most 1.8 to 1.85 should be realistic, which would bring those 550 s down to ~300 s.

But let's wait for some more benches before our speculation gets too wild :D
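The 1.8-1.85x figure above is what Amdahl's law predicts when most, but not all, of a render parallelizes. A minimal sketch; the parallel fraction p here is an assumed number chosen to match that range, not a measurement:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs on n CPUs."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.92                        # assumed parallelizable fraction of the render
speedup = amdahl_speedup(p, 2)  # ~1.85x on two CPUs
estimate = 550 / speedup        # projected dual-thread time for the 550 s run

print(round(speedup, 2), round(estimate))  # 1.85 297
```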

Karl Hansson
07-02-2003, 02:53 PM
What about those two double-precision floating-point units that the G5 has? They must add something to the G5. Can an application like LW be optimized to use them, or are they already used?

js33
07-02-2003, 05:30 PM
Originally posted by Karl Hansson
I did the same test on my PMG4 450 (single)... eh... well, the score is:
3088 seconds!!! My god, my computer is slow. ;)

I ran the test for grins. :D
527 sec on a P4 2.53 GHz with show rendering in progress on, and I was browsing this forum at the same time.

Cheers,
JS

toby
07-02-2003, 05:51 PM
Don't leave other programs running during the test; iTunes and Photoshop were each taking between 10 and 20% of one processor, at a dead idle.

DaveW
07-02-2003, 06:13 PM
Originally posted by Lynx3d
I thought LW does use Altivec; if my memory serves me correctly there were some benchmarks on the NewTek site to demonstrate the performance improvements on both P4 and G4 platforms with 7.0b... whatever.


Maybe they did add some Altivec stuff, but I thought the speed improvements were only under OS X, and Altivec shouldn't care whether you're using OS 9 or OS X. Either way, they can't use Altivec for the floating-point stuff unless the G5 has new Altivec instructions.

Johnny
07-02-2003, 08:43 PM
I wonder about a couple of things in relation to each other...

Ed Catmull is saying that, based on running tests at Pixar, he feels that the G5 is the fastest personal/desktop computer in the world.

Is his statement in relation to the hardware Pixar uses, or to what's found on the shelves of Best Buy?

I can't imagine someone like him even wasting time with crummy consumer gear.

If he's comparing the G5 to the stuff Pixar uses to make movies, then the G5 is snugly in that league, which bodes well for the actual performance on our own desktops.

dunno...just thankin'...

J

toby
07-02-2003, 08:49 PM
but we can't forget who his boss is... I heard from a former Pixar employee that you don't want to get in the elevator with Jobs -

Of course you wouldn't think that Catmull has to worry about that, just something to keep in mind... I hope he's right tho

Lightwolf
07-03-2003, 02:52 AM
Originally posted by Karl Hansson
What about those two double-precision floating-point units that the G5 has? They must add something to the G5. Can an application like LW be optimized to use them, or are they already used?
They would already be used, since the CPU schedules floating-point operations to be handled by both pipelines.
An optimizing compiler can re-order instructions to optimize that process, but the CPU still does the scheduling.
I don't know much about the 64-bit extensions of the G5, whether they just extend the addressing or add new instructions/features that could speed up applications after a recompile.
On the Opteron, for example, more registers are available in 64-bit mode, which speeds up 64-bit code. That is not a 64-bit feature per se, though, but an extra feature available within that mode.
Cheers,
Mike

lasco
07-04-2003, 12:52 AM
Originally posted by tallscot
and the G5 is supposed to be much faster than the G4 at the same MHz.

mmmm…
just as the G4 was supposed to be faster than the G3 at the same MHz…
but that turned out to be wrong; between those two processors, only the change in MHz actually changed the speed…

toby
07-04-2003, 01:00 AM
well that's just dead wrong!

If software is not written to use Altivec then maybe, but when I switched from a G3 to a G4 I did some tests in LW and got a definite increase.

From a G3 300 to a G4 450 DP, render speed tripled. And no way will you get a 1:1 increase in speed per MHz, especially when the MHz increase is split between two processors.

lasco
07-04-2003, 02:17 AM
hahaaaa… yes, Altivec of course!

The optimization of some apps for it is a real thing, Toby, but usually
don't expect more than 15 or 20% extra speed just from it.

I'd rather say that APART from very few apps like LW, maybe only MHz counts.
I actually never tested LW on a G3, but I did run tests when I switched
from a 233 MHz G3 with 128 MB of RAM to a 450 MHz G4 with almost 1 GB of RAM,
tests with Photoshop 5.5 (at the time) and especially After Effects 4.1.

At the time I got nothing other than exactly twice the speed with
the G4, which was the exact difference in MHz between the two computers:
even the RAM seemed not to influence the speed by even a very little bit…!

I think all the CPU specs (Altivec, 64 bits, bus etc.) have their importance,
but not to that degree: 80% of the speed is due to MHz alone :)

(Another example: when I switched again from my 450 G4 to a dual 867, I noticed
the gain was again only in the ratio of the MHz difference, and it was even lower for Photoshop, which is known to be THE Altivec-optimized app, even though I had heard that
the new RAM was supposed to be really incredibly faster!! Well, I must say it was
only incredibly more expensive…)

toby
07-04-2003, 03:36 AM
Actually, with your G3 233 to G4 450 comparison you didn't take into account the faster bus speed, faster RAM, and probably a few other things. A G3 450 put in place of your 233 would definitely be slower than the G4.

Altivec is the reason Apple has been able to post faster benchmarks in Photoshop than a PC with higher MHz.

So how much faster was the dual 867 than the single 450?

lasco
07-04-2003, 04:07 AM
Well, I don't know how I could take account of what you say: faster bus, RAM etc.

I just saw that with a processor twice as fast (233 x 2 = 466, almost 450),
the second computer (the G4 450) ran no more than twice as fast as the
first one.
So how should I account for the rest, except to say that it doesn't change anything? Now, if you think a G3 450 wouldn't go twice the speed of a 233 one,
then we have trouble with CPUs…

However, I noticed the same thing when switching
from the G4 450 to the dual 867.

Have a look at the little benchmarks, 450 vs dual 867, that I made at the time (last September):

http://www.chantiergraphique.com/bench.html

Only LW really takes full advantage of this dual (fortunately, since I bought it for that purpose). A good point for AFX 4 too, as it seems to use both processors even though
it's under OS 9 :)

claw
07-04-2003, 04:31 AM
kite,

I took a look at the G5 in Stockholm too. I didn't find LW on it; where exactly did you find it??

Lynx3d
07-04-2003, 04:35 AM
Photoshop's multithreading is a bad joke IMHO...
I have a dual Athlon and it never uses more than one CPU when working with layers (activating/deactivating, changing opacity, zooming etc.); only the filters are multithreaded, and how often do I need a filter that takes longer than two seconds??
I want to edit files with 5000x3000 pixels and 15 layers without the parking brake on, too...

I think I'm off-topic now...

toby
07-04-2003, 04:46 AM
This is more complicated than you think: just because your render times were twice as fast, and your processor has twice the MHz, doesn't mean that MHz is the only factor. Processors have a lot of "downtime", waiting for instructions and data. A 600 MHz CPU will not be twice as fast as a 300 MHz CPU if it has the same downtime. This "downtime" is what Altivec is trying to take advantage of. PCs have a version of the same thing now.

Some of your speed increase is due to the MHz, some is due to Altivec, some is due to the memory and bus speed increases. Without increasing things like memory and bus speed, you get "bottlenecks": if your processor blazes through some data and then has to sit and wait for more data, the result is slower times.

Any software can be written to take advantage of dual processors. Photoshop had it before OS X even came out.
But I don't think there's any such thing as multithreaded display properties (activating/deactivating layers, changing opacity, zooming etc.); those are mostly handled by the graphics card. The multithreading comes in handy for filters like radial and Gaussian blur; they used to take forever on older machines like 450s.
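The "downtime" argument above can be sketched with a toy model: split a job into CPU cycles (which scale with the clock) and memory stalls (which do not). The 300-megacycle / 0.2 s split below is illustrative only, not a measurement of any real machine:

```python
def run_time(mhz, cpu_megacycles=300.0, stall_seconds=0.2):
    """Seconds to finish a job: compute time scales with clock, stalls don't."""
    return cpu_megacycles / mhz + stall_seconds

t_slow = run_time(300)  # 1.0 s compute + 0.2 s stalls = 1.2 s
t_fast = run_time(600)  # 0.5 s compute + 0.2 s stalls = 0.7 s

print(round(t_slow / t_fast, 2))  # 1.71: a 2x clock gives only ~1.7x speed
```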

kite
07-04-2003, 04:47 AM
Originally posted by claw
kite,

I took a look at the G5 in Stockholm too. I didn't find LW on it; where exactly did you find it??

In the Dock, right next to Maya :)
It had a LOT of apps installed.
But no Xbench or similar benchmark app :(
I only had a closer look at FCP and LW.
It had LW 7.5c running in demo mode (or whatever; no USB dongle).


Fick du pilla mycket på den? Testade du nått annat som kan jämföras? [Did you get to fiddle with it much? Did you test anything else that can be compared?]

claw
07-04-2003, 05:17 AM
sorry guys, a bit swedish now:)

Jag var där i tisdags, precis när dom började visa den. Det lät ungefär på dom som att kör inte för mycke tester på den för de flesta program är ju inte 64 bitars anpassade än, dom var lite rädda tror jag:) [I was there on Tuesday, right when they started showing it. They more or less said don't run too many tests on it, since most programs aren't adapted for 64-bit yet; they seemed a bit nervous, I think :)]

Körde mest tester i FCP och photoshop, och jag måste säga att jag är imponerad! [I mostly ran tests in FCP and Photoshop, and I must say I'm impressed!]

Men jag missade LW, tusan.. skulle viljat se hur LW presterar på nya G5:an, je je. en annan gång! [But I missed LW, darn.. I'd have liked to see how LW performs on the new G5. Oh well, another time!]

toby
07-04-2003, 07:49 PM
uh oh, what are they saying about us

lasco
07-05-2003, 01:47 AM
mmm…
think we can trust them, Toby?

I just didn't find any way to translate this strange language via Google.

toby
07-05-2003, 01:53 AM
they must be terrorists... it's time to invade Sweden... :rolleyes:

tallscot
07-05-2003, 01:05 PM
Is Sweden training them and funding them?

The G3 and the G4 are the same processor, except the G4 has AltiVec.

The G5 is a completely different processor, and it is much faster at the same MHz. Look at the SPEC benchmarks. There's no question.

claw
07-05-2003, 03:57 PM
"The G3 and the G4 are the same processor, except the G4 has AltiVec."

No, that's not true. Sure, both are PPC processors, but the G4 has a different architecture.

tallscot
07-05-2003, 04:33 PM
From Motorola:

"The design philosophy on MPC7400 is to change from the MPC750 base only where required to gain compelling multimedia and multiprocessor performance. MPC7400's core is essentially the same as the MPC750's, except that whereas the MPC750 has a 6-entry completion queue and has slower performance on some floating-point double-precision operations, MPC7400 has an 8-entry completion queue and a full double-precision FPU. MPC7400 also adds the AltiVec instruction set, has a new memory subsystem (MSS), and can interface to an improved bus, the MPX bus. The following sections discuss the major changes in more detail."

When the G4 first came out, this was highly publicized. Benchmarks showed that you wouldn't get any significant speed increase at the same MHz with a G4 unless you were using AltiVec-aware software.

The IBM 970, on the other hand, is a completely different animal.

lasco
07-06-2003, 03:05 AM
Benchmarks showed that you wouldn't get any significant speed increase at the same MHz with a G4, unless you were using AltiVec-aware software.


one point for tallscot ;)

Karl Hansson
07-06-2003, 03:28 PM
<<<they must be terrorists... it's time to invade Sweden... >>>

I needed the G5 as the guidance computer for my very own home-built Tomahawk missile I have sitting in my garage. So I swapped the G5 in Stockholm for a G4 in the exact same case. Smart or what? But hey, don't tell anyone. :)

Beamtracer
07-06-2003, 04:37 PM
First-generation G4s were definitely just G3s + Altivec.

Some of the later G4s had other features and optimizations that may be worthy of the "new generation" title.

kite
07-31-2003, 05:37 AM
Are there any more demo machines out there?
It seems there has not been much Lightwave testing on these G5s yet.
I really hope they offer better performance than what I saw.

Has Apple not shown these in the US...?

claw
07-31-2003, 05:44 AM
I don't think you will see any kick-ass benchmarks yet. First Lightwave must be highly optimized for the G5.

Jimzip
07-31-2003, 06:14 AM
Yeah. My computer works fine.. Cripes I hate reading these long threads.


Karl Hansson - Mad scientist,
after that last post, I think you live up to your name!! ;)

Jimzip :D

Ed M.
08-20-2003, 07:27 AM
Looks like Chris Cox was right... NewTek's code is hosed and they did nothing to improve it. See my other posts where this was mentioned. :rolleyes:

--
Ed