ATI/AMD Firestream



creativecontrol
02-10-2008, 10:18 PM
Wouldn't it be nice if we could render with this...or something like it.

http://ati.amd.com/products/streamprocessor/specs.html

AbnRanger
02-10-2008, 10:45 PM
I'm sure it's aimed at markets like this...but how would you implement it? The software would have to recognize it as a CPU or be written to accommodate it. The latter is probably the case, so which program would be ready for it?
I'm still confused as to whether or not SLI and Crossfire setups...with one card designated for physics...would benefit physics in 3D applications...or just games alone.
http://ati.amd.com/technology/crossfire/physics/index.html

AbnRanger
02-10-2008, 11:00 PM
Did a little reading of the FAQs and it seems that programs like LW would have to be recompiled to take advantage of stream processing. Not likely to happen right away, but the promising thing is that Nvidia is supposedly trying to push the same technology. With the advent of multi-core CPUs, GPU makers see their future in serious jeopardy if they don't try to get ahead of the power curve.

Able to do tens of thousands of instructions per clock cycle compared to a handful on CPUs... looks like the future to me.
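The stream-processing model being described can be sketched roughly in Python (a toy analogy only, not actual FireStream/CTM code — the kernel and data here are made up for illustration): the same small kernel runs independently on every element of a data stream, which is what lets a GPU throw thousands of processors at it at once while a CPU chews through the loop a few elements at a time.

```python
# Toy analogy for GPU stream processing: one small "kernel"
# applied independently to every element of an input stream.
# On a GPU each element could run on its own stream processor;
# here Python's map just makes the data-parallel structure explicit.

def kernel(x):
    # Trivial per-element computation (illustrative only)
    return x * 2.0 + 1.0

stream_in = list(range(8))                 # the input data stream
stream_out = list(map(kernel, stream_in))  # every element is independent

print(stream_out)
```

Because no element depends on any other, the work parallelizes perfectly — that independence is what a program like LW would have to be restructured (and recompiled) to expose.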

RedBull
02-10-2008, 11:18 PM
One good thing about ATI is that their video cards in the HD38xx range already contain the same double-precision floating point that the FireStream has (they are built on the same core). Nvidia does not provide this level of pro feature on any of its video cards... proof that if ATI goes downhill, it's the consumers who will lose out to Nvidia.

But meanwhile DPFP is already available on the Radeons.

AbnRanger
02-11-2008, 12:11 AM
Yeah, just read the review on Tom's Hardware Guide yesterday regarding the new 3870X2...even if it's short-lived, ATI took back the high-end performance crown...with 2 GPUs on one board for just $450 (Nvidia's 8800 Ultra has only 1 and costs $600+)!
Looks like they are back in the race after all.

creativecontrol
02-11-2008, 07:39 AM
The double precision thing is what caught my attention. I think that was the major drawback to Nvidia/CUDA solutions.

Obviously, LW would have to be significantly modified to use it. Perhaps something that functions like Fprime would be the easiest method? A separate rendering engine compiled to use something like the Firestream.
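The double-precision point is easy to demonstrate: iterative work like a renderer accumulating sample contributions drifts much faster in 32-bit floats than in 64-bit. A small Python sketch — using a `struct` round-trip to emulate single precision, since Python's own floats are doubles; the step value and loop count are made up for illustration:

```python
import struct

def as_float32(x):
    # Round-trip through a 32-bit IEEE-754 float to emulate
    # single-precision storage (Python floats are natively double).
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate a small step many times, as a renderer might when
# summing contributions; the exact answer is 0.01 * 100000 = 1000.
step = 0.01
double_sum = 0.0
single_sum = 0.0
for _ in range(100000):
    double_sum += step
    single_sum = as_float32(single_sum + as_float32(step))

# double_sum stays very close to 1000.0;
# single_sum drifts measurably further away.
print(double_sum, single_sum)
```

That accumulated drift is why hardware double precision mattered for GPGPU rendering, and why the Radeons shipping it was notable at the time.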