Core i7 X58 SuperComputer



footfur
10-08-2010, 10:23 AM
Has anyone tried this with LightWave? Check out the vid on YouTube.

oliversimonnet
10-08-2010, 10:30 AM
What video are we looking at? :)

footfur
10-08-2010, 10:37 AM
Search for "core i7 x58 supercomputer demo"

biliousfrog
10-08-2010, 10:51 AM
It probably took longer to type that than to insert a link.

speismonqui
10-08-2010, 11:51 AM
Maybe a "let me YouTube that for you" would be nice :)

XswampyX
10-08-2010, 11:54 AM
http://www.youtube.com/watch?v=_87a6P0-Xjw

Hieron
10-08-2010, 12:32 PM
omg!

?

Titus
10-08-2010, 12:59 PM
Has anyone tried this with LightWave?

For doing what? In what way does LW need a Tesla or CUDA?

cresshead
10-08-2010, 01:23 PM
...maybe better to show 3ds Max iray with 4x Tesla

vs.

LightWave 10 VPR on an EEE netbook


LightWave would be faster :D


fast code vs. lazy code and brute force / spend, spend, spend

lardbros
10-08-2010, 02:26 PM
...maybe better to show 3ds Max iray with 4x Tesla

vs.

LightWave 10 VPR on an EEE netbook


LightWave would be faster :D


fast code vs. lazy code and brute force / spend, spend, spend

Hahaha, that's so true!! :)

rsfd
10-10-2010, 03:50 PM
Another Nvidia viral-marketing effort, or what?
This may be the ultimate gaming computer or the ultimate Nvidia demo machine, but GPUs can't do high-poly scenes with physically correct shading atm.

Captain Obvious
10-10-2010, 04:37 PM
Another Nvidia viral-marketing effort, or what?
This may be the ultimate gaming computer or the ultimate Nvidia demo machine, but GPUs can't do high-poly scenes with physically correct shading atm.
What you said is actually the opposite of true. :) Octane, for example, doesn't use a lot of memory for polygons. With Octane running on a Tesla, you could likely render scenes with higher polygon counts than you could in LightWave on a regular workstation, unless you have an absurd amount of memory. And Octane's shading is pretty physically correct.

The real problem is image memory, not geometry memory.
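
(A back-of-the-envelope sketch of that point, using assumed per-triangle and texture sizes rather than Octane's actual memory layout:)

```python
# VRAM estimate: geometry vs. image memory.
# All figures are illustrative assumptions, not Octane's real layout.

BYTES_PER_TRIANGLE = 3 * 3 * 4  # 3 vertices x 3 floats x 4 bytes, no vertex sharing

def geometry_mb(triangles):
    return triangles * BYTES_PER_TRIANGLE / 2**20

def texture_mb(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel / 2**20

print(f"10M triangles:    {geometry_mb(10_000_000):.0f} MB")       # ~343 MB
print(f"20 x 4K textures: {20 * texture_mb(4096, 4096):.0f} MB")   # ~1280 MB
# Ten million raw triangles fit easily in a 3 GB Tesla;
# a modest set of 4K textures costs far more.
```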

rsfd
10-11-2010, 03:58 AM
^ Seems I was too tired ;)
What I meant was that the more complex a scene gets (more geometry, more lights, very advanced shading, critical GI, …), the more limitations the GPU seems to show so far: afaik, no displacement, errors with Fresnel and attenuation, and it's doing path tracing exclusively. And I was told (don't know if it still applies) that GPU renderers are limited to the maximum resolution supported by the graphics card(s) used. The incredible speed difference (100x faster) doesn't seem to be there either. For me, it looks like a war between Intel and Nvidia. Don't know who will win.
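
(On the 100x claim, a rough sanity check using approximate 2010-era peak single-precision figures; the exact numbers are assumptions:)

```python
# Rough sanity check on "100x faster", using approximate 2010-era
# peak single-precision throughput (round, assumed figures).

tesla_c2050_gflops = 448 * 1.15 * 2   # CUDA cores x clock (GHz) x 2 FLOPs/cycle, ~1030
core_i7_980x_gflops = 6 * 3.33 * 8    # cores x clock (GHz) x 8 SP FLOPs/cycle, ~160

print(f"peak ratio: {tesla_c2050_gflops / core_i7_980x_gflops:.1f}x")  # ~6.4x
# Real renderers rarely hit peak on either side, so an order of
# magnitude is plausible; 100x usually isn't.
```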

Captain Obvious
10-11-2010, 08:01 AM
afaik, no displacement, errors with Fresnel and attenuation, and it's doing path tracing exclusively.
I can't think of any technical reason why this would be the case, except that displacement might generate too much geometry, of course.
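
(A quick illustration of how displacement inflates geometry, with assumed base counts and per-triangle sizes:)

```python
# Each subdivision level roughly quadruples the triangle count,
# so displacement can exhaust GPU memory fast (illustrative numbers).

base_triangles = 100_000
bytes_per_triangle = 36  # 3 vertices x 3 floats x 4 bytes, assumed

for level in range(6):
    tris = base_triangles * 4 ** level
    print(f"level {level}: {tris:>12,} tris, ~{tris * bytes_per_triangle / 2**30:.2f} GB")
# Level 5 is already ~102M triangles (~3.4 GB),
# beyond the 3 GB on a Tesla C2050.
```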

rsfd
10-20-2010, 02:44 PM
Found this (http://www.cgchannel.com/2010/08/gpu-vs-cpu-rendering-talk-by-luxology/) lately.
You may know it already; it's Luxology's Brad Peebler talking about the reasons why the upcoming modo 501 will not have any GPU rendering features. (Of course, it could be seen as Intel marketing.)