CORE CPU/GPU recognition



ben martin
02-05-2009, 06:40 AM
Does CORE CPU/GPU recognition mean that CORE is going to use GPUs to make final renders???? Oba! Oba! OBA! :foreheads

Mitja
02-05-2009, 08:42 AM
Some say yes, others say no... who knows?
We should wait for the next vids before oba oba oba. ;)

hrgiger
02-05-2009, 08:49 AM
Don't know, it could just mean that buying a very good video card means very good performance in Lightwave. That hasn't always been true in the past. Or maybe the GPU will be used for physics calculations or something.

AbnRanger
02-05-2009, 08:53 AM
Yeah... I was wondering something similar. He said in the presentation, "Multi-threaded and recognizes multiple processors and GPUs."

Multiple GPUs? Did I hear that right? Up until now, a second GPU has been worthless for LW. So SLI and CrossFire (multi-card) setups could use the extra GPUs for... what? Hardware dynamics (PhysX or Havok)?

It would have to use OpenCL (newly ratified and thus not yet widely used) to leverage general-purpose GPU processing across both platforms (ATI and NVIDIA), unless developers wrote for CUDA and/or ATI Stream.
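
Just to illustrate the kind of data-parallel work those extra GPUs could take on, here's a rough CUDA sketch of a dynamics step where every particle is independent. This is purely hypothetical: the Particle struct, the stepParticles kernel and all the numbers are made up for illustration and have nothing to do with CORE's actual code.

// Hypothetical CUDA sketch of a data-parallel dynamics step.
// Every particle is independent, so the GPU can update them all at once.
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

__global__ void stepParticles(Particle *p, int n, float dt, float3 gravity)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Simple explicit Euler integration per particle.
    p[i].vel.x += gravity.x * dt;
    p[i].vel.y += gravity.y * dt;
    p[i].vel.z += gravity.z * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

int main()
{
    const int N = 100000;                      // 100k particles
    Particle *d_p;
    cudaMalloc(&d_p, N * sizeof(Particle));
    cudaMemset(d_p, 0, N * sizeof(Particle));  // start everything at rest at the origin

    float3 g = make_float3(0.0f, -9.8f, 0.0f);
    stepParticles<<<(N + 255) / 256, 256>>>(d_p, N, 1.0f / 30.0f, g);
    cudaDeviceSynchronize();

    cudaFree(d_p);
    return 0;
}

An OpenCL or ATI Stream version would look much the same; the point is the per-element parallelism, not the vendor API.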

phillydee
02-05-2009, 09:25 AM
Don't know, it could just mean that buying a very good video card means very good performance in Lightwave. That hasn't always been true in the past. Or maybe the GPU will be used for physics calculations or something.

Hopefully they'll be able to use PhysX to drive calcs... it sure would speed things up. Knowing XSI has Ageia PhysX-based particles, I tried to find out whether XSI 7 (with ICE etc.) supports HW acceleration. The answer is NO. So it'd be really awesome if they had some sort of HW acceleration for physics, dynamics, etc.

*crossing fingers*

ben martin
02-05-2009, 09:32 AM
I can't see a better time to introduce such a feature in LW.
GPUs are really awesome math calculators, surpassing standard CPUs at this kind of parallel work, so why not take advantage of that?

Let's hope! Oba! (one more oba to heat things-up) :hey:

Tobian
02-05-2009, 09:41 AM
What's not clear is whether you can leverage the power of the GPU right out of the box.

What is clear is that it is both GPU-aware and now highly extensible with a completely open API. That means that even if it doesn't exploit the GPU now, it can later, even if the support comes from a third party further down the line.

I doubt even NewTek knows whether it can implement render improvements by leveraging extra horsepower from the GPU until OpenCL is a mature technology. It's not clear what it will be capable of, or whether it will be of any benefit over standard render technology. Graphics cards are still largely single/fixed-function devices and have a way to go before they become general programmable devices for rendering in anything other than OGL/DirectX. They do offer huge potential in areas such as physics and voxels... so I think we can expect big things!

LW_Will
02-05-2009, 06:38 PM
Tobian, I think the speed of CUDA, OpenCL and whatever ATI is calling theirs can already be seen in the 4x speed increases in the Adobe products that implement it.

I think it should come out with OpenCL capability. I think that, and the new release of Snow Leopard with a full 64-bit version of Cocoa, are but two of the things that NewTek is waiting for...

lwanmtr
02-05-2009, 06:55 PM
Yeah, I'll be interested in seeing what gpu support will do :)

jasonwestmas
02-05-2009, 06:59 PM
Does CORE CPU/GPU recognition mean that CORE is going to use GPUs to make final renders???? Oba! Oba! OBA! :foreheads

Superb OGL manipulation of components and deformation of geometry is my guess.

Earl
02-05-2009, 08:45 PM
The GPU question is something I've had on my mind too. I really hope they cover this topic soon in the next reveals.

LW3D
02-06-2009, 06:49 AM
I think I can help a little :)

When I was searching earlier for information on what LightWave CORE provides, I found some clues about GPU rendering with LightWave...

I think we will get CUDA 3D rendering with LightWave CORE... :thumbsup:

The following messages are from the NVIDIA CUDA development forum...



Sep 1 2007, 06:45 PM
Hi all. I am just starting CUDA programming and wrote a simple Mandelbrot program as part of my learning process. I am submitting the project files for anyone to play around with. I did a timing test and found that it runs 85.5 times faster on an 8800 GTX GPU than an AMD Opteron 2GHZ processor.
...
...
Mark Granger
New Tek
Source: http://forums.nvidia.com/index.php?showtopic=44847

And I found another message from Mark Granger...



Sorry to nit pick like this but I have been looking forward to having access to CUDA on my main work computer without having to dual boot to Windows XP (which is very tricky as it turns out).
Source: http://forums.nvidia.com/index.php?showtopic=60431

P.S. If you look at Mark Granger's profile on the NVIDIA CUDA development forum, you can also see that SPWorley from Worley Labs has viewed Mark Granger's profile... it is possible that Worley is also working on something related to CUDA...
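
For anyone curious, here is a minimal sketch of what a CUDA Mandelbrot test along those lines might look like. This is my own toy version, definitely not Mark Granger's actual code: each thread computes the escape count for one pixel, which is why an 8800 GTX can beat a single 2GHz Opteron core so badly...

// Minimal CUDA Mandelbrot sketch -- one thread per pixel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void mandelbrot(int *out, int width, int height, int maxIter)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Map the pixel to a point in the complex plane (roughly -2.5..1 x -1..1).
    float x0 = -2.5f + 3.5f * px / width;
    float y0 = -1.0f + 2.0f * py / height;

    float x = 0.0f, y = 0.0f;
    int iter = 0;
    while (x * x + y * y <= 4.0f && iter < maxIter) {
        float xt = x * x - y * y + x0;
        y = 2.0f * x * y + y0;
        x = xt;
        ++iter;
    }
    out[py * width + px] = iter;   // escape count = shade of the pixel
}

int main()
{
    const int W = 1024, H = 768, MAX_ITER = 256;
    int *d_out;
    cudaMalloc(&d_out, W * H * sizeof(int));

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    mandelbrot<<<grid, block>>>(d_out, W, H, MAX_ITER);
    cudaDeviceSynchronize();

    // Copy one value back just to show the kernel ran.
    int first;
    cudaMemcpy(&first, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("iterations at pixel (0,0): %d\n", first);

    cudaFree(d_out);
    return 0;
}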

Skonk
02-06-2009, 06:59 AM
Nice detective work :)

If you read further down you can even see that they have asked Mark if they could include his code in the next SDK release.

Elmar Moelzer
02-06-2009, 07:09 AM
Don't expect too much from GPU acceleration.
It can accelerate certain things better than others. You are still somewhat limited unless you spend a lot of money on GPUs, and even then...
I am looking forward to Larrabee, when/if that ever comes out. That is my best bet on when things will start to make sense... maybe...

LW3D
02-06-2009, 07:41 AM
@Elmar: What do you think about Tesla? It seems it really gives great render power to the desktop.

http://www.nvidia.com/object/tesla_computing_solutions.html


The world’s first teraflop many-core processor
NVIDIA® Tesla™ GPU computing solutions enable the necessary transition to energy efficient parallel computing power. With 240 cores per processor and based on the revolutionary NVIDIA® CUDA™ parallel computing architecture, Tesla scales to solve the world’s most important computing challenges—more quickly and accurately.

clagman
02-06-2009, 07:46 AM
What it can do very well is render shader information on GL objects. So in short it SHOULD be able to handle nodal/shader work and more highly detailed volumes in addition to the GUI.