
View Full Version : Want to render at mach speed?



CGI Addict
07-01-2009, 09:38 PM
Here is an interesting piece of software: MachStudio Pro, an incredibly fast (and expensive: $4,999 introductory*, Windows only) renderer/cinematic tool:

http://www.studiogpu.com/press/studiogpu_ships_machstudio_pro/

With MachStudio Pro, render times can be dramatically reduced from hours to minutes and minutes to seconds or sub-seconds. Comparable final scenes are consistently rendered with MachStudio Pro at rates of 500 to 900 times faster than traditional rendering packages. A complex 1.98 million polygon high-definition image, for example, renders in 14 seconds using MachStudio Pro while the same scene rendered with a traditional rendering package can take more than three hours to complete.

* A total out-of-the-box high-performance solution, MachStudio Pro ships with an AMD ATI FireGL™ V8650 3D workstation graphics accelerator card featuring 2 GB onboard graphics memory and a parallel processing Unified Shader architecture.

Soth
07-02-2009, 12:15 AM
They should try to recompile the renderer in CORE to support GPU cards (or computing cards based on GPUs; both ATI and NVidia sell them).

I hope Ray will change his mind about this...

DiedonD
07-02-2009, 12:20 AM
Shame there are no pics or videos like in JimmyRig's case!

They say it's a high-quality instant renderer, but the CEO of film company Twenty One says it's a great pre-viz addition to their pipeline!

This "instant pre-viz or final render" question is becoming grayer the more they get into it!

Mr Rid
07-02-2009, 01:43 AM
Shame there are no pics or videos like in JimmyRig's case!

http://www.studiogpu.com/

http://www.studiogpu.com/machstudio/tour

I read something earlier this year where somebody at a university somewhere configured I think it was 8 graphics cards in a way that did the work of 300 CPUs, or something like that. Maybe it was a thread here.

akademus
07-02-2009, 01:50 AM
Not impressed with the renders! No demo version, and it seems it's sold bundled with an ATI FireGL card, which I won't buy since I already have Quadros.

It makes a great pre-viz tool, but it's too expensive. However, I see GPU rendering becoming standard in years to come!

Mr Rid
07-08-2009, 11:39 PM
I read something earlier this year where somebody at a university somewhere configured I think it was 8 graphics cards in a way that did the work of 300 CPUs, or something like that. Maybe it was a thread here.

There it is-

http://www.dvhardware.net/article.php?sid=27538&mode=thread&order=0&thold=0

Soth
07-08-2009, 11:59 PM
NewTek please try to recompile your renderer to support OpenCL. That will give you guys such a huge advantage over the competition...

Adobe is working on it already:
http://blogs.adobe.com/jnack/2008/05/pixel_bender_no.html

Mr Rid
07-09-2009, 02:46 AM
Although it's interesting that V-Ray RT (realtime) opted to stick with CPUs because they are cheaper to accumulate and yield consistent results.

Matt
07-09-2009, 03:54 AM
I thought the same; the renders look more like output from a nice game engine than anything that will replace current renderers.

But nice to see people having a go at realtime, because eventually, that's where it's all going.

Soth
07-09-2009, 04:19 AM
I rather wonder whether it's possible to compile a renderer that takes advantage of GPU speed for shorter render times. Not realtime, just one computer with four graphics cards rendering at the speed of a small render farm.

Lightwolf
07-09-2009, 05:46 AM
NewTek please try to recompile your renderer to support OpenCL.
Erm, that's not how it works.
They would basically need to re-write the complete rendering engine from scratch. Not only that, but you also need to (for the most part) throw out existing concepts and re-design them from scratch.

Otherwise everybody would be doing it. But even converting small, seemingly easy to port code snippets can be a pain:
http://www.virtualdub.org/blog/pivot/entry.php?id=257#body
(and this is something that is dead simple compared to a production renderer).

Heck, it's hard enough to properly use multiple cores on current CPUs already (which are a lot more flexible).
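A toy illustration of the porting pain described above (not from this thread, just a hedged sketch): GPU shader models of that era had no general recursion, so even a textbook recursive bounce routine has to be unrolled into a fixed-depth loop with the reflection attenuation carried explicitly. In Python, purely for illustration:

```python
MAX_DEPTH = 4        # fixed bounce count, as a shader would require
REFLECTIVITY = 0.5   # toy per-bounce attenuation factor

def shade_recursive(surface_color, depth=0):
    """Textbook CPU style: each bounce recurses into the next."""
    if depth >= MAX_DEPTH:
        return 0.0
    # local contribution plus the attenuated reflection of the next bounce
    return surface_color + REFLECTIVITY * shade_recursive(surface_color, depth + 1)

def shade_iterative(surface_color):
    """GPU-shader style: recursion unrolled into a fixed-depth loop,
    with the accumulated attenuation carried as explicit state."""
    color, attenuation = 0.0, 1.0
    for _ in range(MAX_DEPTH):
        color += attenuation * surface_color
        attenuation *= REFLECTIVITY
    return color
```

Both return the same result (for surface_color 1.0, the four bounces sum to 1 + 0.5 + 0.25 + 0.125 = 1.875), but only the second shape maps onto the hardware, and that restructuring has to happen for every recursive or pointer-chasing part of a production renderer.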

This is quite interesting as well: http://techon.nikkeibp.co.jp/article/HONSHI/20090629/172373/ - it makes you wonder why they'd design a dedicated chip if GPUs are sooooo fast.

Cheers
Mike

Lightwolf
07-09-2009, 06:15 AM
While we're at it, Mark Granger (yes, our "Mr. Renderer"), on GPUs, CUDA and raytracing:
http://forums.nvidia.com/index.php?showtopic=72797

Actually, the whole topic is interesting.

Cheers,
Mike