PDA

View Full Version : Naiad (fluids) - interesting read



erikals
08-01-2010, 01:51 AM
http://www.fxguide.com/modules.php?name=News&file=print&sid=606

jay3d
08-01-2010, 02:39 AM
Cool!

The first time I heard about that solver was from Jonas Gustavsson of NewTek.
It looks promising.

erikals
08-01-2010, 03:18 AM
never quite understood, though, why no 3D app has included NVIDIA's realtime GPU fluids;
isn't it open source? http://www.youtube.com/watch?v=r17UOMZJbGs

but it's a good thing Bullet is working on implementing SPH, http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=18&t=4067
so maybe we will see realtime fluids like the NVIDIA demo above in the (near?) future.

prometheus
08-01-2010, 03:50 AM
never quite understood, though, why no 3D app has included NVIDIA's realtime GPU fluids;
isn't it open source? http://www.youtube.com/watch?v=r17UOMZJbGs

but it's a good thing Bullet is working on implementing SPH, http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=18&t=4067
so maybe we will see realtime fluids like the NVIDIA demo above in the (near?) future.

Agreed... it makes one wonder why.

this realtime volumetric particle simulation is cool too... from a CUDA demo
running on an 8800 GT card.
http://www.youtube.com/watch?v=xh2q_p6hQEo&feature=related

erikals
08-01-2010, 06:22 AM
i disagree, GPU-based fluids are far faster than CPU-based fluids.

well, NVIDIA fluids look basic, and that's what most people need.
RealFlow is cool and has a lot of options, but it's a turtle by comparison.

bring it on, give us GPU fluids, i want it yesterday!

erikals
08-01-2010, 07:22 AM
you're absolutely right though, all we can do is wait, and wait.... :]

hm, maybe a bribe could work... http://erikalstad.com/backup/anims.php_files/I_Love_NewTek.gif

Elmar Moelzer
08-01-2010, 07:56 AM
i disagree, GPU-based fluids are far faster than CPU-based fluids.
Yeah, and how do you combine them with your raytracer, unless you only solve the fluids on the GPU but render them on the CPU? Then there is the problem of memory on graphics cards. Anything volumetric takes up tons of memory. I know what I am talking about; we are running into these issues every day with VoluMedic, and believe me, we are looking at everything that would make our application better.
We already have hardware volume rendering in VoluMedic, but while faster, it does not look remotely usable for final-render quality. Lots of reasons for that...
Anyway, these tech demos are just that: tech demos. They do one thing really well, but once you throw something else at them, or change your demands somehow, they fall apart. It is a bit like interpolated radiosity vs. MC radiosity. Interpolated is much faster, but it has a lot of limitations (that you have to find workarounds for). The brute-force MC always works.
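to put rough numbers on the memory problem: a dense scalar volume grows cubically with resolution. a back-of-the-envelope sketch (assuming plain float32 voxels, no compression or sparse structures, which real renderers would of course use):

```python
def volume_mib(n, bytes_per_voxel=4):
    """MiB needed for a dense n x n x n scalar grid (float32 by default)."""
    return n ** 3 * bytes_per_voxel / 2 ** 20

# A 1K-cube volume already needs 4 GiB before you store gradients,
# colors, or any acceleration structure on top of it.
for n in (128, 256, 512, 1024):
    print(n, volume_mib(n), "MiB")
```

that is just one scalar channel; every extra attribute multiplies the footprint, which is why graphics-card memory bites so quickly.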

erikals
08-01-2010, 08:17 AM
not sure; i do know they used GPU fluids on Harry Potter though,
but maybe it wasn't raytraced... (the spinning fire scene)
http://www.youtube.com/watch?v=PlrphFBpuio

Titus
08-01-2010, 08:31 AM
not sure; i do know they used GPU fluids on Harry Potter though,
but maybe it wasn't raytraced... (the spinning fire scene)
http://www.youtube.com/watch?v=PlrphFBpuio

Memory isn't a real issue anymore these days. We have new algorithms, parametric methods (Scanline), and efficient caches like KD-trees, which the people at ILM used for the Harry Potter fire. This has been a hot topic at the last few SIGGRAPHs.

If you want GPU liquids, support D-storm: they programmed nami-nami fx with CUDA support years ago, even if it's very simple.

erikals
08-01-2010, 09:10 AM
interesting,

20,000 self-colliding particles in LW > 2 minutes of calculation per frame
http://www.newtek.com/forums/showpost.php?p=571834&postcount=60

128,000 self-colliding particles with NVIDIA > realtime
http://www.youtube.com/watch?v=r17UOMZJbGs
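part of that gap is algorithmic as much as hardware: naive self-collision is O(n²) pair tests, while GPU particle demos typically bin particles into a uniform spatial hash so each particle only checks nearby cells. a rough sketch of the idea (hypothetical toy code, not LW's or NVIDIA's actual implementation):

```python
from collections import defaultdict

def build_grid(positions, cell):
    """Hash each particle into a uniform grid cell keyed by integer coords."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

def neighbors(i, positions, grid, cell):
    """Indices of particles within `cell` distance of particle i,
    found by scanning only the 27 surrounding grid cells."""
    x, y, z = positions[i]
    cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    if j == i:
                        continue
                    d2 = (positions[j][0] - x) ** 2 + \
                         (positions[j][1] - y) ** 2 + \
                         (positions[j][2] - z) ** 2
                    if d2 <= cell * cell:
                        found.append(j)
    return found
```

with 27 small cells to scan per particle instead of n-1 pair tests, the per-frame work drops to roughly O(n) for evenly distributed particles, which is why the particle counts in those demos can be so much higher.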

erikals
08-01-2010, 09:11 AM
checking liquidpack
(i assume that nami-nami and liquidpack are the same?)

Elmar Moelzer
08-01-2010, 09:18 AM
memory isn't a real issue anymore these days.
lol!

People are comparing an in-house tool to commercial software. Again, very different things. Also, graphics memory is always an issue.
I don't even know where to begin...

Titus
08-01-2010, 09:32 AM
lol!

People are comparing an in-house tool to commercial software. Again, very different things. Also, graphics memory is always an issue.
I don't even know where to begin...

People were appalled by the numbers when the guys at ILM said the fire simulation in HP took something around 50 MB using new data structures. While it's a fact that apps will become more complex and require more resources, the best programmers get good optimizations.

Elmar Moelzer
08-01-2010, 09:48 AM
People were appalled by the numbers when the guys at ILM said the fire simulation in HP took something around 50 MB using new data structures. While it's a fact that apps will become more complex and require more resources, the best programmers get good optimizations.

Mhmm, yeah sure.
Whatever...
Look, you are talking to someone who would be happy if all graphics cards on the market, including ATI's, finally supported the OpenGL 2.0 standard fully and without issues. To this date that is not the case. I will start thinking about anything else once we get there.

erikals
08-01-2010, 11:08 AM
ok, only one guy can fix this...

http://dwellingintheword.files.wordpress.com/2009/12/32-heston-as-moses.jpg

Myagi
08-01-2010, 12:00 PM
ok, only one guy can fix this...

bah, that dude can't even do correct gravity calculations in his liquid simulation, totally unrealistic. He and that other guy, with way too high surface tension on his water sim, can pack it up. :D

stefanbanev
08-01-2010, 12:40 PM
Oliver> many people in the industry and especially research
Oliver> facilities see this 'GPU thing' as a nice temporary hype.

Couldn't agree more... unless the GPU becomes an efficient MIMD machine. The hype around the GPU is fueled by NVIDIA propaganda and overgrown gamers. There is no universal notion of "computing speed"; any speed number only measures how fast a specific algorithm runs. For example, the best CPU volumetric ray casting outperforms the best GPU version by a factor of 10 (at least) for data sets of a 1K cube / 1K projection plane / sampling rate 8+ (IC), at similar hardware cost and similar interactive quality. Once the data set size drops to 512^3 the advantage still remains around 4-6x, and only for sizes below 128^3 does the GPU yield similar performance. The irony is that the gap gets dramatically bigger once the comparison moves to high-end hardware. When the execution code path does not depend on the local data content around each ray, the GPU's SIMD architecture is blazing fast; indeed, texture mapping or volumetric ray casting with a regular/even distribution of samples along the ray can run much faster on the GPU than on the CPU. But such a VR algorithm has cubic time complexity, so it only does well up to some size threshold against the logarithmic time complexity of adaptive-sampling volumetric ray casting, which has been implemented effectively for multi-core CPUs.
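the regular-vs-adaptive trade-off can be sketched in one dimension (toy code under simplified assumptions; the `skip` table stands in for whatever empty-space-skipping structure a real renderer would precompute):

```python
def uniform_march(density):
    """Fixed-step march: visits every sample along the ray.
    Identical work per ray, so it vectorizes trivially on SIMD."""
    hits = 0
    for d in density:
        if d > 0:
            hits += 1
    return hits

def adaptive_march(density, skip):
    """Adaptive march: skip[i] is a guaranteed-empty run length at i.
    Far fewer samples touched on sparse data, but the iteration count
    now depends on the data, which SIMD lanes handle poorly."""
    hits, i = 0, 0
    while i < len(density):
        if density[i] > 0:
            hits += 1
            i += 1
        else:
            i += max(1, skip[i])  # leap over known-empty space
    return hits
```

both loops count the same occupied samples; the adaptive one simply gets there in data-dependent strides, which is exactly the kind of per-lane divergence the paragraph above describes.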

Stefan

Elmar Moelzer
08-01-2010, 02:07 PM
I agree with Stefan on most of what he said, though I would not put it quite that dramatically.
The GPU has its place and you can do really nice stuff with it, but don't think that a tech demo in a paper, or even an in-house tool, will give way to an equally impressive commercial production tool.
I remember all the Nvidia demos. How long did it take for reality (actual games) to really catch up with the demos?

Red_Oddity
08-01-2010, 03:15 PM
I agree with Stefan on most of what he said, though I would not put it quite that dramatically.
The GPU has its place and you can do really nice stuff with it, but don't think that a tech demo in a paper, or even an in-house tool, will give way to an equally impressive commercial production tool.
I remember all the Nvidia demos. How long did it take for reality (actual games) to really catch up with the demos?

Quite frankly? They never did. I'm still waiting for games to actually run the demo material from cards as far back as the GeForce 6 in an interactive gaming environment.

The only thing that makes games look better and run faster is that developers have learned the workarounds and optimizations of the architectures they've been working with for quite a while now.

Titus
08-01-2010, 06:34 PM
I agree with Stefan on most of what he said, though I would not put it quite that dramatically.
The GPU has its place and you can do really nice stuff with it, but don't think that a tech demo in a paper, or even an in-house tool, will give way to an equally impressive commercial production tool.
I remember all the Nvidia demos. How long did it take for reality (actual games) to really catch up with the demos?

Last year the owner of a local animation studio was running into problems rendering an animated feature film. They were fast approaching the deadline, and I suggested they try Mach Studio. They tested it and bought 4 licenses (bundled with ATI cards, though they used their old NVIDIA cards). Guess what? They finished the movie on time; the estimated improvement over their modest renderfarm was 10x-20x. And yes, MachStudio is a commercial app.

Cageman
08-01-2010, 07:00 PM
I hope that Naiad will give RealFlow a run for its money. For too long, NextLimit has been quite dominant in producing an off-the-shelf solution for water-related fluid sims.

:)

Elmar Moelzer
08-02-2010, 05:10 AM
And yes, MachStudio is a commercial app.

Now, the examples may be deceiving, but I don't see anything on that website that goes much beyond in-game-quality rendering.
No offence...
Oh, and I am sure that a lot of users would complain if LW only ran on a certain graphics card...

Titus
08-02-2010, 08:29 AM
Now, the examples may be deceiving, but I don't see anything on that website that goes much beyond in-game-quality rendering.
No offence...
Oh, and I am sure that a lot of users would complain if LW only ran on a certain graphics card...

Wow, so basically you don't have any arguments.

Elmar Moelzer
08-02-2010, 01:26 PM
Wow, so basically you don't have any arguments.
Uhm, I do have plenty. I named them all and I am standing by them too.
GPUs have their place and they do certain things really well. Right now there are still a lot of issues that have to be overcome if you want to make good use of the GPU. There are lots of limitations that you have to deal with too. We have had similar discussions in the past, and it is not just me but other developers on this board too who will tell you the same thing.

erikals
08-03-2010, 02:04 AM
the thing i noticed about MachStudio is that it's not GI, but rather AO.
now, if you'd render passes of AO and SSS in Lightwave it's quite fast (renders in seconds)

so, even though it looked really nice, i'm not sure just how much better MachStudio is.
a wild guess is 3 times faster than LW, so not 20.

still, as GPUs are way faster when it comes to calculating fluids, i'm quite sure someone would be able to output the data files so we could import them into Lightwave.

however, if a CPU could calculate 128,000 self-colliding particles in realtime, that'd be cool too, but afaik that's nowhere near possible.

stefanbanev
08-03-2010, 02:09 AM
The GPU has its place and you can do really nice stuff with it

Well, it's difficult to disagree with such a statement. The paragraph below is intended for anyone looking for more specifics about GPU limitations:

The major limitation of the GPU is its SIMD architecture. As soon as an algorithm's execution code path depends on the data content of each item in the array of "Multiple Data", the "Single Instruction" makes it impossible to process all the "Multiple Data" in parallel; they can only be processed sequentially. Having a "few" SIMD units allows a "few" such data-dependent algorithms to run in parallel; for example, 4 SIMD units each 16 floats wide can compute only 4 floats in parallel instead of 64. Thus data-independent algorithms run 16 times faster on such a 4-SIMD-unit GPU, and correspondingly data-dependent algorithms run 16 times slower. It is really puzzling why such a self-apparent limitation of SIMD is not well recognized. Instead, the really minor limitations, such as the PCIe bottleneck and/or memory size, are commonly cited as the GPU's major handicaps, while these are not fundamental problems of the GPU and are improving incrementally.
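a crude cost model of that limitation (illustrative only; real GPUs mask lanes per branch rather than per whole code path, but the effect is the same):

```python
def simd_cost(lane_paths):
    """Cost of one SIMD unit running `lane_paths` (one path per lane):
    every distinct code path taken by any lane must execute serially,
    with the other lanes masked off, so cost = sum over distinct paths."""
    distinct = {tuple(p) for p in lane_paths}
    return sum(len(p) for p in distinct)

def mimd_cost(lane_paths):
    """Cost on independent cores: bounded by the slowest lane only."""
    return max(len(p) for p in lane_paths)
```

with 16 lanes all on the same path the SIMD unit matches MIMD; with 16 distinct data-dependent paths it does 16x the work, which is the data-dependence penalty described above.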

Stefan

Titus
08-03-2010, 08:16 AM
the thing i noticed about MachStudio is that it's not GI, but rather AO.
now, if you'd render passes of AO and SSS in Lightwave it's quite fast (renders in seconds)

so, even though it looked really nice, i'm not sure just how much better MachStudio is.
a wild guess is 3 times faster than LW, so not 20.


It doesn't matter because it's a fallacy. MachStudio is in its first versions (it will incorporate GI soon); it's a young product and a clear demonstration that GPU rendering is here, not just on paper, in thin air, or as an in-house tool. And that was my point.

Now, thanks to Fprime we know there's no need for the GPU when a smart programmer incorporates nice algorithms and optimizations.