
View Full Version : How many AA passes are the kids using these days?



Dan Ritchie
08-02-2016, 02:08 PM
In the old days we actually did whole shows with 5 passes. Star Trek got 28 on one particularly noisy shot and I thought that was exotic at the time.
What are people using these days?

vonpietro
08-02-2016, 02:23 PM
Between 15 and 30 usually, and no more than 30 since it really jacks up the render times, but it does clean up nicely on a case-by-case basis.

Prince Charming
08-02-2016, 03:17 PM
In the old days we actually did whole shows with 5 passes. Star Trek got 28 on one particularly noisy shot and I thought that was exotic at the time.
What are people using these days?

Also, it depends on how much motion blur/DOF you have going on. For low or no motion blur/DOF you can get away with min 3, max 16, with GI MC samples at 8, Adaptive at .01, low-discrepancy sampling. Depending on your lights and reflection/refraction blur you may also want to up your light and shading samples if needed. If you have heavy DOF/motion blur I have gone up to 128 camera samples or more, but then take the GI samples down to 1 and shading and light samples to 1 as well. It's really a game of "fight the noise" wherever it ends up being, whether that's in shading, lights, GI, or DOF/motion blur.
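The min/max-with-threshold scheme described above can be sketched as a loop. This is a hypothetical illustration of the general idea, not LightWave's actual sampler: `shade()`, the sample counts, and the threshold are stand-ins.

```python
import random

def shade(x, y):
    # Stand-in for the renderer's shading call: returns one noisy
    # luminance sample for the pixel (hypothetical).
    return 0.5 + random.uniform(-0.1, 0.1)

def adaptive_sample(x, y, min_samples=3, max_samples=16, threshold=0.01):
    # Average samples until the running mean stops changing by more
    # than the adaptive threshold, or until max_samples is reached.
    total = 0.0
    mean = 0.0
    for n in range(1, max_samples + 1):
        total += shade(x, y)
        new_mean = total / n
        if n >= min_samples and abs(new_mean - mean) < threshold:
            return new_mean, n   # converged early: smooth region
        mean = new_mean
    return mean, max_samples     # noisy region used the full budget

value, used = adaptive_sample(0, 0)
```

Lowering the threshold pushes more pixels toward the max count, which is why noisy content (DOF, blurry reflections) eats the whole budget while flat areas exit early.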

raw-m
08-03-2016, 12:04 AM
I'm liking the look of the Gaussian reconstruction filter for animation lately: min 1, max 16, with no MB (done in AE using RSMB), DOF or GI.

Amerelium
08-04-2016, 02:13 AM
1-8 samples, 0.01 threshold, box sharp classic. I'm doing a 3-minute movie of a 90 km Culture GSV, going from an outside to an inside environment; it's going to take 3 months. The only visible difference I detected at higher settings was on the windows. I've got over 9 million of them placed, so I'd be rendering for years if I were to make it 100% 'pretty'...

Greenlaw
08-04-2016, 02:46 PM
I agree with the others--it really depends on the scene, i.e., type of detail, contrast, etc. Motion blur factors in too, but personally I haven't used 'in-camera' motion blur since 2005, preferring post-blurring with motion vector data, whether in studio production or personal work. I just don't have the time or patience for 'real' motion blur.

My typical settings are Adaptive Sampling on, 10-12 samples and up from there, but very rarely going over 18-20. I like to use multi-sample lights like Dome and DP Infinite, so I usually have my Light Samples between 6 and 10, possibly higher, but it depends on how much noise I see in the shadows.

I've worked with artists who have varying ideas about what's optimal so I suggest experimenting and figuring out what works best for you.

Prince Charming
08-04-2016, 05:15 PM
but personally I haven't used 'in-camera' motion blur since 2005,
Did you use another app for instancing or something, or not use instances at all?

Greenlaw
08-04-2016, 06:37 PM
Lightwave Instancing can output motion vectors, as can FiberFX.

In the distant past, back when we used HD Instance and Sasquatch, we often had to cheat the motion vector data by using proxy geometry, or in the case of Sasquatch a custom Fusion tool that expanded the vector data to cover the region where the 'hair' existed. This wasn't especially accurate, but visually it worked most of the time. In really difficult situations we might render the Sasquatch pass with 'in-camera' motion blur, but that was pretty rare.

TBH, I'm really glad I don't have to do any of the above anymore. It's so much easier to just render an EXR with the proper motion vector data for compositing.
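The vector-blur step described above, smearing each rendered pixel along the motion vector stored in the EXR pass, can be sketched roughly. This is a toy single-channel version with made-up data, not Fusion's actual tool:

```python
def vector_blur(image, motion, steps=8):
    # Blur each pixel along its per-pixel motion vector, the way a
    # compositor's vector-blur tool consumes a motion-vector pass.
    # image: 2-D list of floats; motion: 2-D list of (dx, dy) tuples.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]
            acc = 0.0
            for i in range(steps):
                t = i / (steps - 1) - 0.5   # walk from -0.5 to +0.5 along the vector
                sx = min(max(int(round(x + dx * t)), 0), w - 1)
                sy = min(max(int(round(y + dy * t)), 0), h - 1)
                acc += image[sy][sx]
            out[y][x] = acc / steps
    return out

# A single bright pixel moving 6 px to the right smears into a streak:
img = [[0.0] * 9 for _ in range(9)]; img[4][4] = 1.0
vec = [[(6.0, 0.0)] * 9 for _ in range(9)]
blurred = vector_blur(img, vec)
```

A real compositing tool samples sub-pixel positions and weights the taps, but the principle is the same: the blur is reconstructed in post from the vector pass, so no extra AA passes are spent on it at render time.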

Prince Charming
08-04-2016, 06:46 PM
Instancing can output motion vectors, as can FiberFX.
LOL... not since 2005.

Greenlaw
08-04-2016, 06:55 PM
No, not since then. :)

I think I started using FiberFX motion vectors in Fusion in 2012 or 2013. Wait, I have a blog entry about that:

http://littlegreendog.com/2013/03/07/sister-makes-her-move/

Back when we did Devil May Cry (2011 or 2012 I think?), I used the vector expansion trick for FiberFX described above, and in some cases where I felt I could get away with it I just threw RSMB over the fiber pass. This was actually the first job where we used FiberFX; before that we always used Sasquatch.

Prince Charming
08-04-2016, 06:57 PM
It's lame to change your comment and make my response look irrelevant. You could have easily just posted another response, but at least now I know what kind of person I am talking to...

spherical
08-04-2016, 10:39 PM
OK.... I'm outtahere.... This is turning, again, as ugly as other threads previously have. Yeesh.

Prince Charming
08-04-2016, 10:46 PM
OK.... I'm outtahere.... This is turning, again, as ugly as other threads previously have. Yeesh.
I mean, you do see what I was talking about, right? I thought I was pretty kind about it.
I replied to a comment, then the comment I replied to was changed to be something other than what I replied to. It's not that big a deal, but it is kinda lame... do you not agree? Do you think that is proper forum etiquette?

magiclight
08-05-2016, 03:41 AM
Always include the comment you reply to in your own post (I didn't do that now ;) ). It avoids that kind of problem, even though you may get complaints from moderators now and then when you quote the previous post.

back on topic: I have to use around 30 or more but I am using noisy dome lights and spherical lights all the time.

Greenlaw
08-05-2016, 08:23 AM
It's lame to change your comment and make my response look irrelevant. You could have easily just posted another response, but at least now I know what kind of person I am talking to...

My apologies, but I think I misunderstood your post. I thought you were asking how we handled motion blur for instances, or any image filter based renders that did not support motion vectors back in 2005. I'll be happy to clarify further but now I'm not exactly sure what you're asking.

It's possible, even likely, that I changed my post--I frequently do that to correct typos or improve clarity, but not to embarrass another user.

Prince Charming
08-05-2016, 11:30 AM
My apologies, but I think I misunderstood your post. I thought you were asking how we handled motion blur for instances, or any image filter based renders that did not support motion vectors back in 2005. I'll be happy to clarify further but now I'm not exactly sure what you're asking.

It's possible, even likely, that I changed my post--I frequently do that to correct typos or improve clarity, but not to embarrass another user.
I get that, Greenlaw, but the thing is... I would be curious as to why your use of motion blur since 2005 was even brought up to begin with. This is a thread on AA... which is affected by motion blur... which IS a current feature in LW.

And your claim of using proxy objects is ridiculous under many circumstances that I have run into personally. I have been using DP nodal instancing since long before there was a motion vector pass for instances. So if I had 200 different objects going in 200 different directions, driven by nodes... I would really like to know how you would get proxy objects to match that motion. I could do it using nodes, but then what would be the point of even using instances to begin with?

So while I am really happy for you that you have not needed LW motion blur since 2005... it is totally irrelevant to the question posted in this thread. Motion blur is a feature that is available in LW and does affect AA in a big way if you need to use it. And even now that there are motion vectors, I can think of many cases for motion-graphics-type FX that don't take long to render and have very complex motions, where it is worth doing it in camera as opposed to with motion vectors.

In a thread on AA I think it is important to discuss all settings currently in the app that affect it. Whether or not you have used them since 1880 is irrelevant to the conversation.

Greenlaw
08-05-2016, 11:45 AM
Oh, sorry. Somebody mentioned motion blur earlier. AA levels definitely have an impact on in-camera motion blur. I wanted to acknowledge that, even though I don't actually use in-camera motion blur.

As for 2005, that was the year we (the Box at Rhythm & Hues, which sadly is no more) started using LightWave motion vectors and post processing for our final renders in Fusion. This was when we did our first Call Of Duty commercial, which was also our first HD job. During the R&D phase we realized that we simply didn't have the schedule to support in-camera motion blur, so we looked into post-process motion blurring instead. We had great success with that and pretty much stopped using in-camera blur after that. We did use HD Instance at the time, which did not provide motion vector data, so we had to come up with tricks for getting reasonably accurate motion vector data for it.

I admit, I may have gone off on a tangent. Sorry for straying off topic. :D

Anyway, as mentioned earlier, Instancing in LightWave 2015 does provide motion vector data for post-processed motion blurring, which I believe is relevant to this topic. Actually, this worked in LightWave 11.5 too, but I think it might have been broken in 11.6.

raw-m
08-05-2016, 12:15 PM
Thanks, elegant and informative as always, Greenlaw.

Prince Charming
08-05-2016, 12:28 PM
Yes... very, very, very elegant! Lmfao! You want a reach-around too? Sounds like Mark and spherical may be interested ;)

I personally still don't see the relevance of the info you are giving to the actual AA settings needed in LW. But hey... I might be crazy. Sounds to me like you enjoy listing your resume in forums on a thread about AA.

Greenlaw
08-05-2016, 01:30 PM
meh.

Prince Charming
08-05-2016, 02:13 PM
It's just that I don't see how your employment history or the date you stopped using motion blur actually does a damn thing to help someone who may need to use that feature... that's all. Like I said... I may be crazy.

hem.

erikals
08-05-2016, 09:16 PM
As for 2005, that was the year we (the Box at Rhythm & Hues, which sadly is no more) started using LightWave motion vectors and post processing for our final renders in Fusion. This was when we did our first Call Of Duty commercial, which was also our first HD job. During the R&D phase we realized that we simply didn't have the schedule to support in-camera motion blur, so we looked into post-process motion blurring instead. We had great success with that and pretty much stopped using in-camera blur after that. We did use HD Instance at the time, which did not provide motion vector data, so we had to come up with tricks for getting reasonably accurate motion vector data for it.

good info, heard others were also doing that.

also the info is closely related to AA. after all, MotionBlur works in conjunction with AA.