Anyone used Pixel Motion Blur in AE CC?



raw-m
07-04-2013, 08:55 AM
As above: I was wondering whether it actually works on LW renders done without MB, with no need to output vector passes or use 3rd-party plugins, and whether you're happy with the results?

Haven't subscribed to the CC thing yet (it's just a matter of time), just researching....

dsol
07-04-2013, 11:57 AM
Like most post-processing solutions, it works - but don't *rely* on it to work perfectly. And of course, things like transparency and fine details (like hairs) completely throw it.

That being said, most large VFX shops seem to do all motion blur as a post-process (seems to simplify the compositing pipeline), so it's not a bad idea. Just a complicated one to get working perfectly.

Greenlaw
07-04-2013, 12:12 PM
I've never used the AE tool, but I think it's just a form of pixel tracking, like what RSMB does when vector data isn't available.

If you have vector data, post-processed motion blur works much better. In the Box at R&H, we did all our motion blur as a post process using vector data embedded in our LightWave .exr files, using Fusion with proprietary motion blur tools. My understanding is that the main studio did the motion blur in their feature film work as a post process too, but using their proprietary compositing program of course. At Little Green Dog, we use Fusion with embedded vector data but with the native Fusion Vector Blur node. Most of the time, post-processed motion blur with vectors can look as good as or better than 'baked in' motion blur, and it takes a small fraction of the time to render. As mentioned above, there are some situations where it doesn't work so well, but there are many 2D tricks you can apply to work around these issues.
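
(Tangent for the curious: the core of a gather-style vector blur is simple to sketch in numpy. This toy version is *not* what Fusion's Vector Blur or the R&H tools actually do--real implementations handle occlusion, edges, and transparency far better, which is where the caveats above come from.)

```python
import numpy as np

def vector_blur(rgb, motion, samples=16, shutter=0.5):
    """Toy gather-style vector blur: each output pixel averages colors
    sampled along its own motion vector. rgb is (H, W, 3) float,
    motion is (H, W, 2) in pixels per frame."""
    h, w = rgb.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    out = np.zeros_like(rgb)
    for i in range(samples):
        t = shutter * (i / (samples - 1) - 0.5)  # spread over the shutter window
        sx = np.clip(xs + motion[..., 0] * t, 0, w - 1).astype(int)
        sy = np.clip(ys + motion[..., 1] * t, 0, h - 1).astype(int)
        out += rgb[sy, sx]
    return out / samples
```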

I haven't worked at too many other studios but I believe these days this is fairly common practice because of the significant savings in render time and added flexibility during compositing.

One more thing--FiberFX now correctly outputs vector data for 'long hair' and it looks fantastic with post processed motion blur. You can see an example of this here:

Excerpt from 'Brudders 2' (A Work In Progress) (https://vimeo.com/channels/littlegreendog/68543424)

Just wondering, does the native AE tool (not RSMB) read vector data?

G.

Greenlaw
07-04-2013, 12:22 PM
Just to be clear, post motion blurring is being used selectively in this clip. For example, it's used on all the character layers and a few environment layers that have fast, sweeping camera moves but we're not using it for the blowing leaves (there's DOF but no motion blur applied--this is intentional.)

G.

raw-m
07-04-2013, 01:02 PM
Thanks for the examples, guys. Lovely work there, Greenlaw, and the design and colours are gorgeous! How is that broken out when it comes to rendering? i.e., separate background, characters, leaves..?

I'd only want to be subtle with the MB, to take off a bit of the digital edge. I really should spend some time with vector passes but don't have the RSMB plugin. I guess I should invest, as it'll pay off in the long run, but I'd like to see a few AE CC examples first.

Greenlaw
07-04-2013, 02:27 PM
Thanks!

My initial goal was to keep the layering simple but it still wound up being more complex than I wanted.

Basically, the environment was rendered in one pass with embedded depth, motion vector, M/O ID, and normals channels. I'm using all of these channels for the environments except motion--partly this was a creative decision to give it a 'stop-motion' look, but also because the motion vectors for instancing are corrupted (all the tombstones are instances, not the leaves.) I also rendered an RGBA matte pass to use as control masks for various elements, and combined with depth and M/O ID, I can pretty much select anything for compositing effects. There's more info about this in the production log (link below).
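
(If you want to poke at those embedded channels outside a compositor, the OpenEXR Python bindings can pull them out. The channel names depend entirely on how exrTrader labels its layers, so the ones below are just placeholders.)

```python
import numpy as np
import OpenEXR, Imath

def read_channels(path, names):
    """Pull named channels out of a multi-channel EXR as float32 arrays."""
    exr = OpenEXR.InputFile(path)
    dw = exr.header()['dataWindow']
    h = dw.max.y - dw.min.y + 1
    w = dw.max.x - dw.min.x + 1
    pt = Imath.PixelType(Imath.PixelType.FLOAT)
    return {n: np.frombuffer(exr.channel(n, pt), dtype=np.float32).reshape(h, w)
            for n in names}

# e.g. env = read_channels('env_0001.exr', ['R', 'G', 'B', 'A'])
```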

Except for the mouth animation, each character is rendered in a single pass, including hair and all the same embedded channels. The mouth was kept separate and added later because this allows me to experiment with the lipsync method or even change the animation completely without having to re-render the characters. The mouth is in two layers, the main color pass and a separate alpha mask--I had to make the mouth two passes for technical reasons but fortunately the layers render very quickly.

M/O ID works great in the environments with instances--I'm using M/O ID to CC specific instanced tombstones because the shading came out a little darker on those than on other tombstones, and I didn't want to re-render the environment just for that. Considering there are many thousands of tombstones in the environment (I forget exactly how many, but it's a lot), having M/O IDs available has been a major time saver.
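
(In numpy terms, that trick boils down to an equality test against the ID buffer--something like the sketch below. The gain and soften values are only illustrative, not what I actually used; the blur relates to the AA note later in the thread.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cc_by_id(rgb, id_buffer, target_id, gain=1.15, soften=0.75):
    """Lift only the pixels whose object/material ID matches target_id.
    ID buffers render without AA, so a touch of blur on the mask
    hides the stair-stepping along selection edges."""
    mask = (id_buffer == target_id).astype(np.float32)
    mask = gaussian_filter(mask, sigma=soften)  # fake edge AA
    return rgb * (1.0 + (gain - 1.0) * mask[..., None])
```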

M/O ID fails for the characters though because FiberFX isn't accounted for in the M/O ID buffers--so if I need to CC Sister's face, for example, I'm just using regular old color keying for the skin. It works fine.

As mentioned above, I'm not using motion vectors for motion blurring the environments except when there is a fast, sweeping camera move. This is because I want to keep motion blur at a minimum to suggest a 'stop motion' look. I did use vectors for blurring the character layers though--I resisted at first, but the effect looked too good to disregard, especially for Sister's hair.

The only other major layer is the sky, which is usually just a sphere with a panoramic texture applied. Sometimes I'm rotating or translating the sphere to make the clouds appear to drift. There are two shots in the preview that use animated clouds rendered in Vue Infinite--the camera matching between LightWave and Vue using VueSync is 100% spot on.

In some layers, environment and characters, I'm using the normals channel to enhance or alter the lighting.

I'll probably go into a little more detail in the prod log at some point, but right now I'm too busy trying to finish the film.

G.

Greenlaw
07-04-2013, 02:31 PM
RSMB optical flow based blurring works surprisingly well even without motion vector data. I use the AE version of the RSMB plug-in in Fusion in situations where I have no vector data available, like when I animate using object sequences but can't use the Object Sequence utility in PointOven to generate an MDD. From what I've seen so far, this is essentially what the AE motion blur plug-in does.
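
(To illustrate the general idea only: RSMB's tracker is proprietary, but an optical-flow estimate of the same flavour can be had from OpenCV's Farneback flow--a rough stand-in, not what RSMB actually does. The estimated vectors can then feed a vector blur like the sketch earlier in the thread.)

```python
import cv2

def estimate_vectors(prev_bgr, next_bgr):
    """Estimate per-pixel motion between two consecutive frames with
    Farneback optical flow. Inputs are uint8 BGR frames; returns an
    (H, W, 2) array of pixel offsets."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```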

Otherwise, I prefer to use the native Vector Blur tool in Fusion because it reads the vector channel directly from LightWave's EXR files without any modifications required.

G.

m.d.
07-04-2013, 04:04 PM
the only issue I have is no vectors for instances...or no proper ones at least....

Greenlaw
07-04-2013, 05:06 PM
Yes, we ran into the same issue back when we used HD Instance. This is another situation where RSMB comes in handy--if I'm going to need motion blurred instances (DP, Instancer, or HDI), I typically render them in their own layer and then pre-process the layer through RSMB before using it for compositing. Actually, this is easier to do now with Instancer because you can use M/O ID to mask all the instances and use RSMB without having to run a separate instances-only layer.

Without RSMB, I can sometimes get away with using the background's motion vectors to 'smear' the instances. Obviously, this depends entirely on what's happening in the shot and will not work in many cases.

On another job, I blocked out a proxy object that covered all the regions where there were instances, and fed the vectors from that into Fusion's vector blur tool.

There are many ways to work around the problem, and when you're on a deadly tight deadline, it's amazing what solutions may suddenly pop into your head. Not that I enjoy relying on 'workarounds'. :P

As a last resort, I might run a separate layer for instances and use Photoreal MB for just this layer. But to be honest, I haven't needed to render anything with MB in LightWave for many, many years. When you're on a tight deadline, rendering 3D motion blur is just too costly, at least on any project I've worked on in recent years.

Just a guess but I think the LW devs are working on adding vectors for Instancer since we currently have other channels like M/O ID. Hope we see an official solution to this issue soon.

G.

raw-m
07-05-2013, 01:56 PM
Many thanks for your time in breaking that down, Greenlaw! Very interesting. Just a quick question while I pick your answer apart (and totally off topic!). How many passes would something like the graveyard scene take? Do you get as much as you can in a single exrTrader type render pass? DP custom render buffers?

Greenlaw
07-05-2013, 11:37 PM
Some additional layers I forgot to mention:

The ukulele is separate for many scenes because of the way the lighting and shadows are set up. I'm doing a lot of lighting and shadow cheats to minimize or even fake the use of GI and subsurface, which requires breaking out a few elements and creating a few custom control masks.

The cast ground shadow is also a separate element. For this I'm using the old standby Shadow Density option for the Alpha channel. Naturally, this element is created only for shots where we see the feet touching the ground. I don't bother rendering fur for the cast shadow passes because it's being blurred anyway. In fact, I don't bother rendering hair/fur for any character passes except the main character RGB pass. I've also disabled hair/fur for reflections in the instrument surfaces (uke and harmonica)--I have Reflection Blur turned up so high that you wouldn't see hair/fur detail, and it would just be wasted render time.

I think that pretty much covers all the layers for the cemetery shots. It's quite a lot when you consider I originally thought I could get away with only a flattened environment and three individual character passes. Well, I guess I could have but... :p

G.

Greenlaw
07-06-2013, 12:02 AM
> Many thanks for your time in breaking that down, Greenlaw! Very interesting. Just a quick question while I pick your answer apart (and totally off topic!). How many passes would something like the graveyard scene take? Do you get as much as you can in a single exrTrader type render pass? DP custom render buffers?

I'm using exrTrader and embedding only a few channels for the main passes. These are RGB, alpha, depth, normals, motion vectors, object ID and material ID. Additionally, I have custom RGBA passes set up for control--this is a standard method for creating up to four masks with AA in a single quick-rendering pass. I use this when M/O ID selection isn't going to be precise enough.
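
(In comp, splitting that custom RGBA pass back into masks is trivial--each channel carries one anti-aliased mask. Which element goes in which channel is assigned at render time, so the names and filename below are only examples.)

```python
import numpy as np
import OpenEXR, Imath

# Split a custom RGBA control pass into four AA'd masks.
exr = OpenEXR.InputFile('masks_0001.exr')  # placeholder filename
dw = exr.header()['dataWindow']
h, w = dw.max.y - dw.min.y + 1, dw.max.x - dw.min.x + 1
pt = Imath.PixelType(Imath.PixelType.FLOAT)
to_mask = lambda c: np.frombuffer(exr.channel(c, pt),
                                  dtype=np.float32).reshape(h, w)

# Example assignments only--one element per channel, four per pass.
sister_mask, uke_mask, leaves_mask, sky_mask = map(to_mask, 'RGBA')
```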

(Note: with output from some 3D programs, like Vue for example, M/O ID selection can be very precise because you have the option to save coverage data which can be used to generate AA on selection edges--unfortunately, we don't have this data available for LightWave renders. That said, M/O ID from LightWave can still be very useful in many situations if you add just a tiny bit of blurring to the M/O ID generated masks.)

So far I haven't needed to create any custom buffers for this project.

G.

raw-m
07-06-2013, 10:03 AM
You've inspired me to stop being so pedantic with my passes setup! It's so easy to get bogged down with being as flexible as possible when just a few buffers will do! Would be great to have coverage for the ID passes--perhaps in LW12? Thanks for your time here, Greenlaw :D