
View Full Version : FACE MOCAP tech for film questions



silviotoledo
05-19-2012, 07:46 PM
I have some questions about FACE MOCAP I'd like to ask people who are actually using the process in production:


Let's say you use XSI Face Robot or MotionBuilder for retargeting and want to animate the body in Maya or LightWave. The point cache data is not editable, so the best way is to use morph targets/endomorphs. So these are my questions:


1) Does the software export a list of morphs based on the MOCAP data? I know they do, but...

2) If yes, do they filter the morphs per area (mouth, eyes, nose, ...)?

3) Do they also export the animation of these morphs, mixed on the timeline so that it matches the MOCAP data? Or must the animators reconstruct the motion again with keyframes?

4) Does the software compare the deformed motion from the capture with a prebuilt library of morphs and give you a similar animation on the timeline, with editable mixed morphs?
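For what it's worth, question 4 describes what is usually called blendshape (morph) weight fitting: per frame, solve for the mix of morph deltas that best reproduces the captured point positions. A minimal numpy sketch of the idea; all names, shapes, and data below are illustrative assumptions, not any of these products' actual code:

```python
import numpy as np

def solve_morph_weights(neutral, morphs, captured):
    """neutral: (P, 3) rest points; morphs: (M, P, 3) morph target shapes;
    captured: (P, 3) tracked points for one frame. Returns (M,) morph weights."""
    # Each column of A is one morph's delta from the neutral pose, flattened.
    A = np.stack([(m - neutral).ravel() for m in morphs], axis=1)  # (3P, M)
    b = (captured - neutral).ravel()                               # (3P,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Clamp to the usual 0..1 slider range used by morph mixers.
    return np.clip(w, 0.0, 1.0)

# Tiny demo: one point, two morphs that move it along x and y respectively.
neutral = np.zeros((1, 3))
morphs = np.array([[[1.0, 0.0, 0.0]],
                   [[0.0, 1.0, 0.0]]])
captured = np.array([[0.5, 0.25, 0.0]])
print(solve_morph_weights(neutral, morphs, captured))  # ~[0.5, 0.25]
```

Solving per frame like this yields exactly the kind of editable morph curves the questions ask about, since each weight lands on a normal morph slider.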

silviotoledo
05-21-2012, 12:43 PM
No LightWave artists working with face mocap?

RebelHill
05-21-2012, 12:53 PM
Nope.

And Face Robot doesn't use morphs, it uses a rig, and MB doesn't do face mocap... at all.

DigitalSorcery8
05-21-2012, 02:10 PM
Nope.

And Face Robot doesn't use morphs, it uses a rig, and MB doesn't do face mocap... at all.

I think you're wrong about MotionBuilder and face mocap. I seem to recall testing it a while back with some Optitrack facial mocap data with morph targets in MB. I may be wrong, but I'm pretty sure I did that.

RebelHill
05-21-2012, 02:23 PM
Yes, of course... there are Character Face and Actor Face, aren't there? You need to set up, I think, clusters on the target face. I don't think you can do it to/from LW.

DigitalSorcery8
05-21-2012, 02:47 PM
I seem to remember being able to transfer the motions to LW, but I didn't like the results AND it was a PIA workflow. Getting standard mocap from MB to LW was FAR easier in comparison.

Regardless, for the time being I've chosen to go the TAFA route - not as fast as mocap but certainly easy to work with. :) Perhaps eventually I'll try out Maskarad with LW - when I find the time.

silviotoledo
05-22-2012, 12:45 PM
Thanks for answers!

Rebell

"And Face Robot doesn't use morphs, it uses a rig."

That's the point. Face mocap data is only useful to drive a rig, not to drive deformers such as bones, and Face Robot does a pretty good retarget from those points onto the complete, nice face rig it has inside. So you're saying Face Robot only outputs point cache data? No morph sequences?

"and MB doesn't do face mocap..."

Yeah, I know, but it does retargeting, and I suspect it outputs mocap data as morphs, the same way Maskarad does. I just need to learn more about this.

DigitalSorcery8

The problem with using Maskarad with LightWave is that Maskarad does not do retargeting; it just exports the point cloud and a "generic" 3D mask you can use to deform another object. I'm able to get the same data with SynthEyes, Boujou, After Effects, or Blender (for free). The problem remains the retargeting.

http://youtu.be/IRxuLu_kods

Here we see that Maya gets the Maskarad mocap data, compares it to a pre-existing list of morph targets on the model, and converts the mocap point data into morph sequences. That is very desirable, since morphs are easily editable and control the shape the way we want.

So, I'd like to know whether MotionBuilder creates the list of morphs on its own from the mocap data, or whether it needs a set of prebuilt morphs to compare against the mocap data, the way Maskarad does. It would be sad if Face Robot can't do it.

The second thing I need to know is how to export the mixed morph data and use it in LightWave. Will FBX or Collada carry the face morph animation? And would we get the animation in the Morph Mixer panel?


Yeah, TAFA seems to be the best solution at the moment, but I'd like to save more time.

DigitalSorcery8
05-22-2012, 01:15 PM
So, I'd like to know whether MotionBuilder creates the list of morphs on its own from the mocap data, or whether it needs a set of prebuilt morphs to compare against the mocap data, the way Maskarad does. It would be sad if Face Robot can't do it.
I don't think MotionBuilder creates the morphs at all. You have to have the morphs already IN the model, and then you list them inside MotionBuilder. MB then uses those morphs when retargeting. I don't know anything about Face Robot except what I see on YouTube. I wish I could afford XSI, but then I just don't like Autodesk and resist the urge. :)


Yeah, TAFA seems to be the best solution at the moment, but I'd like to save more time.
I've tried Mimic Pro for LW and Magpie Pro, and cleaning up the data to make it look as good as TAFA took nearly the same amount of time - and TAFA was less irritating because you weren't cleaning up data, just starting from scratch. And the cool thing is, I had never done ANY facial animation before! For the production I hope to be doing, we'll be using mocap for the body and hands, but most likely TAFA for all facial motion. Of course, if there were a way to get facial mocap EASILY into LW, I'd be open to that.

There must be a way though - how does ZignTrack get the data into LW?

silviotoledo
05-22-2012, 01:25 PM
how does ZignTrack get the data into LW?

ZignTrack does retargeting but uses bones to deform the face. Several bones deforming the face will look a bit jelly-like, and I've not seen good results in LightWave with it, no matter if you have some muscle bones on the face. But it seems to work better in messiah.

Maybe a good solution is to use ZignTrack's retargeting ability to get nulls that drive a rig built in LightWave, instead of the bones themselves. Anyway, those nulls would work better driving morphs than bones.

silviotoledo
05-22-2012, 01:43 PM
Oh sorry! ZignTrack doesn't do retargeting. It was Poser doing the retargeting I saw, and I got confused.

ZignTrack on Poser:
http://www.youtube.com/watch?v=-TTR0JrocsI

See how it flickers a little on the face. Anyway, Poser has animation layers and LightWave doesn't, so we can do corrections in Poser easily.

http://www.youtube.com/watch?v=LE-ZzyBLlDk


A curious approach I will need to test for retargeting in LightWave is:

MOVE THE PIVOT POINT of the nulls. Maybe just moving the pivot to the place we want will result in retargeting in LightWave.
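As an aside, the pivot trick amounts to expressing each null's motion relative to a chosen rest point, so only the delta drives the rig. A trivial sketch of that idea (purely illustrative; the data is made up):

```python
import numpy as np

def localize(track, rest):
    """Express tracked marker positions relative to a rest position; moving a
    null's pivot to that point gives the same local translation in the rig."""
    return np.asarray(track, float) - np.asarray(rest, float)

# Two frames of a tracked null; use frame 0 as the rest (pivot) position.
track = np.array([[1.0, 2.0, 0.0],
                  [1.5, 2.0, 0.0]])
print(localize(track, track[0]))  # frame 0 becomes the origin
```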

pooby
05-22-2012, 01:46 PM
I don't understand why you think point caching is a bad option?

Surrealist.
05-22-2012, 01:57 PM
If you are going to be introducing XSI into your animation pipeline, what then is the advantage of bringing things over to LightWave for further rigging and animation?

XSI is a great tool that could be used to improve your animation pipeline greatly.

Have you thought about just finishing the animation in XSI and then bringing it over via Point Oven?

You may already have good reason. But just wondering.

silviotoledo
05-22-2012, 02:38 PM
Pooby

I've seen your great face animations on XSI!

The disadvantages:

1) I can't edit MDD files in LightWave itself (modo allows sculpting over them - cool). So if later tweaks are necessary, I would have to go back to the other software and export the MDD again. Not as fast as just adjusting in LightWave.

2) MDD files are also large.

3) I would have to CUT my character's head from the body, since I can't apply MDD to a partial object; or I would have to use Metalink, and Metalink does not work well on partial objects.



Thanks for the suggestion, Richard.

XSI licenses cost a bit more than LW, and XSI professionals would also cost much more for me. They're 2,600 kilometers away, working in a competitive and well-paid market, and I'd have to pay more than that market pays to bring some here. Here I have trained some Wavers who can do the job. I just need to be sure about what can be done in LightWave.



And I'm happy: my theory that MOVING THE PIVOT POINT is useful for doing face retargeting in LightWave works. See the test attached.

With some kind of CALIBRATION, I would also be able to use the point data from the mocap to drive the morphs too, if driving a bone face rig in LightWave turns out not to be good enough.
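The calibration idea above can be sketched simply: record each marker's neutral position and its position at the morph's extreme, then project the live displacement onto that axis to get a 0..1 morph weight. A hypothetical example (the function, names, and numbers are all made up for illustration):

```python
import numpy as np

def calibrate(neutral_pos, extreme_pos):
    """Given a marker's neutral position and its position at a morph's extreme,
    return a function mapping a tracked position to a 0..1 morph weight."""
    neutral = np.asarray(neutral_pos, float)
    axis = np.asarray(extreme_pos, float) - neutral
    length_sq = axis @ axis
    def weight(pos):
        # Project the displacement from neutral onto the calibration axis.
        disp = np.asarray(pos, float) - neutral
        return float(np.clip((disp @ axis) / length_sq, 0.0, 1.0))
    return weight

# Jaw-open example: chin marker, neutral at the origin, fully open at y = -2.
jaw_open = calibrate([0.0, 0.0, 0.0], [0.0, -2.0, 0.0])
print(jaw_open([0.0, -1.0, 0.0]))  # 0.5
```

The clipping doubles as noise rejection: tracker jitter past the calibrated extremes can't push a morph slider out of range.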

snsmoore
05-22-2012, 03:27 PM
A while ago I was exploring facial animation options and I settled on TAFA. Just by monkeying around in TAFA, my 1st animation was better than anything I had ever done with lipsync and I had good results in 20-30 minutes for 30+ seconds of dialog... (and the audio scrubbing is fast and pain free...) There are a lot of features I never even touched, but I plan on using it for all my facial animations.

Now I've just gotta settle on what to do for the rest of the character....really leaning towards mocap, since I'm seeing a lot of good results.

-shawn

pooby
05-22-2012, 03:38 PM
Thanks.

You can add bone animation over MDD-driven points in LightWave if you use the DP Kit nodal MDD reader and set it to "before bones".

But it makes little sense to edit this in lightwave if you are generating the facial animation in a more suited package. You should do it all there.

DigitalSorcery8
05-22-2012, 07:37 PM
A while ago I was exploring facial animation options and I settled on TAFA. Just by monkeying around in TAFA, my 1st animation was better than anything I had ever done with lipsync and I had good results in 20-30 minutes for 30+ seconds of dialog... (and the audio scrubbing is fast and pain free...) There are a lot of features I never even touched, but I plan on using it for all my facial animations.
Exactly the same results here as well. Never did anything with lipsync and TAFA allowed me to get incredible results in a very short time.

Too bad it hasn't gotten more press because it's probably THE best program for lipsync in LW.

Now I've just gotta settle on what to do for the rest of the character....really leaning towards mocap, since I'm seeing a lot of good results.
I was lucky enough a few years ago to have been able to afford an 8 camera Optitrack system. Mocap for me was THE way to go. I was able to get the animation I needed far more quickly than with keyframe animation. I was able to focus more on telling the story than having to worry about walk cycle timing and other keyframe-related things. Mocap can really free your imagination if you've got a system that works well. Considering how well ipisoft is currently working, this is a great option! :thumbsup: