Page 2 of 2 FirstFirst 12
Results 16 to 26 of 26

Thread: Facial motion capture solutions?

  1. #16
    Newbie Member
    Join Date
    Mar 2006
    Location
    Wall Twp, NJ
    Posts
    46
Will do. I've put in the request at work to attend SIGGRAPH. We'll see what happens. You're correct re the $5,000. I investigated the $5,000 base package for full-body. It doesn't include the suit, markers, camera mounts, etc., so it will get a bit more expensive. I've also concluded that I may have to toss in another $4,000 for MotionBuilder, depending on how some of the cheaper options pan out. My understanding is that MotionBuilder will pass FBX files back and forth with LightWave.

    Thanks again for the help everyone.

  2. #17
    Post-apocalyptic rakker16mm's Avatar
    Join Date
    Aug 2006
    Location
    Palo Alto California
    Posts
    872
Zign Track looks like it might be the poor man's solution. I'd love to hear from anyone who has actually used it, especially with LightWave.

  3. #18

  4. #19
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,164
    Wow, this is an old thread. Thanks for bumping it though...mycap looks pretty interesting and it apparently supports LightWave too. Will have to check it out when I have time.

    Just to bring this thread up-to-date, LW3DG's own Nevron Motion supports face capture if you have a Kinect and can run under Windows. I have yet to try this myself though. (Too many projects on my plate already.)

    G.

  5. #20
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,164
    Looking at the video examples, I guess I have seen this before. Still looks really good--don't know how I forgot about it.

  6. #21
    Plus the price is attractive. I'd like to see how the lw export works...

  7. #22
    Goes bump in the night RebelHill's Avatar
    Join Date
    Nov 2003
    Location
    jersey
    Posts
    5,771
The export works the same way all these things work... you get a bunch of nulls whose animation matches the captured marker positions.

    So unless you can build a bones-based face rig in LW (which is so problematic you may as well say no, you can't), it's useless... And even if you could build such a rig, there's no way to retarget the marker data to fit an alternative face shape/proportions.
    LSR Surface and Rendering Tuts.
    RHiggit Rigging and Animation Tools
    RHA Animation Tutorials
    RHR Rigging Tutorials
    RHN Nodal Tutorials
    YT Vids Tuts for all

  8. #23
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,164
    Ah, thanks RH. I wondered if that would be the case because that's exactly what I ran into with Brekel Pro Face, which I felt was too much trouble to deal with in the insanely tight schedules I have.

Face capture is not a top priority for me, but I would like a practical solution that works well enough to use on digital stunt doubles for fx work. Last week I shot mocap for digital doubles for a couple of film productions--the body motion will be fine but there's no face animation, of course. Maybe next time around, I'll try Nevron Motion for the faces, because for these types of shots even the most basic facial movements would be the 'icing on the cake'.

    G.

  9. #24
You can use Expressions to measure the relationships (such as distances and angles) between the nulls etc. and use that to drive morphs, which is how these facial retargeters often work. I build them regularly, although not in LW.

    As a very simple example, you can use the chin null position to drive a mouth open/shut morph, or even a bone rotation. By having a layer of abstraction like this, it really doesn't matter if the face doesn't fit your character.
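    The chin-to-morph idea above can be sketched in a few lines. This is a minimal illustration, not code from LightWave or any retargeting plugin; the calibration values (`rest_y`, `open_y`) and the idea of clamping to 0..1 are assumptions about how such an expression would typically be set up.

    ```python
    # Sketch: drive a mouth-open morph weight from a tracked chin null,
    # as in the expression-based retargeting described above.
    # rest_y / open_y are hypothetical calibration poses, not tool-specific values.

    def mouth_open_weight(chin_y, rest_y, open_y):
        """Map the chin null's vertical position to a 0..1 morph weight.

        rest_y: chin height with the mouth closed (calibration pose)
        open_y: chin height with the mouth fully open
        """
        span = open_y - rest_y
        if span == 0:
            return 0.0
        t = (chin_y - rest_y) / span
        return max(0.0, min(1.0, t))  # clamp so tracker overshoot can't break the morph

    # The capture face's proportions don't matter here: only the normalized
    # travel between the two calibration poses drives the character's morph.
    print(mouth_open_weight(-0.5, 0.0, -1.0))  # halfway open -> 0.5
    ```

    Because the weight is normalized against the performer's own calibration range, the same expression retargets onto any character, which is the "layer of abstraction" point made above.
    
    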

  10. #25
    Not doing nothing.... Tranimatronic's Avatar
    Join Date
    Dec 2003
    Location
    Vancouver BC
    Posts
    496
That's pretty much what Nevron (Kinect SDK) and Dynamixyz http://www.dynamixyz.com/main_WordPress/ do.
    They measure the difference between a null's current position and its start position and use this as the amount of the morph (grouping the nulls together to form logical facial expressions).
    The only problem with this is that these are broad approximations of the exact pose (with Kinect we are trying to cover every possible expression using 6 morph targets), and you often find yourself painting corrective or more expressive morphs.
    Notice that on the much more expensive Dynamixyz you don't get told how many morphs were used. I think with a whole lot of patience (and the ability to record the audio/video from Nevron, which currently you can't do) you could get good results half mocapping, half hand-animating.

    For background characters or a quick first pass though they are fast and (in Nevron's case) inexpensive.
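    The displacement-from-neutral scheme described above can be sketched roughly like this. All names, the grouping, and the `full_travel` calibration constant are illustrative assumptions; this is not Nevron's or Dynamixyz's actual implementation.

    ```python
    # Sketch: each morph's weight comes from how far its group of nulls has
    # moved from the neutral (start) frame, averaged over the group.
    import math

    def morph_weights(current, neutral, groups, full_travel=0.02):
        """current/neutral map null name -> (x, y, z); groups map morph -> null names.

        full_travel is the displacement (in scene units) that maps to weight 1.0;
        in practice it would come from a calibration pass, not a hard-coded value.
        """
        weights = {}
        for morph, nulls in groups.items():
            avg = sum(math.dist(current[n], neutral[n]) for n in nulls) / len(nulls)
            weights[morph] = min(1.0, avg / full_travel)  # clamp at full pose
        return weights

    # Hypothetical two-null brow group: both brows raised 0.01 units.
    neutral = {"browL": (0.0, 1.0, 0.0), "browR": (0.0, 1.0, 0.0)}
    current = {"browL": (0.0, 1.01, 0.0), "browR": (0.0, 1.01, 0.0)}
    groups = {"brows_up": ["browL", "browR"]}
    print(morph_weights(current, neutral, groups))  # approximately {'brows_up': 0.5}
    ```

    This also makes the limitation above concrete: everything the nulls do gets funneled into a handful of scalar weights, so poses outside the span of those few morph targets can only be approximated, hence the corrective morphs.
    
    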

  11. #26
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,164
Thanks for the info! Yes, in this case all I'm expecting from Nevron is just a little extra life in the faces of the digital stunt doubles, not grand performances. But given my typical schedules, I'll have to be able to record and apply quickly. Such a pipeline could make a big quality difference for certain types of shots we do at work. Well, in the future anyway--my current schedule keeps me far away from too much experimenting.

    G.

