Page 1 of 2
Results 1 to 15 of 22

Thread: Custom rig with Nevron, and other videos

  1. #1

    Custom rig with Nevron, and other videos

    Just to show it is possible. Nevron is also a gateway tool to cross-rig animation transfer, which I'm really excited about!



    I will post all of my demo videos and content in this thread to keep things organized. Maybe someone will find my experiments interesting.
    Professional-level 3d training: Ryan's Lightwave Learning
    Plugin Developer: RR Tools for Lightwave

  2. #2

    Facial Tracking Test

    NOTE: I removed the brow movements from both characters; to me, it seems much more efficient to do the face elements in separate passes rather than trying to do it all at once. The head rotation was also done separately, because the Kinect has a tendency to mess with the lips as you turn your head.

    Last edited by Ryan Roye; 12-26-2013 at 07:36 PM.

  3. #3
    In this test I'm mainly looking at performance for full-body tracking. This clip only received minor cleanup on the legs, and a bit of hand-animation for the hands and head. The whole thing was done in the span of a few hours... a large chunk of which was spent lugging my whole computer downstairs and setting it up in the living room so that I had space to move around.

    As most know, a limitation of using only a single Kinect camera is that it has no way of tracking you if you turn more than 30-40 degrees away from the camera, and if your arms or legs overlap significantly it has to guess where they actually are (and the guess is often wrong).
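Side note for anyone scripting against this: the SDK actually exposes that guessing, since each joint comes back flagged as tracked or inferred. Here's a rough sketch of filtering on that flag; the data structures are made up for illustration, not real SDK calls:

```python
# Hypothetical per-frame joint data: (name, state, position) tuples. The
# Kinect SDK really does flag each joint as Tracked / Inferred / NotTracked,
# but these records are illustrative, not actual API objects.
def usable_joints(frame, accept_inferred=False):
    """Keep only joints whose position the sensor actually measured."""
    ok = {"Tracked"} | ({"Inferred"} if accept_inferred else set())
    return [name for name, state, pos in frame if state in ok]

frame = [
    ("Head",      "Tracked",  (0.0, 1.7, 2.0)),
    ("HandLeft",  "Inferred", (0.3, 1.0, 2.0)),  # occluded limb: a guess
    ("FootRight", "Tracked",  (0.1, 0.0, 2.1)),
]
print(usable_joints(frame))                        # ['Head', 'FootRight']
print(usable_joints(frame, accept_inferred=True))  # all three names
```

Dropping (or down-weighting) inferred joints is one cheap way to keep the "wrong guess" frames from polluting a baked take.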



    I can do most of the simpler dialogue shots I need for Delura with the computer upstairs using mocap, but a little spring cleaning is going to be needed before I can do stuff like what's shown in this video without hauling the compy downstairs. When MS gets off their duffs and puts out the SDK the LW3DG needs to enhance mocap functionality (to use either more cameras or different ones for increased capabilities), I'll be very excited to see it!

  4. #4
    Motion capture test with Speck (not-so-human legs). No cleanup; raw output. Baking plus IK keyframe manipulation is needed for solid foot placement... but it is still a huge timesaver for getting quick motions out of characters that don't require complex choreography.
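For anyone wondering what that foot-placement cleanup boils down to after baking, here's a minimal sketch of the idea (names and thresholds are mine, not LightWave calls): clamp the baked foot keys to the floor and flatten the near-contact jitter so plants read as solid.

```python
# Hypothetical sketch of a foot-contact cleanup pass over baked mocap keys.
# Each key is (frame, y_height); values are illustrative.

FLOOR_Y = 0.0        # height of the floor plane
CONTACT_EPS = 0.02   # foot counts as "planted" below this height

def lock_foot_keys(keys):
    """Clamp baked foot keys to the floor and flatten near-contact jitter."""
    cleaned = []
    for frame, y in keys:
        if y < FLOOR_Y:                    # never push through the floor
            y = FLOOR_Y
        elif y < FLOOR_Y + CONTACT_EPS:
            y = FLOOR_Y                    # treat tiny hover as a solid plant
        cleaned.append((frame, y))
    return cleaned

raw = [(0, 0.01), (1, -0.03), (2, 0.015), (3, 0.40)]
print(lock_foot_keys(raw))  # planted for frames 0-2, airborne at frame 3
```

In practice you'd do this per foot channel after baking, then hand-key the few frames where the heel should roll.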


  5. #5
    evenflcw (Code Muppet; joined Feb 2003; Stockholm, Sweden; 2,642 posts)
    Loved the story twist in video 3. Cool tech demos. Happy New Year!

  6. #6
    Quote Originally Posted by evenflcw View Post
    Loved the story twist in video 3. Cool tech demos. Happy New Year!
    Thanks, and same to you! I'll be putting out a few more tech demos before I feel ready to bust out Nevron on actual productions. So far, it is really opening some doors that have been long closed to me.

  7. #7
    snsmoore (Member; joined Apr 2005; Santa Rosa, CA; 200 posts)
    Ryan,

    Any plans to put some mocap-based cleanup workflows in your IKBoost training series? I'm wondering how IKB can be efficiently used to correct mocap problems on a Genoma-Nevron based rig.

  8. #8
    Quote Originally Posted by snsmoore View Post
    Any plans to put some mocap-based cleanup workflows in your IKBoost training series? I'm wondering how IKB can be efficiently used to correct mocap problems on a Genoma-Nevron based rig.
    Yes! I'll be sure to cover various common mocap issues and their IKBooster remedies in the content... I want people to have a solid understanding of how to take advantage of motion capture without dealing with all the problems people normally run into... like having to re-position the clip every time you load one, having to manually adjust animations to get solid foot placement, or having to use someone else's rig to take advantage of mocap data.

    As for Nevron content in the IKB comprehensive series, I know for certain I'll be covering adapting a custom rig to the Nevron Genoma rig (and what its advantages are over the "native" method), and I may throw in a quick tip about solidifying the leg motions generated from the Kinect so that they look good even in closeup shots.

    It is also possible that I'll do a stand-alone overview and workflow video on Nevron; that's still a bit on the horizon.
    Last edited by Ryan Roye; 01-01-2014 at 06:21 PM.

  9. #9
    geo_n (Super Member; joined Aug 2007; jpn; 4,677 posts)
    Very nice facial tracking test. The lipsync is pretty decent. Were you opening your mouth to extreme positions for the Kinect to pick it up? It would have been great to see how the actual human face needs to move to get decent results. I know lighting plays a big role in the Kinect getting good face mocap.
    The Speck video was really funny.

  10. #10
    Quote Originally Posted by geo_n View Post
    Very nice facial tracking test. The lipsync is pretty decent. Were you opening your mouth to extreme positions for the Kinect to pick it up? It would have been great to see how the actual human face needs to move to get decent results.
    I had literally just gotten my Kinect when I did that demo, so I was still working out the kinks and quirks of facial mocap. Some wrenches include facial mocap being extremely flaky with smoothing settings over 70, or having items in your room that grab the Kinect's attention and throw facial tracking off (it sure hated my red coat hanging on the door).
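For anyone curious what a smoothing setting is doing under the hood, here's a generic exponential-smoothing sketch. To be clear, this is my assumption of how a 0-100 slider might map to a filter, not Nevron's actual implementation:

```python
def smooth(samples, setting):
    """Generic exponential smoothing of a tracked channel.
    `setting` is 0-100: higher means smoother but laggier. The mapping of
    the slider to a blend weight is assumed, not Nevron's actual filter."""
    alpha = 1.0 - setting / 100.0   # blend weight given to the newest sample
    out, state = [], samples[0]
    for s in samples:
        state = alpha * s + (1.0 - alpha) * state
        out.append(state)
    return out

jittery = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth(jittery, 70))  # heavily damped: jitter mostly gone, but laggy
print(smooth(jittery, 0))   # passthrough: identical to the input
```

That lag is one plausible reason very high settings feel flaky: fast mouth shapes get averaged away before they ever reach the morph.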

    I will put out another face recording demo paired with live footage; I think I can do much better than what is shown above now that I'm familiar with Nevron. Basically, as said before, I play the footage at about 1/3rd of the clip's original playback speed to give the Kinect more time to process my face.
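The slow-playback trick implies a retime afterwards: keys captured against 1/3-speed footage come out three times too long, so their times get scaled back down. Conceptually it's just this (a generic sketch; not LightWave's actual scale-keys tool):

```python
def retime_keys(keys, factor):
    """Scale keyframe times by `factor`.

    Capturing against footage slowed to 1/3 speed leaves the recorded keys
    3x too long, so scaling the key times by 1/3 restores real time."""
    return [(round(frame * factor), value) for frame, value in keys]

captured = [(0, 0.0), (30, 1.0), (90, 0.5)]   # recorded against 1/3-speed clip
print(retime_keys(captured, 1 / 3))           # back to the clip's real timing
```

Rounding to whole frames is a choice here; a real tool might keep sub-frame times instead.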

    In the meantime, have a video of me mocapping a dragon and monkeying around like an imbecile. With a bit more work, the dragon's legs could be made aware of the floor plane so they never push past it, but it's just a quick and dirty test for now.


  11. #11
    snsmoore (Member; joined Apr 2005; Santa Rosa, CA; 200 posts)
    Quote Originally Posted by chazriker View Post
    and I may throw in a quick tip about solidifying the leg motions generated from the Kinect so that they look good even in closeup shots.
    That would be a really nice addition (even a rough bonus video would be welcome).

  12. #12
    Not really a full tutorial, but people who are interested in my facial tracking workflow may find this informative.


  13. #13
    geo_n (Super Member; joined Aug 2007; jpn; 4,677 posts)
    Thanks for the vid. Again, the results are pretty decent, especially considering the cost. How far are you from the Kinect? I'm not sure I understand what you did with the graph editor there; it looks like you were calibrating the min/max and multiplying it, but I'm not sure how. There should be a faster and easier way to set this up with Nevron so people won't have to exaggerate their faces for the Kinect to pick them up. That was the problem we had with our test: we had to open our mouths super wide to get something readable by the Kinect. Faceshift has a way to calibrate a person's face so there's less need to multiply motion.
    I think it would be great if you included some of this Nevron info in your upcoming tutorials at Liberty. Very useful stuff.
    For the Japanese market with anime, etc., it's not really critical to have perfect lipsync (as anyone who has watched any anime knows), so this result is perfectly usable.

  14. #14
    Quote Originally Posted by geo_n View Post
    Thanks for the vid. Again, the results are pretty decent, especially considering the cost. How far are you from the Kinect?
    I'm about 2.5 feet away from the Kinect in that video. I do hope to have some Liberty3d.com training content about Nevron in the near future, but of course finishing the IKB Comprehensive videos takes priority before that can happen. Basically, BoosterLink is nearly identical to Cyclist in its functionality, with tiny exceptions; mainly, you determine the minimum and maximum of the desired face element, then shift around two keyframes until the sensitivity level suits your preference. In even fancier setups, one could put in more keyframes so that the mouth always eases into a closed or open state.
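For those trying to picture that two-keyframe setup: functionally it's a clamped remap of the raw tracking channel into the morph range, where the two keys set the sensitivity. A generic sketch of the math (the names are mine, not BoosterLink's API):

```python
def remap(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Map a raw tracking value into a morph range.

    in_min/in_max play the role of the two keyframes: moving them closer
    together raises sensitivity, moving them apart lowers it. The clamp
    makes the morph ease into its fully-open/fully-closed end states."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))               # clamp to the [0, 1] range
    return out_min + t * (out_max - out_min)

# A mouth channel that only spans 0.2-0.6 raw, stretched to a full 0-1 morph:
print(remap(0.4, 0.2, 0.6))   # ~0.5: half open
print(remap(0.9, 0.2, 0.6))   # clamped to 1.0: fully open
```

This is also why you don't have to exaggerate your face as much: narrowing in_min/in_max multiplies small real motions into full morph travel.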

  15. #15
    I got a PS3 Move working without a PS3 or the Move camera. I just needed the gyroscope functionality for the arms, and a used $24 Move controller from eBay works nicely.



    I am using third-party drivers to make this work, and I can only verify that it works with Windows 7.
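For the curious, the core of using a bare gyroscope for arm rotation is just integrating angular velocity over time. A single-axis sketch with made-up sample readings (a real controller needs all three axes plus drift correction):

```python
def integrate_gyro(rates_dps, dt):
    """Integrate gyroscope angular-velocity samples (degrees/second, taken
    every `dt` seconds) into a running absolute angle. Bare integration
    drifts over time, which is why real setups pair the gyro with an
    accelerometer or periodic recalibration."""
    angle, angles = 0.0, []
    for rate in rates_dps:
        angle += rate * dt
        angles.append(angle)
    return angles

samples = [90.0, 90.0, 0.0, -45.0]   # hypothetical readings at 10 Hz
print(integrate_gyro(samples, 0.1))  # running angle after each sample
```

Since there's no camera in this setup, you get orientation only, no position, which is exactly what's needed for driving arm rotations.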

