Custom rig with Nevron, and other videos



Ryan Roye
12-20-2013, 02:52 PM
Just to show it is possible. Nevron is also a gateway tool to cross-rig animation transfer, which I'm really excited about!


http://www.youtube.com/watch?v=cHh2WcaYNJ0&feature=youtu.be

I will post all of my demo videos and content in this thread to keep things organized. Maybe someone will find my experiments interesting.

Ryan Roye
12-26-2013, 07:31 PM
NOTE: I removed the brow movements from both characters; to me it seems much more efficient to do the face elements in separate passes rather than trying to do it all at once. The head rotation was also done separately because the Kinect has a tendency to mess with the lips as you turn your head.


http://www.youtube.com/watch?v=j9-vMnyLbAk&feature=youtu.be

Ryan Roye
12-29-2013, 08:20 AM
In this test I'm mainly looking at performance for full-body tracking. This clip only received minor cleanup on the legs, and a bit of hand-animation for the hands/head. The whole thing was done in the timeframe of a few hours... a large chunk of that was spent lugging my whole computer downstairs and setting it up in the living room so that I had space to move around.

As most know, the limitation of using only a single Kinect camera is that it has no way of tracking you if you turn more than 30-40 degrees away from the camera, and if your arms or legs overlap significantly it has to guess where they actually are (and that guess is often wrong).


http://youtu.be/faX1H1GRthY

I can do most of the simpler dialogue shots I need for Delura with the computer upstairs using mocap, but a little spring cleaning is going to be needed before I can do the stuff shown in this video without hauling the compy downstairs. When MS gets off their duffs and puts out the SDK that the LW3DG needs to enhance mocap functionality (to use either more cameras or different ones for increased capabilities), I'll be very excited to see it! :)

Ryan Roye
12-31-2013, 01:50 PM
Motion capture test with Speck (not-so-human legs). No cleanup, raw output. Baking+IK keyframe manipulation is needed for solid foot placement... but it is still a huge timesaver in terms of getting quick motions out of characters that don't require complex choreography.


http://youtu.be/7kqFCWAP-N0

evenflcw
12-31-2013, 08:36 PM
Loved the story twist in video 3. Cool tech demos. Happy New Year!

Ryan Roye
01-01-2014, 11:16 AM
Loved the story twist in video 3. Cool tech demos. Happy New Year!

Thanks, and same to you! I'll be putting out a few more tech demos before I feel ready to bust out Nevron on actual productions. So far, it is really opening some doors that have been long closed to me.

snsmoore
01-01-2014, 12:56 PM
Ryan,

Any plans to put some mocap-based cleanup workflows in your IK Booster training series? Wondering how IKB is efficiently used to correct mocap problems against a Genoma-Nevron based rig.

Ryan Roye
01-01-2014, 06:17 PM
Any plans to put some mocap-based cleanup workflows in your IK Booster training series? Wondering how IKB is efficiently used to correct mocap problems against a Genoma-Nevron based rig.

Yes! I'll be sure to provide various common mocap issues and IKBooster remedies for them in the content... I want people to have a solid understanding of how to take advantage of motion capture without dealing with all the problems that people normally run into... like having to re-position the clip every time you load one, or having to manually adjust animations to get solid foot placement, or having to use someone else's rig to take advantage of mocap data.

In reference to Nevron content in the IKB comprehensive series, I know for certain I'll be covering adapting a custom rig to the Nevron Genoma rig (and what its advantages are over the "native" method), and I may throw in a quick tip about solidifying the leg motions generated from the Kinect so that it looks good even in closeup shots.

It is also possible that I may do a stand-alone overview and workflow video on Nevron; that's still a bit on the horizon.

geo_n
01-02-2014, 12:26 AM
Very nice facial tracking test. The lipsync is pretty decent. Were you opening your mouth to extreme positions for the Kinect to pick it up? Would have been great to see how the actual human face needs to move to get decent results. I know lighting plays a big role in the Kinect getting good face mocap.
Speck video was really funny :D

Ryan Roye
01-02-2014, 02:29 AM
Very nice facial tracking test. The lipsync is pretty decent. Were you opening your mouth to extreme positions for the Kinect to pick it up? Would have been great to see how the actual human face needs to move to get decent results.

I had literally just gotten my Kinect when I did that demo, so I was still working out the kinks and quirks of facial mocap. Some wrenches include facial mocap being extremely flaky with smoothing settings over 70, or having items in your room that grab the Kinect's attention and throw facial tracking off (it sure hated my red coat hanging on the door).

I will put out another face recording demo with live footage paired with it; I think I can do much better than what is shown above now that I'm familiarized with Nevron. Basically, as I said before, I play the footage at about 1/3 of the clip's original playback speed in order to give the Kinect more time to process my face.

In the meantime, have a video of me mocapping a dragon and monkeying around like an imbecile. With a bit more work, the dragon's legs could be made aware of the floor plane so they never push through it, but it's just a quick and dirty test for now.


http://www.youtube.com/watch?v=MmwA7fgwZDw&feature=youtu.be

snsmoore
01-02-2014, 02:05 PM
and I may throw in a quick tip about solidifying the leg motions generated from the Kinect so that it looks good even in closeup shots.


That would be a really nice addition. (even a rough bonus video would be welcomed)

Ryan Roye
01-02-2014, 02:55 PM
Not really a full tutorial, but people who are interested in my facial tracking workflow could find this informative.


http://youtu.be/wbnRGz-yWWo

geo_n
01-03-2014, 04:15 AM
Thanks for the vid. Again, the results are pretty decent, especially considering the cost. How far are you from the Kinect? Not sure I understand what you did with the graph editor there, but it looks like you were calibrating the min/max and multiplying it, though I'm not sure how. There should be a way to do it in Nevron with a faster and easier setup so people won't have to exaggerate their faces for the Kinect to pick them up. That was the problem we had with our test; we had to open our mouths super wide to get something readable by the Kinect. Faceshift has a way to calibrate a person's face so there's less need to multiply motion.
I think it would be great if you included some of this Nevron info in your upcoming tutorials at Liberty3d. Very useful stuff.
For the Japanese market with anime, etc., it's not really critical to have perfect lipsync (as I'm sure anyone who has watched anime knows), so this result is perfectly usable.

Ryan Roye
01-03-2014, 05:57 AM
Thanks for the vid. Again, the results are pretty decent, especially considering the cost. How far are you from the Kinect?

I'm about 2.5 feet away from the Kinect in that video. I do hope to have some Liberty3d.com training content about Nevron in the near future, but of course finishing the IKB Comprehensive videos takes priority before that can happen. Basically, BoosterLink is nearly identical to Cyclist in its functionality, with tiny exceptions; mainly, you determine the minimum and maximum of the desired face element, then shift around two keyframes until the sensitivity level suits your preference. In even fancier setups, one could put in more keyframes so that the mouth will always ease into a closed or open state.
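
For anyone trying to picture what that two-keyframe setup does numerically, here is a minimal, hypothetical Python sketch of the same min/max-plus-sensitivity idea. It is not Nevron's or BoosterLink's API (the function and value names are invented for illustration); it just shows how a narrow captured range can be stretched to drive a morph across its full range.

# Illustrative only: a generic min/max remap, roughly what the two-keyframe
# setup described above accomplishes. Names and ranges are assumptions.
def remap(value, in_min, in_max, out_min=0.0, out_max=1.0, sensitivity=1.0):
    """Map a raw capture value (e.g. jaw-open amount) into a morph range.
    in_min/in_max: the smallest and largest values actually hit while
    performing (the 'minimum and maximum of the desired face element').
    sensitivity: >1.0 exaggerates the response, <1.0 dampens it, like
    sliding the two keyframes closer together or further apart."""
    t = (value - in_min) / (in_max - in_min)   # normalize to 0..1
    t = min(max(t * sensitivity, 0.0), 1.0)    # scale and clamp
    return out_min + t * (out_max - out_min)   # map into the morph range

# Example: a performer whose jaw-open channel only reaches 0.15..0.45 can
# still drive the mouth morph across its full 0..1 range.
print(remap(0.30, 0.15, 0.45))   # roughly 0.5 (half-open)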

Ryan Roye
02-11-2014, 09:56 PM
I got a PS3 Move working without a PS3 or Move camera. I just needed the gyroscope functionality for the arms, and a used $24 Move controller from eBay works nicely.


http://youtu.be/hLZ3wJCTYyk

I am using 3rd-party drivers (http://code.google.com/p/moveframework/downloads/detail?name=joyemu_3.2.zip&can=2&q=) to make this work, and can only verify that this works with Windows 7.

stevecullum
03-29-2014, 03:19 AM
Hey Ryan, I've been watching through your IK Booster videos - there is some really cool stuff that I didn't even know was possible in LW - onion skinning, that crazy spring effect from that long bone chain - awesome stuff. But there was a bit that kind of got skimmed over that would be very helpful when working with Nevron. How can I use IK Booster to clean up the jitteriness on the feet, then turn it into something I can loop? I have a walk that I would like to use, but it needs some cleaning etc... Perhaps you already covered this somewhere - if so, can you post a link to the right video?

Many thanks!

Ryan Roye
03-29-2014, 09:35 AM
How can I use IK Booster to clean up the jitteriness on the feet, then turn it into something I can loop? I have a walk that I would like to use, but it needs some cleaning etc... Perhaps you already covered this somewhere - if so, can you post a link to the right video?

Kinect mocap is really flaky when it comes to the legs; they will require a lot of adjustment for usable results if the camera isn't positioned to cut off the knees in the shot.

For cleaning up the jitteriness of the feet for Nevron mocap (Kinect), take a look at "FullTimeIK_with_IKB.mp4" bundled with the IKB content. After saving your mocap file, load it into another scene and bake out nulls as shown in the video, which will act as the IK targets. You will then be able to delete blocks of keyframes on those nulls where jittering/sliding occurs in the legs, using linear keys at the end of the block to remove the wobble that TCB interpolation causes. The only reason I suggest this method is that you will need to manipulate the center of gravity (or equivalent) in order to re-balance the character so that their torso is properly aligned over their legs. When done, you can re-save the relative motion and deploy it wherever desired.
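
To make the "delete the block, end it with linear keys" idea concrete, here is a rough Python sketch of the concept only. It is not IK Booster or the LightWave API (the tuple format and function name are invented for illustration); it just shows keys inside a foot-contact block being dropped and the block's boundary keys switched to linear so the foot holds still instead of wobbling.

# Keyframes represented as (frame, value, interpolation) tuples for illustration.
def clean_contact_block(keys, start, end):
    """Remove keys strictly inside [start, end] and force the boundary keys
    to linear interpolation so the foot holds still through the contact."""
    cleaned = []
    for frame, value, interp in keys:
        if start < frame < end:
            continue                   # delete the jittering/sliding keys
        if frame in (start, end):
            interp = "linear"          # avoid the wobble TCB ease-in/out causes
        cleaned.append((frame, value, interp))
    return cleaned

# A foot-height channel that jitters while planted (frames 0-10), then lifts.
foot_y = [(0, 0.00, "tcb"), (3, 0.02, "tcb"), (5, -0.01, "tcb"),
          (8, 0.03, "tcb"), (10, 0.00, "tcb"), (14, 0.35, "tcb")]
print(clean_contact_block(foot_y, 0, 10))
# -> the foot stays planted from frame 0 to 10, then lifts at frame 14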

For normal mocap, you have several options:

- Baking + fix will fit the bill most of the time; just delete a few keyframes using "child" mode on the upper leg for a frame or two after the character needs to lift their leg.

- Binding is a fast way to shove the motion of a limb up to the topmost parent of the hierarchy. This is useful for those occasional instances where both the object and the bones need to move at the same time in order for the motion to operate correctly (i.e., the character is on both of their knees).

- Interpolating via deleting.


As far as looping, you'll need to use the following tools:

- Delete key in child mode.

- Copy Key from Current in All mode

What you'll need to do is make the keyframes very loose at the beginning and end of the motion, and use Copy Key from Current to paste the very first keyframe on the entire hierarchy to the last frame so that the first and last frames match. Alternatively, if you don't want to worry about keyframing work, you can set a pre-mix of 1-4 frames when loading the motion so that it'll transition into it; just know there is a limit to how different the first frame of the second motion can be from the last frame of the first.
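
As a generic illustration of why matching the first and last frames makes a cycle loop, here is a small Python sketch; it is not the actual Copy Key from Current tool, and the data layout is made up. It simply copies each channel's first key onto the last frame of the clip.

def make_loopable(channels, last_frame):
    """channels: {channel_name: {frame: value}}. Copies each channel's first
    key onto last_frame, so the end pose matches the start pose."""
    for keys in channels.values():
        first_frame = min(keys)
        keys[last_frame] = keys[first_frame]
    return channels

walk = {"hip.Y":  {0: 0.90, 12: 0.85, 24: 0.92},
        "knee.P": {0: 10.0, 12: 45.0, 24: 14.0}}
print(make_loopable(walk, 24))
# frame 24 now matches frame 0, so the cycle repeats cleanly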

stevecullum
03-29-2014, 10:28 AM
Thanks for the advice, Ryan. I'm working with regular mocap right now. Been playing about with autobind and bakespots, after deleting the Z motion of the hips bone. The way it propels the character forward is superb! I can see from your 3rd IKB video that it's capable of some really amazing things. One thing I do see in my own experiments is that sometimes the character appears to walk down an invisible flight of stairs. Haven't quite figured out the cause of this yet, but wondered if it might be related to the hips still. Thanks again for the tips - will see what I can come up with now...

Ryan Roye
03-29-2014, 11:37 AM
Haven't quite figured out the cause of this yet, but wondered if it might be related to the hips still. Thanks again for the tips - will see what I can come up with now...

If the character object is moving after loading a relative-prepped clip, then you need to make sure that you delete all the XYZ keyframes of the character object before saving. Remember, you want the "jump" bind null to take care of all the XYZ positioning so you can just load the clip and have it play from where the character is currently standing.

Good luck! If you get stuck at any point just give me a PM or post here.

stevecullum
03-29-2014, 12:57 PM
Thanks Ryan. The two circular bind nulls that get attached, is that to make sure the bakespot's Y position is maintained consistently? From what I'm observing from just playing with a simple bones-only test, it seems to be the cause of the stair-stepping effect. I'm trying to understand the logic of the rig, so I can adapt it to my own setups, etc.

Ryan Roye
04-26-2014, 08:32 PM
Thanks Ryan. The two circular bind nulls that get attached, is that to make sure the bakespot's Y position is maintained consistently? From what I'm observing from just playing with a simple bones-only test, it seems to be the cause of the stair-stepping effect. I'm trying to understand the logic of the rig, so I can adapt it to my own setups, etc.

At some point I must have missed this post. Breakdown:

- The two triangular nulls (footbind left/right) are for hand-keyed motions. All motions using these nulls should assume the character is moving in only one direction. Animations that use these nulls can be steered around freely when prepared and saved correctly. Most often, these are used for walk/run cycles, climbing things like a ladder, etc.

- The sphere-shaped null called "Jump" is used instead of the triangular footbind nulls when dealing with motion capture and motions that have pre-defined turns. The jump null exists mainly to simplify centering the motion on the character object between uses.

Essentially, using a Same As Item constraint to hold the nulls in place and then baking the motion inversely transfers all of the motion onto that null. When a bakespot is then applied and the motion is saved, the bakespots activate and re-invert the motion, applying it to the character object and yielding additive motion (letting it play from where the character is currently standing).
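
Purely as a hedged illustration of the arithmetic behind inverting and re-applying the motion (the Same As Item / bakespot mechanics themselves are LightWave-specific and not shown here), this tiny Python sketch shows relative, additive motion: subtract the clip's starting position so only offsets remain, then add those offsets to wherever the character currently stands.

def make_relative(positions):
    """Subtract the first-frame position so the clip stores only offsets."""
    origin = positions[0]
    return [tuple(p - o for p, o in zip(frame, origin)) for frame in positions]

def apply_at(offsets, current_position):
    """Re-apply the offsets starting from where the character now stands."""
    return [tuple(c + d for c, d in zip(current_position, frame))
            for frame in offsets]

clip = [(2.0, 0.0, 5.0), (2.0, 0.0, 5.5), (2.1, 0.0, 6.0)]  # baked null, world space
rel = make_relative(clip)                                   # offsets from frame 0
print(apply_at(rel, (10.0, 0.0, -3.0)))                     # plays from the new spot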

stevecullum
04-27-2014, 02:40 AM
Thanks for the breakdown, Ryan. After many hours of experimenting, I discovered the source of my problems was a dodgy mocap file. When viewed in an orthographic window, it sloped uphill over time. When I tried with a keyframed walk cycle, the issue went away. I'll have to see if I can fix the source file first and then re-test with the advantage of the above information in mind. Cheers!