Thread: A Request for Mocap Pointers

  1. #1 · Registered User · Joined Feb 2014 · United States · Posts: 31

    A Request for Mocap Pointers

    Greetings,

    I am a proud newbie interested in eventually creating an animated pilot for a children's television show. I have LightWave, NevronMotion, and Adobe Master Suite. Can someone please suggest a quality mocap setup? I need the motion to be smooth and the facial capture to be very accurate, because the characters will be singing and dancing. I currently have someone modeling, texturing, and rigging the characters, but I'm getting confusing and conflicting information about the mocap end of things... some say use iPi, others say multiple Kinect setups, hand controllers, plastic helmets with Wii remotes duct-taped to them... Aaaargh! Can someone please think back to when none of this made sense to them and have mercy on a dedicated and determined newbie, without the high-tech lingo and condescendingly sarcastic tone please? Lol! I would GREATLY appreciate it.

    Oh yeah...I haven't purchased any mocap devices yet; I'm still researching the possibilities.

    Thanks,
    Ike

  2. #2
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,155
    In my opinion, getting into 'entry level' motion capture isn't a trivial thing you can pick up instantly, at least not if you want to do it well and your goal is 'pro-level' production animation. It's one thing to shoot random motions with a Kinect and retarget them to a rig, but being able to knock out specific actions for a character on a scheduled deadline takes a solid knowledge of the gear and its limitations, a good understanding of how to perform to get the desired result, and knowing how to edit the data to make it work for your scenes. 'Home brew' mocap is a lot of work, but if you have the will to do it, it can also be a lot of fun.

    Here's the process in a nutshell:

    1. Capture your performance.
    2. If necessary, process and edit the data.
    3. Retarget it to your character rig. If the character has different body proportions, you may need to edit the motion again.
    4. Import the character (or just the motion data) into your scene, and edit the motion for the scene if necessary.
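
    The four steps above can be sketched in code. This is a purely illustrative toy in Python; the function names are hypothetical stand-ins for the stages, not any real API from the tools discussed:

```python
# Toy sketch of the four-step pipeline above. The stage functions are
# hypothetical stand-ins for what the real tools (iPi Mocap Studio,
# Motion Builder, LightWave) do -- this is not any actual API.

def capture_performance(take_name, frames):
    # 1. Capture (e.g. record with iPi Recorder, track in iPi Mocap Studio).
    return {"take": take_name, "frames": frames}

def clean_up(motion):
    # 2. Process and edit the raw data (fix tracking errors, smooth, etc.).
    motion["cleaned"] = True
    return motion

def retarget(motion, rig):
    # 3. Map the motion onto the character rig; if the rig's proportions
    #    differ from the performer's, the motion may need further edits.
    motion["rig"] = rig
    return motion

def import_into_scene(motion):
    # 4. Bring the character (or just the motion data) into the scene.
    return {"items": [motion]}

scene = import_into_scene(
    retarget(clean_up(capture_performance("dance_take_01", 300)), "kid_pilot_rig"))
print(scene["items"][0]["rig"])  # kid_pilot_rig
```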

    Here's my mocap workflow in more detail. I use iPi Mocap Studio, Motion Builder and LightWave. I also use Maya for a small part of the process but it's not absolutely necessary:

    1. Capture the performance using iPi Mocap Studio. In my setup, I use two or three Microsoft Kinect for Windows devices and three PS Move controllers. Two of the PS Move devices capture the hand rotations and the third is attached to the head to capture head rotation. You can see my head rig here: Mocap Helmet Update.

    2. Track the motion and make corrections. iPi Mocap Studio is not a realtime process. Realtime capture may seem ideal but, IMO, the results are not very smooth. This is subjective of course, and it also depends on the motions you're trying to capture. The advantage of a 'post process' system like iPi Mocap Studio is that your computer and GPU can dedicate their power to accuracy rather than speed. I've heard some people complain that it's too 'slow' for them, but my computer, which is fairly old and has a modest GTX 460 card, can process the mocap data at 0.6 seconds per frame, and that's been fast enough for our productions. Naturally, newer computers with more recent graphics cards can do this much faster.

    After tracking the motion to the body, you run a final pass where you apply the wrist and head motions from the PS Move controllers. Finally, if you wish, iPi Mocap Studio lets you apply finger animation using the new hand animation tools.
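
    To put that 0.6 seconds per frame in perspective, here's a quick back-of-the-envelope calculation (the 30 fps frame rate is my assumption; substitute your own):

```python
def tracking_time(clip_seconds, fps=30, sec_per_frame=0.6):
    """Rough wall-clock time (in seconds) to track a clip in a
    post-process system, given compute time per frame."""
    frames = clip_seconds * fps
    return frames * sec_per_frame

# A 10-second take at 30 fps is 300 frames:
print(tracking_time(10))        # 180.0 -> about 3 minutes of tracking
# A full 3-minute song:
print(tracking_time(180) / 60)  # 54.0 -> about 54 minutes
```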

    When you're done, you have the option to import a rigged character and have iPi Mocap Studio retarget the motion to the character, or you can export the motion data as a BVH or FBX file. I prefer to export the motion as a BVH for Motion Builder.

    3. Prior to capture, I prepare my character in LightWave. The rig I use is a standard joints based mocap rig that's compatible with Motion Builder. One problem with LightWave's joints system is that the weight mapping is not compatible with third party programs because the weight maps have to be offset to a child joint. For example, in LightWave the arm weight map needs to be applied to the forearm joint, the forearm weight map needs to be applied to the hand joint, and so on. No, it doesn't make any sense but that's the way it's been since LightWave 9.6.1. In other programs, like Motion Builder and Maya, the arm weight map 'shockingly' goes on the arm joint, and the forearm weight map goes on the forearm joint, and so on. Imagine that!

    Anyway, this means you need to prepare a separate LightWave rig for third party programs to read properly, with all the weight maps applied to the correct joints. My 'short cut' is to simply export the rig via FBX to Maya, auto-weight it, and make any necessary corrections using Maya's weight-painting brushes. Once you get the hang of it, you can do this in just a few minutes. It doesn't need to be perfect, just good enough to see correct deformations in Motion Builder. When I'm done, I export the rig from Maya as an FBX, which I can now use in Motion Builder for retargeting and editing.
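
    The weight-map offset described above boils down to a simple parent-to-child remapping. Here's a sketch in Python; the joint names are illustrative only, not LightWave's actual internal names:

```python
# LightWave's joints system expects each weight map on the CHILD of the
# joint it logically belongs to; most other packages (Maya, Motion
# Builder) expect it on the joint itself. Illustrative joint names only.

PARENT_TO_CHILD = {
    "arm": "forearm",
    "forearm": "hand",
    "thigh": "shin",
    "shin": "foot",
}

def offset_weights_for_lightwave(weights):
    """Move each weight map from its 'logical' joint down to the child
    joint, as LightWave's joint system requires."""
    return {PARENT_TO_CHILD.get(joint, joint): wmap
            for joint, wmap in weights.items()}

standard = {"arm": "arm_weights", "forearm": "forearm_weights"}
print(offset_weights_for_lightwave(standard))
# {'forearm': 'arm_weights', 'hand': 'forearm_weights'}
```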

    Naturally, I only need to run through step 3 once for each character. (In the case of the Brudders shorts projects, the last time I had to do this was for 'Happy Box' a few years ago. We're still using these rigs in our current Brudders production, we just added FiberFX fur and hair to them.)

    4. In Motion Builder, I import the BVH and the rigged character, retarget the motion to the character and make corrections. I might import the audio track and the camera and any essential props from my LightWave scene, to make sure the motion is exactly what I need for LightWave. When I'm done, I export an FBX of only the character rig with the animation.

    5. Finally, in LightWave, I open my original LightWave rigged character and then open the FBX with the motion data using Load Items from Scene, enable Merge Only Motion Envelopes, and click okay. If everything has been done correctly, the motion is transferred to the LightWave rigged character and it's ready for rendering. If you build your rig in LightWave with layered controls, you can edit the motion again if you need to. I typically just go back to Motion Builder for changes and tweaks.

    There are many variations of the above steps, but that's what works for me. Other users may use Rebel Hill's rigging system (which has a mocap rig you can edit in LightWave), Nevron Motion and Genoma, IK Booster, or any combination of these tools. Some alternatives to Motion Builder are the online service Ikinema Webanimate, Mixamo, or possibly iClone Pipeline with 3D Exchange.

    For our productions, the iPi-MB-LW workflow works great. Here's a peek at the process: Sister Mocap Test. Right now, we're using it for a music video production and you can see a small piece of it here: 'B2' Excerpt

    I hope this is helpful. There's more info with regards to our Little Green Dog productions in the 'B2' thread. There are also other mocap threads in this forum with lots of good info--here are a few search terms to try: motion capture, mocap, Nevron Motion, iPi Mocap Studio or Desktop Motion Capture, Motion Builder, Kinect, PS3 Eye.

    G.

  3. #3 · Registered User · Joined Feb 2014 · United States · Posts: 31

    Dude! You have NO idea how much I appreciate you taking the time to type all of that information. I don't necessarily have a deadline for my project, as I'm probably going to end up paying a production company to help me produce the pilot I'm pitching; I just like to know what's going on from a technical standpoint, so it's worth it for me to learn how to do it myself for future reference. Plus, like you said, it looks like a lot of fun once you have it down. If you don't mind my asking, what did your setup (iPi with all of the cameras) cost? How big of a space do you use to produce your work? It's awesome that you replied; I actually watched your video earlier. You're the reason I was researching iPi! Lol!

  4. #4 · RebelHill (Goes bump in the night) · Joined Nov 2003 · jersey · Posts: 5,766

    Depending on your budget... iPi if your budget is tighter... go for an OptiTrack or similar if you can afford it. For any serious amount of work you WILL want MotionBuilder (or Maya) for retargeting and cleanup.

    When it comes to facial capture... you're gonna have to make a compromise somewhere. The Kinect face stuff (like in Nevron) is VERY basic, so don't expect much detail or accuracy, and for any of the more accurate systems used for facial tracking, you're gonna have to develop your own face rig and retarget system for actually transferring the motion over to your characters... there really isn't much in the way of an off-the-shelf tool to do this... SI's Face Robot and the boney face toolset for Max are about all there is.
    LSR Surface and Rendering Tuts.
    RHiggit Rigging and Animation Tools
    RHA Animation Tutorials
    RHR Rigging Tutorials
    RHN Nodal Tutorials
    YT Vids Tuts for all

  5. #5 · Greenlaw (Eat your peas.) · Joined Jul 2003 · Los Angeles · Posts: 7,155

    I was going to write a little bit about face capture but RH beat me to it. What he wrote is more complete than what I would have written anyway.

    You should also consider the design and complexity of your project and characters, and the level of quality you're expecting to get from the mocap system. I've seen some beautiful work done with iPi Mocap Studio but, it's true, the system doesn't compare to a higher end system for accuracy of the data. But if you design your characters and tailor your performances to what this system is capable of, it should be fine. iPi Mocap Studio is a comparatively inexpensive system but you need to have realistic expectations for it.

    FWIW, I can say that I'm mostly satisfied with what I've been able to get from it, but I won't say it was easy to do. It gets easier the more you use it and learn from your mistakes, though.

    The cost for the iPi software is on their website, and I think they have links for all the gear and accessories you will need too. Off the top of my head, I'm not sure how much I have invested in 'home brew' mocap, since I've been a user since iPi Desktop Motion Capture pre-1.0 beta, which was way, way back before PS3 Eye cameras and the Kinect were even available.

    What you want in addition to the software and capture devices is a reasonably fast computer, a decent graphics card (a recent GTX model is highly recommended), and a fast drive (I capture to an SSD). You also need to make sure you have enough USB controllers (not to be confused with USB ports) for all these devices: for PS3 Eye, it's generally two per controller; for Kinect, it's one per controller. You also need to add a Bluetooth receiver for the PS Move controllers. Chances are, you will need to add multiple controller cards to your computer to accommodate all these devices. Finally, you'll need USB repeater cables (also called USB active extension cables) to wire it all together; these are special cables with a box on the end that boosts the signal to reach longer distances.
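
    The USB controller budgeting above can be worked out with a quick calculation, using the per-controller limits stated (about two PS3 Eyes per controller, one Kinect per controller; PS Move connects over Bluetooth, so it doesn't count here):

```python
import math

def usb_controllers_needed(ps3_eyes=0, kinects=0):
    """Rough count of USB host controllers needed, per the rule of
    thumb above: ~2 PS3 Eye cameras per controller, 1 Kinect per
    controller. PS Move devices use Bluetooth and are excluded."""
    return math.ceil(ps3_eyes / 2) + kinects

# A six-camera PS3 Eye setup:
print(usb_controllers_needed(ps3_eyes=6))  # 3
# A dual-Kinect setup:
print(usb_controllers_needed(kinects=2))   # 2
```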

    If you go with PS3 Eye, you will need to consider the lighting situation and may need to add a light kit. You'll also need appropriate clothing and a fair amount of space. If you go with multiple Kinects, lighting is not so important, but you'll want to shoot in a room with very little to no outdoor lighting (natural sunlight can add IR pollution to your capture data). You can shoot in a smaller room with the Kinect setup, but the capture volume is smaller too.

    Some users use a laptop for capture and tracking. I recommend using a good desktop computer for tracking because tracking speed is highly dependent on the GPU speed, and graphics cards are simply much more powerful on desktops than on laptops. If you need to capture at remote locations, you might want to use the laptop for capture and a desktop for tracking. (I use only a desktop for capture and tracking, and I roll out several long repeater cables from our computer studio to the living room. Then I use a laptop with Team Viewer to wirelessly control the desktop in the other room. You can also remote control iPi Recorder using the buttons on the PS Move device.)

    I used to shoot in a two-car garage using multiple PS3 Eye cameras. These days I shoot with two Kinects in our living room. We've been experimenting with a third Kinect but have yet to use triple-Kinect data in a production. Adding the third Kinect does require more space, which may limit how we can use this setup.

    Hope this helps.

    G.

  6. #6 · Registered User · Joined Feb 2014 · United States · Posts: 31

    I thank both of you. I have time to experiment. Like I said, I'll probably pay someone to actually do the animation for the pilot; I just want to be familiar with the process. I'll be running all of this on a Core i5 with a Radeon HD 6470M, so I definitely have some limitations. I think I CAN upgrade the graphics card and the hard drive, though.
