
Rokoko announces prosumer mocap suit.



Ernest
07-06-2017, 11:25 PM
https://techcrunch.com/2017/07/05/make-hollywood-quality-animations-at-low-budget-prices-with-this-motion-capture-suit/

It looks like Perception Neuron has a competitor. Rokoko just doesn't seem to do fingers, though.

Surrealist.
07-08-2017, 12:51 AM
Major issues with the feet. Just like Perception, unfortunately. Sliding, popping, and general instability when doing even simple walks. Too bad.

erikals
07-08-2017, 09:18 AM
Rokoko = $2500
Perception Neuron = $1000 (cheapest version)


Major issues with the feet. Just like Perception, unfortunately. Sliding, popping, and general instability when doing even simple walks. Too bad.
As far as I know, most mocap systems have those issues; cleanup is always a must.

At the moment there is no "eureka" solution for low-cost motion capture.

Surrealist.
07-08-2017, 08:03 PM
I have worked with this stuff. These problems do not come under the heading of cleanup. You can't clean it up. I have worked with mocap data that comes from the proper suit tech. Night and day. The foot movement data that comes from these low-cost systems is useless. You can only use it for in-place upper body motion.

Look very closely at the demo vids. Take out the actor and it is obvious. You can't use the ambulant data.

Compare to data you get from high end suits. Night and day.

From what I have seen, low-cost camera setups are more stable for feet, with other issues of course.

You are right though. No panacea for low cost.

It is funny though how they choreographed moves that cover up this limitation. The hopping thing really gets me. In the shot before, he is doing all this fantastic movement, and as soon as he starts to walk the data falls apart and they cut. Then the foot slides as he approaches the steps. Then the hopping bit, which covers up the foot jerk/slide movement.

Brilliant scam. Sleight of hand.

Ryan Roye
07-09-2017, 06:09 AM
I have worked with this stuff. These problems do not come under the heading of cleanup. You can't clean it up.

Technically, it isn't the foot movement, it's the positioning data. The rotation data is highly accurate, but these systems don't actually know where you are in 3D space the way a set of 5+ cameras does, so the software has to make some high-probability guesses. That brings up the need to calculate planted foot movement: Perception Neuron's mocap data isn't actually planting the foot; that's done *after* looking at the raw data, and inverse kinematics are calculated based on what it finds. This is why you don't see a whole lot of clips of people flipping or doing complex movements where the feet leave the ground much. (Let's be honest, the kind of people buying these suits aren't exactly physically fit masterpieces.)
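Roughly, the foot-planting idea looks something like the sketch below. This is not Perception Neuron's actual solver, just a minimal illustration; the array layout, thresholds, and helper names are made up for the example.

```python
import numpy as np

def find_plants(foot_pos, fps=60.0, speed_thresh=0.15, height_thresh=0.05):
    """Guess which frames have a planted foot from raw positions.

    foot_pos: (N, 3) array of foot positions in meters (hypothetical data).
    A foot is treated as planted when it is slow and close to the floor.
    """
    vel = np.gradient(foot_pos, 1.0 / fps, axis=0)   # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    height = foot_pos[:, 1]                          # assume Y is up
    return (speed < speed_thresh) & (height < height_thresh)

def pin_planted_feet(foot_pos, root_pos, planted):
    """For each planted span, lock the foot to its first position and push the
    correction into the root, so the foot stops sliding."""
    foot_out, root_out = foot_pos.copy(), root_pos.copy()
    i, n = 0, len(planted)
    while i < n:
        if not planted[i]:
            i += 1
            continue
        j = i
        while j < n and planted[j]:
            j += 1
        anchor = foot_out[i].copy()
        for k in range(i, j):
            drift = foot_out[k] - anchor
            foot_out[k] -= drift     # foot stays put
            root_out[k] -= drift     # root absorbs the correction
        i = j
    return foot_out, root_out
```

A real solver would then re-run IK on the legs so knees and hips stay consistent with the corrected foot and root, which is where the "high-probability guesses" come in.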

That said, I completely disagree. You can clean it up... just not with Lightwave's native tools. The data isn't that bad either, and it's certainly cost-effective versus the alternatives.

As for my thoughts, I think Perception Neuron will still take the cake here compared to Rokoko's offering, given the price difference and the fact that Rokoko doesn't offer anything that's noticeably better.

Danner
07-09-2017, 10:16 AM
What has me excited is this https://www.youtube.com/watch?v=Khoer5DpQkE combined with this https://www.youtube.com/watch?v=ZT-zz8zDZws, as I already own a Vive.

Ernest
07-09-2017, 06:46 PM
What has me excited is this https://www.youtube.com/watch?v=Khoer5DpQkE combined with this https://www.youtube.com/watch?v=ZT-zz8zDZws, as I already own a Vive.

OK, that is impressive! The girl's video, where she kicks back with her left leg and her right foot wobbles as she tries to keep her balance, is what did it for me, since it shows the system tracks 6DOF on the ankle.

Of course, doing your performance with such a heavy thing strapped to your head might not be ideal, but actors are used to having a cam rig strapped on to capture their faces anyway.

Seems to be rental-only, though.

Surrealist.
07-10-2017, 10:18 PM
Yeah, to reiterate: you can't clean up the mess the Perception or Rokoko suits give you on ambulant data. I know because, as I said, I have a Perception suit, and it does this. Can you work with the data and "clean it up"? Sure, you can waste your time, especially in MotionBuilder, which has great tools for this, and you can have success to a degree. However, some of it is just unusable.

The main problem is that you were counting on the performance in key areas and it is literally destroyed. And everything overall is such a mess.

There is cleanup, and then there is repair, which means animating by hand, which is where you wind up.

Animating by hand is not a viable solution in a mocap-based production. And this is not cleanup; cleanup is another matter.

In mocap production you allot a certain amount of time for cleanup. And having the best tool for this, MotionBuilder, saves a lot of time.

But there is a point where this time far exceeds what is normally budgeted.

At that point it ceases to be cleanup.

It is far better to invest in a system that records properly.

They are available. All the other suit setups record the data properly. The Perception Neuron and Rokoko do not.

tyrot
10-11-2017, 05:42 PM
As for my thoughts, I think Perception Neuron will still take the cake here compared to Rokoko's offering, given the price difference and the fact that Rokoko doesn't offer anything that's noticeably better.


I must buy a cheap mocap suit very soon for a live show. I've watched some horror stories about the durability of the Neuron, so I am leaning towards Rokoko. Do you guys have any other info about that?

https://www.motionshadow.com/videos

http://www.nansense.com

I have found these two companies. Motionshadows looks very solid, and I think their UE4 and Unity plugins rock. What do you think?

Greenlaw
10-11-2017, 06:06 PM
This may be of interest: the guys at iPi mentioned they were looking into combining their system with inertial suits. I believe the thinking is that since their system tracks volume in actual 3D space, it might give the inertial system the positional data it's missing. That was a while ago though, and I don't know what progress they've made since.
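My guess at what that hybrid would look like in its simplest form (pure speculation on my part, not iPi Soft's actual approach; the function below is made up for illustration): keep the suit's joint rotations, but swap its drifting root translation for the world-space position coming from the optical tracker.

```python
import numpy as np

def fuse_root_position(inertial_t, inertial_root, optical_t, optical_root):
    """Replace the inertial root translation with optically tracked positions.

    inertial_t: (N,) timestamps, inertial_root: (N, 3) drifting root positions.
    optical_t:  (M,) timestamps, optical_root:  (M, 3) world-space positions.
    Timestamps are assumed sorted; returns (N, 3) positions on the suit's timeline.
    """
    fused = np.empty_like(inertial_root)
    for axis in range(3):
        fused[:, axis] = np.interp(inertial_t, optical_t, optical_root[:, axis])
    return fused
```

Joint rotations from the suit would be kept as-is; only the root translation gets replaced, so foot contacts land where the volume tracker actually saw the body.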

I'm still using their system with dual Kinect One (v2). I tried getting four Kinect One sensors working together a while back but I just don't have the horsepower for it. They released an update a few weeks ago that's supposed to be faster and more efficient, so I want to try at least three sensors again soon.

iPi Soft recently added support for the Logitech C922 camera, which can record 60 fps and has double the resolution of the PS3 Eye, theoretically increasing the capture space. I haven't used these cameras yet but the test videos look very promising:


https://www.youtube.com/watch?v=Vhhybvt0q5U

I'm not planning to switch to the Logitech camera any time soon--it's been hard to find time to work with the system I have now. Maybe next year. Sigh! :o

Greenlaw
10-11-2017, 06:21 PM
Here's another one with the C922:


https://www.youtube.com/watch?v=0sZQ5rCTwlg

Before anybody asks, no, iPi Soft is still not real-time. That's always been the tradeoff for getting better quality data. Considering it only takes a few minutes to track the data, I've never felt this was a big deal.

That said, they hinted recently that they are working on a real-time solution.

Greenlaw
10-11-2017, 06:51 PM
Oh, I'd never even seen this one before. This demo is using the same dual Kinect One setup I have now:


https://www.youtube.com/watch?v=BMuZ3P1FCwk

I should edit and post some of the tests I did earlier this year. Just too much stuff always going on at home. (This weekend I'm helping my daughter with her Halloween costume. No rest for dad.)

tyrot
10-12-2017, 03:23 AM
Greenlaw, these are great but I need live streaming to Unity :(

Greenlaw
10-12-2017, 11:46 AM
Oh, yeah, sorry. At this time, iPi Mocap Studio is intended for creating pre-recorded motion data for animation, games, and VFX, not live performance.

They said they're looking into real-time capture, and it's my guess that this is related to the hybrid system they hinted at, but that's just speculation on my part. Even if that's accurate, I have no idea when it might be ready.

Oldcode
10-12-2017, 11:53 AM
GreenLaw,

Can you get away with just 2 cameras or do you need 6 like in the videos above?

Greenlaw
10-12-2017, 01:18 PM
I don't have C922 cameras but I'm going to guess you need at least four, just like with the PS3 Eye cameras. The C922 is an RGB-only camera so you need enough coverage to prevent self-occlusion problems. Adding more RGB cameras means greater accuracy because the tracker doesn't have to interpolate as much. As a matter of fact, if you have eight RGB cameras, you can record two performers at once with reasonable accuracy.

It's different with Kinect (v1 or v2) because you're recording 3D data, not RGB video, so it's possible to get full coverage with only two devices (that's what they're doing in the Storm Trooper video.) I imagine 3 or 4 might be better but I haven't been able to pull that off yet because my home computers are just too old or underpowered. (Curiously, my windows tablet supports Kinect One much better than my workstation.)

The downside with Kinect is smaller capture space and it's not as good as the PS3 or C922 for fast motion because of the lower frame rate. (Kinect is 30 fps; PS3 and C922 can record 60 fps. For motion capture, frame rate is much more important than resolution.) I don't think you can track multiple actors with Kinect data but I've never tried it either.
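The frame-rate point is easy to see with a bit of arithmetic (the speed below is a made-up example): what matters for fast motion is how far a limb travels between consecutive samples, and that gap halves going from 30 to 60 fps, regardless of image resolution.

```python
def travel_per_frame(speed_m_per_s, fps):
    """Distance a limb moves between consecutive frames."""
    return speed_m_per_s / fps

for fps in (30, 60):   # Kinect vs PS3 Eye / C922
    gap_cm = travel_per_frame(5.0, fps) * 100   # hand swinging at ~5 m/s
    print(f"{fps} fps: ~{gap_cm:.1f} cm between frames")
# 30 fps: ~16.7 cm, 60 fps: ~8.3 cm -- half the gap the tracker has to bridge.
```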

For me, the dual Kinect One configuration is the better option because I can set it up in five minutes in my small-ish living room and start recording right away. No special lighting or clothing required.

Based on my observations, the quality with dual Kinect One is less noisy than with Kinect for XBox 360, and the overall quality is comparable to using four PS3 Eye cameras. I don't know if using three or four Kinect One devices is actually better. iPi Soft says maybe only slightly. TBD.

The RGB camera setup requires a larger space, even room lighting (minimal and non-directional shadows) and special clothing considerations. It's a little more trouble to set up this configuration but if you can meet the requirements, you gain a larger space to perform in and greater accuracy for fast motion.

With either setup, you want to avoid a room with glossy surfaces--reflections might confuse the tracker, especially in the calibration step.

BTW, the Kinect can record RGB data alongside the 3D depth data but it isn't used for tracking. To improve bandwidth efficiency, you can disable it. (I used to record the RGB data but I don't bother anymore.)

A few computer considerations:

You can connect multiple Kinect XBox 360 (v1) sensors to a single computer; however, only one Kinect One (v2) can be connected to a single computer. To record with multiple Kinect One sensors, you use distributed recording with multiple computers. The data is automatically merged on a 'master' computer as soon as you stop recording, or you can save the data and merge it later. iPi Soft says the one-computer-per-Kinect-One-sensor requirement is a Microsoft-imposed limit and they can't do anything about it. It's probably just as well, though, because the Kinect One requires more bandwidth than the original sensors.

With RGB cameras, I think the limit is four PS3 Eye cameras per USB controller. It's probably fewer for the C922 because of the higher video resolution. However, if your computer allows it, you can add more USB controllers for additional cameras (up to 16, I think). Otherwise, you can use distributed recording for RGB cameras too. (The most I've ever used is 6 PS3 Eye cameras, and that was a long time ago.)
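The "probably fewer C922s per controller" guess is just pixel-count arithmetic, assuming the C922's 720p mode for 60 fps and the same pixel format as the PS3 Eye (both assumptions, not measured figures):

```python
ps3_eye_pixels = 640 * 480    # PS3 Eye at VGA
c922_pixels = 1280 * 720      # C922 at 720p, its 60 fps mode
print(c922_pixels / ps3_eye_pixels)   # -> 3.0x the data per frame
# So whatever the real per-controller budget is, roughly a third as many
# C922s should fit on it compared to PS3 Eyes, all else being equal.
```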

Sorry, that's probably a ton more info than you expected. But you know me, once I get started... :p

Spinland
10-12-2017, 01:55 PM
Please, sir, ramble on! Some of us are very interested in these developments as they relate to gigs we're landing (or trying to land). I've personally not had a lot of problems with the Neuron set but my use cases for it probably don't push far into where it starts to break down. The more versatile my toolbox, the better I'm positioned. :jam:

Mark

Greenlaw
10-12-2017, 02:10 PM
Oh, I just remembered something important about Kinect RGB:

If you use three or more Kinect sensors, the RGB video data then becomes necessary because iPi Mocap Studio needs it to visually track a point for scene calibration. With only two sensors you don't need the RGB data because you can calibrate using a sheet of foam core and have the software track a flat plane using only the depth data. And if you have only a single Kinect, calibration is not necessary. (Obviously, a single Kinect is not nearly as good for complex motions as dual Kinect though.)
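For anyone curious how a plain sheet of foam core can calibrate depth sensors, the usual idea (the generic textbook approach, not necessarily iPi Mocap Studio's implementation) is to fit a plane to the board's depth points in each sensor and then line the fitted planes up:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to (N, 3) depth points believed to lie on the board.

    Returns (centroid, unit_normal); the smallest singular vector of the
    centered points is the plane normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```

Each Kinect sees the same physical board, so aligning the fitted planes constrains part of the relative transform between the sensors; in practice the board is captured in more than one pose (or moved around) to pin down the rest.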

Oldcode
10-12-2017, 05:24 PM
Thanks GreenLaw,

Actually, you anticipated and answered just about every question I had. Based on what you've told me, I think motion capture is beyond me right now. Maybe later. I mostly wanted to use it to do routine motions that are actually quite boring, but can be difficult to animate with keyframes.

I'll keep my eyes open. Maybe someday the software will get better and the hardware cheaper.