
View Full Version : occlusion with single kinect



geo_n
08-09-2013, 12:11 AM
What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even with a person turned sideways, the occluded arm not seen by the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with a single Kinect, btw. Will dual Kinect support for Nevron be in v1?

tcoursey
08-09-2013, 11:54 AM
What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even with a person turned sideways, the occluded arm not seen by the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with a single Kinect, btw. Will dual Kinect support for Nevron be in v1?

Yes, I'm hoping this is much improved as well. Many other solutions handle occlusion much better. There is a DEPTH component of the sensor data; surely that portion of the data is being used by others... maybe it can come into the LW version as well.

lino.grandi
08-22-2013, 07:17 AM
What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even with a person turned sideways, the occluded arm not seen by the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with a single Kinect, btw. Will dual Kinect support for Nevron be in v1?

I can hardly see support for dual Kinect motion capture in NevronMotion 1.0.

That really depends on Microsoft.

iPiSoft is an offline solution. NevronMotion works in real time. ;)

tcoursey
08-22-2013, 07:34 AM
Good point, Lino. Hadn't thought of that... doh! But we can still hope and dream... :)

PabloMack
08-24-2013, 11:19 AM
iPiSoft is an offline solution. NevronMotion works in real time. ;)

I can see giving actors real-time feedback by providing them with a live composite from something like TriCaster, coordinating the mocap with CG that is played back and overlaid with a virtual set and CG actors. I think that seems possible with two systems now. Does the realtime mocap output video provide an appropriate background so that it can be chroma-keyed into a TriCaster-style composite?

RebelHill
08-27-2013, 11:54 AM
Nah... if iPi "loses" a limb it goes nutso too.

However, iPi uses the Kinect data as a kind of "cage" and rattles the skeleton around inside it; that's how it tracks. I presume Nevron is just taking joints and their rotations as spat out by the MS SDK... in that case it should be possible for it to "know" when a limb has gone out of view, and it could just be told to hold the limb at the last known good position until it comes back into view... perhaps.
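
The "hold at last known good position" idea above can be sketched as follows. This is a hypothetical illustration, not Nevron's actual code: it assumes the capture layer reports a per-joint tracking state similar to the Kinect SDK's tracked/inferred/not-tracked flags, and the frame format here is made up for the example.

```python
# Sketch of the "freeze on occlusion" idea: hold each joint at its last
# confidently tracked position until the sensor sees it again.
# The frame format is invented for illustration, loosely modeled on the
# per-joint tracking states the Microsoft Kinect SDK reports.

def freeze_occluded(frames):
    """frames: list of {joint_name: (position, state)} dicts, where state
    is "tracked" when the sensor has a confident fix on the joint.
    Returns frames with untracked joints held at the last good position."""
    last_good = {}
    out = []
    for frame in frames:
        fixed = {}
        for joint, (pos, state) in frame.items():
            if state == "tracked":
                last_good[joint] = pos
                fixed[joint] = pos
            else:
                # Occluded: reuse the last known good position if any.
                fixed[joint] = last_good.get(joint, pos)
        out.append(fixed)
    return out
```

This trades a momentarily stiff limb for the "berserk" flailing described earlier, which is usually the less distracting failure mode.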

geo_n
09-01-2013, 01:33 AM
Nah... if iPi "loses" a limb it goes nutso too.

However, iPi uses the Kinect data as a kind of "cage" and rattles the skeleton around inside it; that's how it tracks. I presume Nevron is just taking joints and their rotations as spat out by the MS SDK... in that case it should be possible for it to "know" when a limb has gone out of view, and it could just be told to hold the limb at the last known good position until it comes back into view... perhaps.

Hence the request to have dual Kinect support like iPi.
But even with a single Kinect, iPi's mocap quality is superior. For actual single-Kinect iPi output, Truebones has some really good YouTube videos; some medium-complex motions where the arms got occluded still worked.




I can hardly see support for dual Kinect motion capture in NevronMotion 1.0.

That really depends on Microsoft.

iPiSoft is an offline solution. NevronMotion works in real time. ;)

I hoped it didn't really depend on anything NewTek has no control over. Given the choice, I'd rather have higher-quality, complex motions than realtime but simplistic ones.

jwiede
09-21-2013, 01:16 PM
Given the choice, I'd rather have higher-quality, complex motions than realtime but simplistic ones.

Agreed. I understand that the realtime capture allows for usage scenarios that iPi simply cannot, but I suspect the majority of Nevron customers would be willing to endure a bit of post-processing if it allowed for higher quality and subtlety in the captured performance. I'm not suggesting getting rid of realtime capture support, but instead providing an option to post-process the captured data that would do things like stabilize joints while occluded, perhaps smooth high-frequency noise, and so forth. Going forward past Nevron 1.0, that post-process stage could also open the door to merging multiple Kinects' captures into a single stream, and (ideally) even offering third-party devs a way to plug in their own post-processing code.
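
The noise-smoothing pass suggested above could be as simple as an exponential moving average applied per animation channel. A minimal sketch; the `smooth_channel` helper and its `alpha` tuning parameter are made up for the example, and a real cleanup pass would more likely use a proper low-pass or Kalman filter:

```python
# Exponential moving average over one animation channel (e.g. a joint's
# X position across frames). Lower alpha = heavier smoothing, at the
# cost of a little lag behind the raw performance.

def smooth_channel(samples, alpha=0.3):
    """samples: list of floats for one channel. Returns the same number
    of samples with high-frequency jitter damped."""
    if not samples:
        return []
    smoothed = [samples[0]]
    for value in samples[1:]:
        smoothed.append(alpha * value + (1.0 - alpha) * smoothed[-1])
    return smoothed
```

Because this is a post-process, the lag it introduces doesn't matter the way it would in a live preview, which is exactly the trade-off being argued for here.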

Greenlaw
09-23-2013, 01:42 AM
Part of that is because iPi Mocap Studio's tracking system, being a 'post process', can track backwards as well as forwards, making it easy to interpolate through a tracking error with just a little tweaking. A few shots in Happy Box were shot using the single-Kinect system before dual-Kinect support became available, and I was able to get a decent version of Sister's 'chainsaw dance' from the single-Kinect data. That said, the dual-Kinect version of the 'chainsaw dance'--which we re-shot and tracked a week or so later when we got the beta--was much better, so we went with it. But we still kept a couple of the single-Kinect shots in the short--they were good enough at the time (this was Aug 2011) and we felt there was no need to re-do them. (Okay, to be honest, we just ran out of time.) :)

G.
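
The forward-and-backward tracking described here is what lets an offline tool bridge a dropout: it knows the good pose on both sides of the gap. A minimal single-channel sketch of that idea, using plain linear interpolation; the `fill_gaps` helper and the convention of `None` for a lost frame are invented for illustration:

```python
# Offline gap-filling for one animation channel: a run of bad frames
# (None) is replaced by blending between the last good value before the
# gap and the first good value after it. Gaps touching either end of the
# clip are held at the nearest good value instead.

def fill_gaps(channel):
    """channel: list of floats or None (tracking lost that frame)."""
    filled = list(channel)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            start = i - 1                      # last good index before gap
            j = i
            while j < n and filled[j] is None:
                j += 1                         # first good index after gap
            if start >= 0 and j < n:
                a, b = filled[start], filled[j]
                span = j - start
                for k in range(i, j):
                    t = (k - start) / span
                    filled[k] = a + (b - a) * t
            elif start >= 0:                   # gap runs to the end
                for k in range(i, j):
                    filled[k] = filled[start]
            elif j < n:                        # gap at the start
                for k in range(i, j):
                    filled[k] = filled[j]
            i = j
        else:
            i += 1
    return filled
```

A realtime system can never do this for the frame it is currently showing, which is why the offline approach interpolates through errors so gracefully.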

erikals
09-23-2013, 09:18 AM
Part of that is because iPi Mocap Studio's tracking system, being a 'post process', can track backwards as well as forwards, making it easy to interpolate through the tracking error with just a little tweaking.

By delaying the realtime preview by about 4 frames, I guess it should be possible to track this in realtime as well.
Maybe not as good, but certainly better. The freeze option Rebel mentions is an alternative too.
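
The delayed-preview idea can be sketched as a small lookahead buffer: hold frames back a few ticks so that a short dropout can be interpolated toward the next good value before it is shown. Everything here (the `delayed_preview` generator, the single-channel-of-floats format, `None` marking a lost joint) is invented for illustration; only the roughly 4-frame delay comes from the post:

```python
from collections import deque

DELAY = 4  # lookahead in frames; the ~4-frame figure comes from the post

def delayed_preview(stream):
    """Yield one channel's values DELAY frames late. A dropout (None)
    shorter than the lookahead window is bridged by stepping linearly
    toward the next good value; longer gaps hold the last good value."""
    window = deque()
    last_good = None

    def emit():
        nonlocal last_good
        out = window.popleft()
        if out is None:
            nxt = next((v for v in window if v is not None), None)
            if last_good is None:
                out = nxt                  # dropout at the very start
            elif nxt is None:
                out = last_good            # gap longer than the lookahead
            else:
                # Count the dropout frames still buffered ahead of us,
                # then take one linear step toward the next good value.
                remaining = 0
                for v in window:
                    if v is not None:
                        break
                    remaining += 1
                out = last_good + (nxt - last_good) / (remaining + 2)
        last_good = out
        return out

    for value in stream:
        window.append(value)
        if len(window) > DELAY:
            yield emit()
    while window:                          # flush the delayed tail
        yield emit()
```

The cost is a fixed 4 frames of latency on the preview; the benefit is that any occlusion shorter than the window comes out as a smooth blend instead of a frozen or flailing limb.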