
Thread: occlusion with single kinect

  1. #1
    Super Member geo_n's Avatar
    Join Date
    Aug 2007
    Location
    jpn
    Posts
    4,677

    occlusion with single kinect

    What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even when a person turns side-on, the arm occluded from the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with only a single Kinect, by the way. Will dual-Kinect support for Nevron be in v1?

  2. #2
    Registered User
    Join Date
    Jan 2004
    Location
    Broken Arrow, Oklahoma
    Posts
    936
    Quote Originally Posted by geo_n View Post
    What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even when a person turns side-on, the arm occluded from the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with only a single Kinect, by the way. Will dual-Kinect support for Nevron be in v1?
    Yes, I'm hoping this is much improved as well. Many other solutions handle occlusion much better. There is a DEPTH component of the sensor data; surely that portion of the data is being used by others... maybe it can come into the LW version as well.

  3. #3
    TD/Animator lino.grandi's Avatar
    Join Date
    Jun 2003
    Location
    Rome
    Posts
    1,701
    Quote Originally Posted by geo_n View Post
    What is the next step for Nevron to improve the occlusion problem with a single Kinect? Right now, even when a person turns side-on, the arm occluded from the Kinect sensor goes berserk. iPi seems better at dealing with occlusion even with only a single Kinect, by the way. Will dual-Kinect support for Nevron be in v1?
    I can hardly see support for dual kinect motion capture for Nevronmotion 1.0.

    That really depends on Microsoft.

    iPiSoft is an offline solution. NevronMotion works in real time.
    Lino Grandi
    3D Development, LightWave 3D Group/NewTek, Inc.

    https://www.lightwave3d.com/

    LightWave 3D Group YouTube channel:

    http://www.youtube.com/user/Official...?feature=watch

    My YouTube Channel:

    http://www.youtube.com/user/linograndi?feature=mhee

  4. #4
    Registered User
    Join Date
    Jan 2004
    Location
    Broken Arrow, Oklahoma
    Posts
    936
    Good point, Lino. Hadn't thought of that... doh! But we can still hope and dream...

  5. #5
    SciEngArtist PabloMack's Avatar
    Join Date
    Dec 2007
    Location
    Houston, Texas
    Posts
    348
    Quote Originally Posted by lino.grandi View Post
    iPiSoft is an offline solution. NevronMotion works in real time.
    I can see giving actors real-time feedback by providing them with a live composite from something like TriCaster, coordinating the mocap with played-back CG overlaid on a virtual set with CG actors. I think this should be possible with two systems now. Does the realtime mocap output video provide an appropriate background so that it can be chroma-keyed into a TriCaster-style composite?

  6. #6
    Goes bump in the night RebelHill's Avatar
    Join Date
    Nov 2003
    Location
    jersey
    Posts
    5,770
    Nah... if iPi "loses" a limb it goes nutso too.

    However, iPi uses the Kinect data as a kind of "cage" and rattles the skeleton around inside it; that's how it tracks. I presume Nevron is just taking joints and their rotations as spat out by the MS SDK... in that case it should be possible for it to "know" when a limb has gone out of view, and it could simply be told to hold the last known good position until the limb comes back into view... perhaps.
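    The "freeze at the last known good position" idea could be sketched roughly like this. This is only a hypothetical illustration, not the MS SDK or Nevron API: the frame format and the per-joint `tracked` flag are assumptions (the real Kinect SDK exposes similar per-joint tracking states such as tracked/inferred).

```python
# Hypothetical sketch: hold each joint at its last well-tracked value
# until the sensor sees it again. Frame format is an assumption:
# a dict mapping joint name -> (x, y, z, tracked).

def freeze_occluded(frames):
    """Return frames with occluded joints frozen at their last good pose."""
    last_good = {}
    out = []
    for frame in frames:
        fixed = {}
        for joint, (x, y, z, tracked) in frame.items():
            if tracked:
                # Joint is visible: record and pass it through.
                last_good[joint] = (x, y, z)
                fixed[joint] = (x, y, z)
            elif joint in last_good:
                # Occluded: keep the last known good position.
                fixed[joint] = last_good[joint]
            else:
                # Never seen yet; pass through unchanged.
                fixed[joint] = (x, y, z)
        out.append(fixed)
    return out
```

    The design choice here is deliberate simplicity: freezing avoids the "berserk" flailing at the cost of a visible hold, which a later cleanup pass can blend through.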
    LSR Surface and Rendering Tuts.
    RHiggit Rigging and Animation Tools
    RHA Animation Tutorials
    RHR Rigging Tutorials
    RHN Nodal Tutorials
    YT Vids Tuts for all

  7. #7
    Super Member geo_n's Avatar
    Join Date
    Aug 2007
    Location
    jpn
    Posts
    4,677
    Quote Originally Posted by RebelHill View Post
    Nah... if ipi "loses" a limb it goes nutso too.

    However, iPi uses the Kinect data as a kind of "cage" and rattles the skeleton around inside it; that's how it tracks. I presume Nevron is just taking joints and their rotations as spat out by the MS SDK... in that case it should be possible for it to "know" when a limb has gone out of view, and it could simply be told to hold the last known good position until the limb comes back into view... perhaps.
    Hence the request to have dual-Kinect support like iPi.
    But even with a single Kinect, iPi's mocap quality is superior. For actual single-Kinect iPi output, Truebones has some really good YouTube videos about it. Some medium-complexity motions where the arms get occluded still worked.



    Quote Originally Posted by lino.grandi View Post
    I can hardly see support for dual kinect motion capture for Nevronmotion 1.0.

    That really depends on Microsoft.

    iPiSoft is an offline solution. NevronMotion works in real time.
    I was hoping it didn't depend on anything NewTek has no control over. Given the choice, I would rather have higher quality and more complex motions than realtime but simplistic ones.

  8. #8
    Electron wrangler jwiede's Avatar
    Join Date
    Aug 2007
    Location
    San Jose, CA
    Posts
    6,507
    Quote Originally Posted by geo_n View Post
    Given the choice, I would rather have higher quality and more complex motions than realtime but simplistic ones.
    Agreed. I understand that realtime capture allows for usage scenarios that iPi simply cannot, but I suspect the majority of Nevron customers would be willing to endure a bit of post-processing if it allowed for higher quality and subtlety in the captured performance. I'm not suggesting getting rid of realtime capture support, but instead providing an option to post-process the captured data that would do things like stabilize joints while occluded, perhaps smooth high-frequency noise, and so forth. Going forward past Nevron 1.0, that post-process stage could also open the door to merging multiple Kinects' captures into a single stream, and (ideally) even offering third-party devs a way to plug in their own post-processing code.
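    As a rough illustration of the "smooth high-frequency noise" step in such a post-process, here is a minimal exponential-moving-average filter over one joint channel. This is only a sketch under assumed names (`smooth_channel`, `alpha`); it is not any actual Nevron or SDK API.

```python
# Minimal sketch of a post-process smoothing pass: an exponential
# moving average that damps high-frequency jitter in one joint axis.

def smooth_channel(samples, alpha=0.3):
    """samples: list of floats (one joint axis over time).
    Lower alpha -> heavier smoothing, but more lag."""
    if not samples:
        return []
    out = [samples[0]]
    for s in samples[1:]:
        # Blend the new sample with the previous smoothed value.
        out.append(alpha * s + (1.0 - alpha) * out[-1])
    return out
```

    An offline pass like this can also be run forwards and backwards and averaged to cancel the lag, which is exactly the kind of trick a realtime filter cannot use.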
    John W.
    LW2015.3UB/2019.1.4 on MacPro(12C/24T/10.13.6),32GB RAM, NV 980ti

  9. #9
    Eat your peas. Greenlaw's Avatar
    Join Date
    Jul 2003
    Location
    Los Angeles
    Posts
    7,164
    Part of that is because iPi Mocap Studio's tracking system, being a 'post process', can track backwards as well as forwards, making it easy to interpolate through a tracking error with just a little tweaking. A few shots in Happy Box were shot using the single-Kinect system before dual-Kinect support became available, and I was able to track a decent version of Sister's 'chainsaw dance' from the single-Kinect data. That said, the dual-Kinect version of the 'chainsaw dance'--which we re-shot and tracked a week or so later when we got the beta--was much better, so we went with it. But we still kept a couple of the single-Kinect shots in the short--they were good enough for the time (this was Aug 2011) and we felt there was no need to redo them. (Okay, to be honest, we just ran out of time.)

    G.

  10. #10
    Part of that is because iPi Mocap Studio's tracking system, being a 'post process', can track backwards as well as forwards, making it easy to interpolate through the tracking error with just a little tweaking.
    By delaying the realtime preview by about 4 frames, I guess it should be possible to track this in realtime as well.
    Maybe not as good, but certainly better. The freeze option Rebel mentions is an alternative too.
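    The delayed-preview idea could work something like this: buffer a few frames, and when a short occlusion gap is bracketed by good samples on both sides, linearly interpolate across it before emitting. A hypothetical sketch (the function name and the use of `None` to mark occluded frames are assumptions):

```python
# Rough sketch: fill short occlusion gaps in a delayed stream by
# linear interpolation between the good samples on either side.

def interpolate_gaps(values):
    """values: list of float-or-None (None = joint occluded that frame)."""
    out = list(values)
    i = 0
    while i < len(out):
        if out[i] is None:
            # Find the end of this run of occluded frames.
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if i > 0 and j < len(out):
                # Gap is bracketed by good samples: interpolate across it.
                a, b = out[i - 1], out[j]
                span = j - i + 1
                for k in range(i, j):
                    t = (k - i + 1) / span
                    out[k] = a + (b - a) * t
            i = j
        else:
            i += 1
    return out
```

    The delay has to be at least as long as the gaps you want to bridge, which is why a few frames of latency buys noticeably cleaner output than a strictly frame-by-frame filter.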
    LW vidz   DPont donate   LightWiki   RHiggit   IKBooster   My vidz
