View Full Version : Kinect Fusion Point Cloud to LWO Conversion?

08-24-2013, 12:38 PM
A reviewer on Amazon says that the "Kinect Fusion" application comes with the Kinect for Windows SDK. This app is supposed to be able to digitize 3D objects and environments into "point clouds". Is there an application that can convert these into LWOs for use in LW?

08-24-2013, 05:11 PM
You can use Skanect; it's very easy to use and not too expensive, and it exports to OBJ. Remember the Kinect has low resolution, so you won't get laser-scanner quality.

08-24-2013, 10:05 PM
There's ReconstructMe (http://reconstructme.net/). The lite version is free, but it's for non-commercial use only. It doesn't save .lwo files, but LW does read .obj files.

There are probably others that I'm not aware of.

08-28-2013, 02:40 PM
Since the Kinect camera also collects RGB color images, can these software packages also create a color image with a UV map that, when applied, will texture map the object? Skanect seems to do this. I have imported OBJ files into LW before, but I seem to recall that the texture maps are lost. If they aren't lost, you might still have to reselect them.
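For what it's worth, an OBJ doesn't embed its textures; it points at a companion .mtl file, which in turn points at the image. If the .mtl or the image isn't sitting next to the .obj when you import, the map looks "lost." A minimal, hypothetical pair of files (names made up) would look like this:

```
# scan.obj
mtllib scan.mtl
v 0.0 0.0 0.0
vt 0.0 0.0
usemtl scanned_color
# faces reference vertex/UV indices, e.g. f 1/1 2/2 3/3

# scan.mtl
newmtl scanned_color
map_Kd scan_color.png
```

So before assuming LW dropped the texture, it's worth checking that both the .mtl and the referenced image traveled along with the .obj.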

08-30-2013, 09:26 AM
I joined the Skanect forum and have had some of my questions answered there. There's also another thread that seems to answer some of my questions. It seems like a lot of people doing 3D scanning are using a free, open-source package called MeshLab, which I have downloaded but haven't yet installed. I won't have much use for it until I get the Kinect 2 camera, which probably won't happen until next year.

08-30-2013, 09:50 AM
I think the way Skanect does it is by assigning color to the vertices. I'm pretty sure you can convert that to an actual image texture after generating a UV map of your geometry. Sorry, I haven't worked with Skanect very much--only played with it briefly.
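To illustrate the vertex-color-to-texture idea, here's a rough Python sketch (not Skanect's actual method; all names and data are made up). It splats each vertex's RGB color at that vertex's UV position in an image and writes a PPM file. Real baking tools also interpolate color across each triangle to fill the gaps between vertices, which this toy version skips:

```python
# Hypothetical sketch: bake per-vertex scan colors into an image texture
# using each vertex's UV coordinate. Assumes you already have a UV map
# and per-vertex RGB colors (0-255 tuples), as in a colored Kinect scan.

def bake_vertex_colors(uvs, colors, size=64):
    """Splat each vertex's color at its UV position in a size x size image."""
    img = [[(0, 0, 0)] * size for _ in range(size)]
    for (u, v), rgb in zip(uvs, colors):
        x = min(int(u * (size - 1)), size - 1)
        # Flip V: UV origin is bottom-left, image row 0 is the top.
        y = min(int((1.0 - v) * (size - 1)), size - 1)
        img[y][x] = rgb
    return img

def write_ppm(path, img):
    """Write the image as a plain-text PPM, readable by most image tools."""
    with open(path, "w") as f:
        f.write("P3\n%d %d\n255\n" % (len(img[0]), len(img)))
        for row in img:
            f.write(" ".join("%d %d %d" % px for px in row) + "\n")

# Tiny made-up example: three vertices with UVs and colors.
uvs = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
write_ppm("vertex_bake.ppm", bake_vertex_colors(uvs, colors, size=8))
```

The resulting image could then be assigned as a color map through the same UVs in LW.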

I've done a little more work with ReconstructMe. It doesn't support color yet but according to the dev it will soon. I'm almost certain the method they use will be similar to Skanect's.

As a side note, iPi Mocap Studio somehow projects the video onto the point cloud during tracking--it's really neat to see the 3D actor figure moving in full color while you track it. According to the devs, they had to create special code to keep the 2D RGB video data aligned to the 3D mesh, and they intend to use this to enhance their tracking quality in a future build. Unfortunately, iPi Mocap Studio does not currently support output of the point cloud. (It would only be points anyway, not mesh geometry like ReMe's or Skanect's output.)