PDA

View Full Version : Easiest, cheapest at home head & body 3d scanning (w textures)



sami
12-26-2013, 01:12 AM
Hey, just wondering if anyone here has experience with this? I've done a fair amount of looking around, but I find the LW community is often a great source of practical, real-world information, so this seemed like a good place to pick some savvy brains...

Anyway, sometime in the next 3 months, I'll be starting a project where I'll need to do multiple 3D scans of various people at my home studio. It doesn't need to be mm-perfect, but the highest-resolution point cloud/object with a UV or per-pixel texture would be best. I'll be doing a large number of scans over several weeks, so ideally the scanning would be as robust and easy as possible, and not "fiddly" if you know what I mean. A 10-15 min scan of a full body pose would be ideal, since there will be so many of them for this project. It's not feasible for me to have each scan take 2 or 3 hrs plus an hour's cleanup per model. They don't have to be totally perfect scans, but a very good watertight representation with a color texture and bump/normal map would be good. Or at least the data cleanup needs to be scriptable, so that I can write a script to easily clean up the hundreds of source models I expect to scan. This is more of a research thing than a typical contract gig this time, and a good excuse to increase my 3D scanning chops.

Eventually I need to be able to compare models so positioning and stuff is important, but that may be solved by using a marked, revolving stand to ensure actors are similarly positioned.

Ideally the hardware & software for this project (beyond LW and all the other stuff I already use) would be under $2500. This includes any portable depth scanners or anything. I've looked into:

ReconstructMe - it seems fiddly and a very manual solution, and it may not have color/UV support yet. So this is a no-go, I expect...?

Skanect with a Kinect for Windows or a PrimeSense/Asus scanner. The software looks clean, easy and quick on YouTube, but I'm not sure if the scanning process is easy or quick enough, or high enough resolution. I'd like to be higher res than a crappy game model, but it can be lower than a perfect laser scan.

Cirri - I saw this here a while ago, but it looks a ways off and may be better suited to reliefs, textures or landscapes...?

Agisoft PhotoScan - this Russian software looks awesome, and videos of the reconstructed 3D models look great. I'm not sure about the process, though; it might take too long to make each one? Post-process crunching is fine as long as it's automated, but the scan process itself shouldn't be long. I don't mind taking a lot of photos, but it's unlikely I'll have a multi-cam rig with many DSLRs sitting on surrounding tripods for the months of this project. I will have one camera I can take photos with by hand, and I can set up the lighting and a stand for the scan process. Is PhotoScan a good way to get the resolution I need?


I missed the boat on the Kinect One (v2, whatever it is called) for Windows developer program, so any Kinect scanning would be with the old Kinect for Windows, as I'll need to start in a couple of months and anything released much later than that timeframe won't work. And I think it'd be a nightmare to switch workflows mid-project and keep the data consistent with whatever method I started with (all the data needs to match and be looked at as a whole). I expect it will be mid-year before Kinect 2 is available to the public.

Happy to get some 3rd-party portable hand scanner as long as there's a useful workflow and software for it, or it works with Skanect or something similar. But I wouldn't want to go over $3k for the hardware budget for this unless there was a compelling reason - this space is changing quickly, and whatever I buy now will sadly be obsolete next year.

Have I missed anything? What would you guys suggest? These would be still, non-animatable 3D body scans (well, I might morph between them eventually, but I don't intend to rig them or anything).


Thanks so much for your thoughts as you digest your turkey & hope your holidays are wonderful! :)

sami
12-26-2013, 03:36 AM
Has anyone used the new Sense scanner?

http://cubify.com/sense/index.aspx?hp_bn_sense

It looks cool being used with a Microsoft Surface Pro 2 tablet - super portable and that should help workflow...

spherical
12-26-2013, 09:08 PM
Be careful. It's Cubify, which is 3D Systems. Not to be trusted. Borg of the 3D Printing Universe. Proprietary everything, even where it's not necessary. Their model is to get you into their sphere, then bleed you dry with over-priced expendables. As opposed to making a product that is better than the competition and making money that way, they force you into it by luring you in and making sure you can't get replacements on the open market that work.

cresshead
12-26-2013, 09:37 PM
Be careful. It's Cubify, which is 3D Systems. Not to be trusted. Borg of the 3D Printing Universe. Proprietary everything, even where it's not necessary. Their model is to get you into their sphere, then bleed you dry with over-priced expendables. As opposed to making a product that is better than the competition and making money that way, they force you into it by luring you in and making sure you can't get replacements on the open market that work.

really? ouch...thanks for the heads up on this.

Greenlaw
12-26-2013, 10:39 PM
I have ReconstructMe and PhotoScan here but have only fiddled with them a little. I intend to use them on a future project, but that won't happen for a while. Here's what I know from my small experience with these programs:

ReconstructMe works really well, but with Kinect the scans are not high quality. I think this has more to do with the low resolution of the Kinect device. That said, the quality appears to be higher than what I've seen from other Kinect-based scanning programs. The software also supports the Carmine sensors, and this is where it really shines--the quality of the scans I've seen with Carmine is pretty amazing. Unfortunately, Apple recently purchased PrimeSense, the company that makes Carmine, and the devices are no longer being sold. Bummer...I was just about to buy a 1.09 model (for near scanning) too.

ReconstructMe recently got updated to version 2.0, which supports higher quality meshes and color--I haven't tried this version yet. My guess is that it applies the color to the vertices, which is how a lot of these systems work. I don't know if it then bakes the color to a texture.

PhotoScan is a photograph-based system; it does not use Kinect. The quality is much higher than Kinect-based systems. The downside is that it's not a realtime system. The idea is that you take a bunch of photos from many angles, covering as much of the subject as possible. The software then 'stitches' the images and generates a mesh from them. The color is then projected to the vertices. A UV map is generated and the colors are baked to it. I've only done some of the tutorials with this software. It works really well, but it also takes more work--depending on how you shot your subject, you need to essentially rotoscope the subject from the background for nearly every image. The software has tools for this, and it can interpolate quite a bit of the background subtraction if it has enough data. But, yeah, it's a lot of work. On the other hand, it would be a lot more work to model and paint the figure manually, so you need to weigh the advantage against your project resources (time/crew/skills/budget).

I don't know why this software doesn't support HD video--seems like that would be an ideal way to capture a lot of images very quickly. I guess you could shoot a video and extract the photos manually for the software. I haven't tried that yet.
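If you did go the video route, picking which frames to keep is trivial to script. A rough sketch of the sampling logic (the `frame_indices` helper is my own, nothing official; the actual extraction would be done with ffmpeg or similar):

```python
def frame_indices(total_frames, n_photos):
    """Evenly spaced frame indices for pulling n_photos stills from a clip."""
    if n_photos <= 0 or total_frames <= 0:
        return []
    n = min(n_photos, total_frames)
    step = total_frames / n
    # Centre each sample in its interval so the clip's ends aren't favoured.
    return [int(step * i + step / 2) for i in range(n)]

# e.g. a 30 s clip at 25 fps, sampled down to ~75 stills for PhotoScan:
picks = frame_indices(30 * 25, 75)
```

The point is just to get uniform angular coverage from a slow orbit of the subject, rather than dumping every frame into the solver.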

Some artists use a hybrid system--Kinect-based software for the full body scan and the higher-res PhotoScan for the head and face. I think this is done because it's difficult to get a person to stand perfectly still for a long time, and you can work quickly with Kinect. The head and face are easier because the subject can sit in a chair for that.

Regardless of which method you choose, it's likely that you'll want to do a bit of clean up if the meshes are meant to be animated. You'll probably want to pass the geometry through Zbrush or 3D Coat for retopo and final texturing.

Both programs are fairly inexpensive for individual artists--ReMe is like $250 and PhotoScan is around $180.

G.

sami
12-26-2013, 11:52 PM
Be careful. It's Cubify, which is 3D Systems. Not to be trusted. Borg of the 3D Printing Universe. Proprietary everything, even where it's not necessary. Their model is to get you into their sphere, then bleed you dry with over-priced expendables. As opposed to making a product that is better than the competition and making money that way, they force you into it by luring you in and making sure you can't get replacements on the open market that work.

Thanks for the heads up, but I'm not too worried about 3D printing at this stage. This Sense scanner seems to work with Skanect too, from what I've read, and I kind of see it as a disposable product - if it works for my needs, it will pay for itself. But good to know Cubify isn't so straight up. I've also read elsewhere that their support is horrendous, and that's why they accept no refunds at all...

sami
12-27-2013, 12:09 AM
I have ReconstructMe and PhotoScan here but have only fiddled with them a little. I intend to use them on a future project but that won't happen for a while. Here's what I know about these from my small experience with these programs:

ReconstructMe works really well but with Kinect the scans are not high quality. I think this has more to do with the low resolution of the Kinect device. That said, the quality appears to be higher than what I've seen from other Kinect based scanning programs. The software also supports the Carmine sensors and this is where it really shines--the quality of the scans I've seen are pretty amazing with Carmine. Unfortunately, Apple Computer recently purchased PrimeSense, the company that makes Carmine and they stopped selling the devices. Bummer...I was just about to buy a 1.09 model (for near scanning) too.

ReconstructMe recently got updated to version 2.0, which supports higher quality meshes and color--I haven't tried this version yet. My guess is that it applies the color to the vertices, which is how a lot of these systems work. I don't know if it then bakes the color to a texture.

Good to know about the 2.0 color support; otherwise, to me it's just a free tech demo. But I thought the 1.09 Carmines were still available? And I could have this wrong, but I thought that was the sensor in the Cubify/3D Systems Sense scanner? Not sure...



PhotoScan is a photograph based system, it does not use Kinect. The quality is much higher than Kinect based systems. The downside is that it's not a realtime system. The idea is that you take a bunch of photos from many angles, covering as much of the subject as possible. The software then 'stitches' an image and generates a mesh from it. The color is then projected to the vertices. A UV map is generated and the colors are baked to it. I've only done some of the tutorials with this software. It works really well but it also takes more work--depending on how you shot your subject, you need to essentially rotoscope the subject from the background for nearly every image. The software has tools for this and it can interpolate quite a bit of the background subtraction if it has enough data. But, yeah, it's a lot of work. On the other hand, it would be a lot more work to model and paint the figure manually, so you need to weigh the advantage against your project resources (time/crew/skills/budget)....


Thanks! I know it's a photo-based system, but what I don't know is how fiddly it is. I don't mind taking 50-100 photos by hand in 5 to 10 minutes, but masking, garbage-matting or rotoscoping each picture manually is not going to work. I've seen some PhotoScan setups with 16 synced DSLRs on tripods around the person, and I'm not going to have that many - probably just one or two, hand-held, not synced. The high res looks great, but PhotoScan is a no-go if I have to prep the images too. If all I have to do is take 100 photos for each model, then click a button or two and wait for the post-process, that's fine; but if we're talking 2 hrs or more of work on each one, that's not going to work, as I intend to scan around 250-300 full people.

I'm comfortably adept with retopo and 3DCoat, but don't want to have to do that with each of these. I may use the output models (no matter how high res) in a render or in a visualization in Unity but don't need to rig or animate at this stage. At least not with all of them - so I'll avoid that time consuming step if I can...

Do you think I can take 50-100 photos per person and spend less than 1 hr generating each model in PhotoScan, without any rotoscoping etc.? I take it this means PhotoScan doesn't do any kind of object tracking to separate the subject from the background?




Some artists use a hybrid system--Kinect based software for the full body scan and the higher res PhotoScan for the head and face. I think this is done because it's difficult to get a person to stand perfectly still for a long time and you can work quickly with Kinect. The head and face is easier because the subject can sit in a chair for that.

Regardless of which method you choose, it's likely that you'll want to do a bit of clean up if the meshes are meant to be animated. You'll probably want to pass the geometry through Zbrush or 3D Coat for retopo and final texturing.

Both programs are fairly inexpensive for individual artists--ReMe is like $250 and PhotoScan is around $180.

G.

If I were doing a game then yes, the hybrid system you suggest sounds great, but ideally I'd like an all-in-one solution (other than the LW import and render)...

Are there any other full-body scanners under a few $K that are quick and dirty like I describe?

I found some Sense scans someone uploaded, and the texture looks reallllly low-res :-(

I appreciate your experience & advice Greenlaw! :-)

erikals
12-27-2013, 03:27 AM
as for ReconstructMe, it really sucks that PrimeSense stopped selling Carmine 1.09
which had great scan quality... :\


but... PhotoScan seems to be superior >

http://www.youtube.com/watch?v=u04HHvDeyAA
http://www.youtube.com/watch?v=2_insfYWPkA
http://vimeo.com/66650639
http://www.youtube.com/watch?v=vuIUpfGOSWo


I don't know why this software doesn't support HD video...
...I guess you could shoot a video and extract the photos manually for the software.

i did a quick test some time back, seemed to work :]
but the subject must sit still, which i didn't do when i taped myself (!) guess that's why i ended up with low-res quality...
forgot to mention, i filmed with a 1080i camera, so each frame is interpolated / blurred. it would have been better to use a 1080p camera.

a test by Erik Ferguson >


http://vimeo.com/67172549

PhotoScan would be quite a bit cheaper, as most of us already own a digital camera > $180
ReconstructMe requires a Kinect to work, so if you need to buy that as well, the price becomes > $350


the best would be to own both systems :)

adk
12-27-2013, 04:21 AM
I vaguely recall testing PhotoScan at my previous work; I tried green-screening some (small) models in order to avoid the manual photo prep work. From memory it worked OK, but I can't be 100% sure, as I don't have that work at my disposal any more. You might wanna test the demo perhaps.

Greenlaw
12-27-2013, 07:39 AM
Good to know about 2.0 color support, otherwise to me it's just a free tech demo.

Not really...here's a few examples of scans from ReMe 1.0 using Carmine 1.09:

http://reconstructme.net/reconstructme-1-2-using-carmine-1-09-high-detail/

IMO, with this level of detail, it's far more than just a tech demo. For color textures, you could simply projection-paint photos of the subject onto the mesh in 3D Coat--easy peasy. (I just did something like that recently on a fairly generic mesh for a CG stunt double in a feature--it wasn't meant for closeups, but it looked enough like the actor for what I needed. Now, if I'd had a ReMe/Carmine scan to work with in 3DC, it could have been perfect.) :)

It's not free though. ReMe is about $250 and a Carmine 1.09 sensor would have been around $200.

As for the hybrid approach, it's what 1k0 was using for those beautiful 'Noël en Alsace' shorts for the last couple of years. There's info about it on his blog: http://1k0.blogspot.com/

G.

Greenlaw
12-27-2013, 07:52 AM
I see on ReMe's blog that they are busy working on supporting the new Kinect 2 for Windows. That's great news for me because I was planning on getting a Kinect 2 for Windows for iPi Mocap Studio when it begins supporting it. (iPi Soft recently got their developer's pre-release device too.) I'll definitely be renewing my ReMe license for that device.

spherical
12-27-2013, 05:50 PM
really? ouch...thanks for the heads up on this.

You're welcome. Warning our colleagues is important.


Thanks for the heads up, but I'm not too worried about 3d printing at this stage,

The reference wasn't about 3D printing, it was about 3D Systems' mindset and business practices. The reference was provided to place things into perspective.


and this Sense scanner seems like it works with Skanect too as I've read and I kind of see it as a disposable product if I get it as it will pay for itself if I use it and it worKs for my needs. But good to know Cubify isn't so straight up. I've also read elsewhere that their support is horrendous and that's why they accept no refunds at all...

Unless 3D Systems says, specifically, that their scanner will work with something else, it probably won't.

Here's what they did in the printing arena:


Borged Bits from Bytes
-
Immediately, without warning, killed a perfectly good pair of printers, RapMan and 3D Touch, the latter being one of the most capable and well built in the industry, marking them as "discontinued".
-
Redirected the BfB website to the cubify.com discontinued-products page and killed the BfB support forums. Again, no warning. Information that had been built up in there to help users was gone, all because there were a few negative posts about the quality of the Axon slicer and the prints it produced, and a couple of users had difficulty with their hardware. It is a support forum, so there will be problems posted. Complete slap in the face, especially for customers who had recently purchased a BfB printer thinking that its long history indicated something. Just because the hardware is "discontinued" to you doesn't mean that I don't still have one that I would like to keep using. Rude.
-
"Developed" a "new" printer, CubeX. Gee, it really looks kind of familiar.... A regurgitated 3D Touch.
-
Here's where it gets better:
The CubeX has new extruder drive mechanisms and uses 1.8mm diameter filament, where the 3DT uses 3mm. Well, that's okay, right up until you realize that industry standards for filament diameters are 1.75mm and 3mm, and the actual diameters are most often smaller by as much as .3mm. Where they screw their customers over here is that the new extruder drive is a fixed-gap hobbed wheel, where the previous design was a lead screw with a set of spring-loaded pressure rollers to accommodate variances in filament diameter. Put non-3D Systems filament into the extruder and it won't drive properly, because there isn't enough engagement with the mechanism. Nice.
-
The 3DT has three hubs in the bottom cavity to accommodate reels of filament. The CX has big proprietary cartridges.

The smoke screen BS story was that the filament is kept in a controlled environment for better quality (BfB PLA was very problematic with snapping of the filament overnight if left in the feed tubes) and the machine kept track of filament use, so you would know if you have enough material to complete the part you want to print. Nice idea, if it was kept wholesome.

The real reason is that the chip that keeps track of filament use and prevents the machine from printing if there isn't enough also prevents the machine from printing at all if there is no cartridge installed. Well, not so bad... right?
-
When you can readily purchase high-quality filament for $32USD on the open market, versus $110USD for a cartridge that has LESS material in it, that you have to buy from them and order way ahead because their delivery sucks, it gets to be very clear. The light also dawns when you get a new over-priced cartridge and the chip doesn't interface with the machine. You've got the material, you just can't use it, and your schedule is in the toilet.
-
Initial efforts to run open market filament were unsuccessful but, when you are rude enough, long enough to your customer base, they will find ways around you.
-
New firmware updates came along that had features that many really wanted but, when flashed, the users found that more and more walls were coded into them in order to attempt to stop the reverse engineering and keep their cash cow money stream intact. Users had to do without improvements if they wanted to free themselves from servitude. Better that than continue to pay the piper for junk.
-
The Cubify slicer is crap, but it's the only one that will interface with the machine. But wait, there's more... it outputs encrypted print files. Standard slicers output G-code files that can be edited in a text editor. Having this capability has saved my bacon more than once. Having it ripped away by an insecure bunch of @$$hol3$ who think they rule the world is unacceptable.
-
As a result, attempts were made to use other slicers, and the ongoing RE efforts are allowing that to happen, if you keep below a certain firmware rev. Not surprisingly, it was found that they introduced a bunch of non-standard G-code commands into the firmware. That just made the RE effort more difficult, not impossible.
-
Users were having difficulty with huge puddles of extrusion being deposited at the start of a print, thereby ruining it. Eventually, it was uncovered that this was a misplaced purge. A huge one. Wastes about 0.5% of your filament; normally into the wipe box. Great. It was appearing at the 1st layer start, due to not using their crappy slicer. "Hey, we get to sell more over-priced filament. It's a win-win, both on our side. What's not to like?"
-
The last straw? The machine has to be "activated" by contacting their website in order to be used. Seriously?


I really don't see any of their products being run differently.

"Democratizing 3D printing for everyone" my @$$. Companies like this should just die, period, and the suits that run them should be made to actually work for a living, for once...

sami
12-28-2013, 12:27 AM
Greenlaw, thanks for the high-detail images - that's good news. :) However, without UV image color or per-pixel color (and I understand that's coming), it's kind of useless to me for this project, as unfortunately there is no way I'm painting 200-300 models by hand, which is why I need good texture capture. With the quantity of captures I intend to get, I've no time to load them into 3DC and paint them, so I need a "full" working solution, or at least one that works within the next couple of months. Maybe ReMe with color will be ready by then?

Or maybe Kinect 2 for windows will be available to the public by then? I think it will be at least 3 months from what I've read? Maybe I can pick one up on eBay or something...?

How is ReMe's software? The Cubify software and Skanect seem to do full watertightening and "inpainting" of the captured texture, and look very simple. Unfortunately I can't be fiddling around per model so much when I have so many to scan for this.

I guess the key thing for me is to keep it, from the start of a full-body scan to the color OBJ export, down to 2 hrs total max each; otherwise it's too much work for this budget. I don't care if I need a couple of weeks of testing ahead of the project; just once I start, I need things to be semi-smooth. And since no rigging or hand-painting for extreme closeups is needed, I'm hoping the scanning gear and software can come in under $3k.

Do you guys think PhotoScan can do this? I might be able to set up a lit green screen and rotating platform with one DSLR for PhotoScan, if that helps the workflow and avoids per-image post-processing. Thanks adk, I think I will play with PhotoScan and see just how fiddly the rotoscoping is, or whether a model can be created without it.

The reason I was keen on the Sense scanner is that I saw a video of a dude with a Windows tablet scanning someone's head and torso in the middle of a convention floor, totally ad hoc, by hand, in just 30 sec! The quality was low, but maybe passable. It looked easy and quick, in an unlit, busy area with no planning, so I hoped a bit of prep and more time would make it better...

Spherical, thanks for the details, and I fully agree that companies with business practices like you describe should not be supported. But for the price, and if I go into it fully aware that they suck - if it works, it might be worth it. But yes, that is strongly dissuading me from getting one, even if it is throwaway....

It's just that the jump from the sub-$3K solution range to well over $15K is difficult for me for this project. The Artec scanners or GoScans are like $25K, flash lights all over the place, and are so custom that I doubt the workflow is worth the price...

Thanks for the ideas and experience though, guys. Maybe someone else can chime in with more options? :)

sami
12-28-2013, 12:56 AM
Erikals, btw thanks for that Agisoft tutorial link! It showed how horribly time-consuming masking within Agisoft will be. Clearly the output is great, and I don't care that it might take computer time (as long as processing is unattended), but hand-masking with those primitive rotoscoping tools makes me cringe.

Reminds me of the old days when I'd have to mask/rotoscope like that with Elastic Reality. I just don't have the patience for that anymore 8~

Oh well... Maybe green screen rotating platform will help me bypass that step?

Also, one more PhotoScan question if anyone knows: do I need paired shots of the person, or can I just take many photos of the same subject a few degrees apart, all the way around? I.e., do I need synced cameras or anything?

I also have SynthEyes 2012; Russ Andersson seems like the god of this sort of thing - does the new version of SynthEyes generate models now? I could upgrade if it does much more than motion and object tracking now...?

sami
12-28-2013, 02:01 AM
Didn't know this was possible - here's a video of a guy manually aligning 3 separate Skanect Kinect scans and then having Skanect combine them into one. I assume this is to get higher resolution?


http://www.youtube.com/watch?v=lZ45H2o6e68

Greenlaw
12-28-2013, 02:47 AM
According to Microsoft's own website for Kinect 2 for Windows (http://www.microsoft.com/en-us/kinectforwindowsdev/newdevkit.aspx), they're shooting for next summer for final release, so I wouldn't count on that in the next couple of months.

From what I recall, PhotoScan isn't too fussy about how the images are shot so long as there is full coverage. In other words, handheld will work but you need to be sure there is enough coverage for the software to stitch a complete set of data. Regarding the masking process, yes, it's tedious. And it doesn't help that the masking tool they provide isn't really optimal for this sort of thing.

I have the latest SynthEyes here. I use it only for match moving plates and for some basic set reconstruction for placement reference. I've never used it for full reconstruction though. I'll take a look at the manual when I have time.

I haven't used ReconstructMe for color--that's a fairly new feature for 2.0--but according to info from the ReconstructMe forums, an early version of the software used two methods for color. The first was vertex coloring. This is the easiest method, but it's not very detailed. The second method is to map a photo image into each triangle. I don't quite understand how that works, but I imagine this data could be assembled as a UV map and color texture. I don't know if ReMe actually does that though--I don't think it does, but you might want to ask in their forums about it.

Skanect does vertex color. I'm not sure if it can transfer this to a UV map internally--probably not.

That said, if you have a mesh with vertex color data from either program, you should be able to bake that to a retopologized mesh with an auto-generated UV map in 3D Coat. To be clear, I've used 3DC's automated retopo and auto-generated UV maps, and they're reasonably good for many production-level situations. Naturally, if you need 'hero' quality, you'll want to supervise the process or do it manually, which is not that difficult in 3DC. But many times, I've found that the automated stuff can be good enough even for production work. The step I've never done before is baking vertex color to a UV color texture map--this should be easy to do in 3DC, but you might ask at the 3DC forums to be sure.

The downside to this is, if you don't have a hugely detailed scanned mesh (millions of points) to begin with, the vertex color will probably bake as a fairly low-quality texture map. I don't think the current Kinect is capable of that kind of resolution. IMO, if the final texture map needs to be high quality, you're much better off projecting a high-res photo on the geometry, which can be an easy process, especially if the mesh matches the person in the photo. (It's really only difficult when the mesh does not accurately resemble the person, which means you need to edit the mesh or edit the images.) You'll want front, side and back photos of each person. If you have those, it should only take a few minutes to projection-paint each one. Obviously, if the character is a 'hero' model, it will take a bit longer to get it perfect. I guess if you have both, you could really speed up the process--you might even get away with projection-painting only the front image.
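For what it's worth, the vertex-color-to-texture bake is conceptually just barycentric rasterization in UV space: each triangle's three vertex colors get blended across the pixels it covers. A toy sketch of that idea (my own code, not what 3DC actually does internally):

```python
def bake_triangle(tex, uvs, colors):
    """Rasterize one triangle's vertex colors into a texture.

    tex: H x W list of rows of (r, g, b) tuples; uvs: three (u, v) pairs
    in 0..1; colors: three (r, g, b) vertex colors.  Plain barycentric
    interpolation, no antialiasing or seam padding.
    """
    h, w = len(tex), len(tex[0])
    (x0, y0), (x1, y1), (x2, y2) = [(u * (w - 1), v * (h - 1)) for u, v in uvs]
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if area == 0:
        return  # degenerate triangle, nothing to fill
    for y in range(h):
        for x in range(w):
            # Barycentric weights of pixel (x, y) w.r.t. the triangle.
            w1 = ((x - x0) * (y2 - y0) - (x2 - x0) * (y - y0)) / area
            w2 = ((x1 - x0) * (y - y0) - (x - x0) * (y1 - y0)) / area
            w0 = 1.0 - w1 - w2
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel is inside
                tex[y][x] = tuple(
                    w0 * colors[0][c] + w1 * colors[1][c] + w2 * colors[2][c]
                    for c in range(3))
```

It also shows why a sparse scan bakes badly: with only one color sample per vertex, everything between vertices is just a smooth blend, so texture detail can never exceed the mesh density.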

Anyway, if you really have 200-300 unique 'characters' to scan, I'm not sure there is an easy way to do this--it sounds like an incredibly ambitious project. The sheer number of characters alone can make the project very labor-intensive no matter how you look at it. Do you really need unique bodies for all 200+ characters? You might get away with, say, a half dozen body types and randomizing the clothing colors procedurally. Just a thought.

When I was with the Box, we sometimes hired specialists for scanning actors, and I think they charged around $5,000 per scan. I believe the cost included rights to use the scanned person's image in a given production. The quality was fairly good, but at that price we reserved this process for 'hero' models, especially since there was usually additional work to be done on the model, like creating face morph targets, hair, etc. For non-hero or even semi-hero models, we often used tricks like swappable modular body parts or simple hue-shifting of textures. This was mostly because our deadlines tended to be very short and we needed to see good results quickly.

Good luck and let us know what method works out for you.

G.

spherical
12-28-2013, 02:47 AM
Spherical, thanks for the details, and I fully agree, companies with business practices like you describe should not be supported, but for the price and if I go into it fully aware they suck, if it works, it might be worth it. But yes, that is strongly dissuading me from getting one even if it is throwaway....

So then you don't "fully agree".... I'll leave it at that.

erikals
12-28-2013, 03:13 AM
From what I recall, PhotoScan isn't too fussy about how the images are shot so long as there is full coverage. In other words, handheld will work but you need to be sure there is enough coverage for the software to stitch a complete set of data.

Regarding the masking process, yes, it's tedious. And it doesn't help that the masking tool they provide isn't really optimal for this sort of thing.

looking at the video tutorial over again, a greenscreen should make the masking process fairly fast... \:]

sami
12-28-2013, 03:42 AM
According to Microsoft's own website for Kinect 2 for Windows (http://www.microsoft.com/en-us/kinectforwindowsdev/newdevkit.aspx), they're shooting for next summer for final release, so I wouldn't count on that in the next couple of months.
Yeah, I finally found some posts too which said their release date (if you aren't on the dev program) would be mid-summer 2014, so this is a no go for this one I'm afraid...



From what I recall, PhotoScan isn't too fussy about how the images are shot so long as there is full coverage. In other words, handheld will work but you need to be sure there is enough coverage for the software to stitch a complete set of data. Regarding the masking process, yes, it's tedious. And it doesn't help that the masking tool they provide isn't really optimal for this sort of thing.
Bummer, with 300 subjects, this would end up being the bulk of the work, and not a great use of time... But I might get the demo (if there is one) and see how it works with a green screen setup, maybe I can script something to speed up the masking?
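For what it's worth, the masking step could plausibly be scripted outside PhotoScan, since PhotoScan can import per-image black-and-white mask files. A rough chroma-key sketch in pure NumPy (the `green_mask` helper and the `margin` value are made up for illustration and would need tuning to the actual lighting):

```python
# Sketch of a batch green-screen masker for PhotoScan-style image masks
# (white = keep, black = masked out). Pure NumPy; thresholds are guesses.
import numpy as np

def green_mask(rgb, margin=40):
    """Return a uint8 mask: 0 where a pixel looks like screen green,
    255 everywhere else. rgb is an HxWx3 uint8 array."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # "Green" = green channel dominates both red and blue by a margin.
    is_green = (g - r > margin) & (g - b > margin)
    return np.where(is_green, 0, 255).astype(np.uint8)
```

Saving each mask next to its photo (e.g. with Pillow's `Image.fromarray(mask).save(...)`) and importing the set into PhotoScan would, in principle, replace the manual masking; a morphological open/close pass would help against speckle from uneven screen lighting.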



I have the latest SynthEyes here. I use it only for match moving plates and for some basic set reconstruction for placement reference. I've never used it for full reconstruction though. I'll take a look at the manual when I have time.
Thanks, that's kind of you :)



I haven't used ReconstructMe for color--that's a fairly new feature for 2.0--but according to info from the ReconstructMe forums, an early version of the software used two methods for color. First was vertex coloring. This is the easiest method but it's not very detailed. The second method is to map a photo image into each triangle. I don't quite understand how that works but I imagine this data could be assembled as a UV map and color texture. I don't know if ReMe actually does that though--I don't think it does but you might want to ask in their forums about it.

Ahh, so it sounds like ReconstructMe is still too basic/early days to be useful for me.



Skanect does vertex color. I'm not sure if it can transfer this to a UV map internally--probably not.

That said, if you have a mesh with vertex color data from either program, you should be able to bake that to a retopologized mesh with an auto-generated UV map in 3D Coat. To be clear, I've used 3DC's automated retopo and auto-generated UV maps, and they're reasonably good for many production level situations. Naturally, if you need 'hero' quality, you'll want to supervise the process or do it manually, which is not that difficult in 3DC. But many times, I've found that the automated stuff can be good enough even for production work. The step I've never done before is baking vertex color to a UV color texture map--this should be easy to do in 3DC but you might ask at the 3DC forums to be sure.

The downside to this is, if you don't have a hugely detailed scanned mesh (millions of points) to begin with, the vertex color will probably bake as a fairly low quality texture map. I don't think the current Kinect is capable of that kind of resolution. IMO, if the final texture map needs to be high quality, you're much better off projecting a high res photo on the geometry, which can be an easy process, especially if the mesh matches the person in the photo. (It's really only difficult when the mesh does not accurately resemble the person, which means you need to edit the mesh or edit the images.) You'll want front, side, and back photos of each person. If you have those, it should only take a few minutes to projection paint each one. Obviously, if the character is a 'hero' model, it will take a bit longer to get it perfect. I guess if you have both, you could really speed up the process--you might even get away with projection painting only the front image.

Good idea. Maybe I can just use the one-click 3DC auto retopo and texture baking. I don't want to fiddle in 3DC as that too can be time consuming, but maybe, as you say, the automated tools will be speedy. Not sure if I need to retopo either, since for my purposes a heavy model might be ok...? I'll play and see. There are no 'hero' models really, and closeups will probably be mid shots...
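If the built-in bake doesn't pan out, baking vertex color to a UV texture is also scriptable in principle: for each triangle, rasterize it in UV space and blend the three corner colors with barycentric weights. A minimal NumPy sketch of that idea (naive per-triangle loop, no edge padding or gamma handling; the function and argument names are made up for illustration):

```python
import numpy as np

def bake_vertex_colors(uvs, faces, vcolors, size=256):
    """Rasterize per-vertex RGB colors into a square UV texture.
    uvs:     (V, 2) floats in [0, 1]
    faces:   (F, 3) ints indexing into uvs/vcolors
    vcolors: (V, 3) floats in [0, 1]
    Returns a (size, size, 3) float image."""
    tex = np.zeros((size, size, 3))
    for tri in faces:
        p = uvs[tri] * (size - 1)              # corners in pixel coords
        c = vcolors[tri]
        # Bounding box of the triangle, then test every pixel in it.
        x0, y0 = np.floor(p.min(axis=0)).astype(int)
        x1, y1 = np.ceil(p.max(axis=0)).astype(int)
        xs, ys = np.meshgrid(np.arange(x0, x1 + 1), np.arange(y0, y1 + 1))
        pts = np.stack([xs.ravel(), ys.ravel()], 1).astype(float)
        v0, v1 = p[1] - p[0], p[2] - p[0]
        den = v0[0] * v1[1] - v0[1] * v1[0]
        if abs(den) < 1e-12:                    # degenerate triangle
            continue
        # Barycentric weights of each candidate pixel.
        d = pts - p[0]
        w1 = (d[:, 0] * v1[1] - d[:, 1] * v1[0]) / den
        w2 = (v0[0] * d[:, 1] - v0[1] * d[:, 0]) / den
        w0 = 1.0 - w1 - w2
        inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
        cols = w0[:, None] * c[0] + w1[:, None] * c[1] + w2[:, None] * c[2]
        ix = pts[inside].astype(int)
        tex[ix[:, 1], ix[:, 0]] = cols[inside]
    return tex
```

With hundreds of scans this kind of batch bake is only worthwhile if the scan mesh is dense enough that per-vertex color approaches the target texture resolution, which is the caveat Greenlaw raises above.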




Anyway, if you really have 200-300 unique 'characters' to scan, I'm not sure there is an easy way to do this--this sounds like an incredibly ambitious project. The sheer number of characters alone can make the project very labor intensive no matter how you look at it. Do you really need unique bodies for all 200+ characters? You might get away with, say, a half dozen body types and randomize the clothing colors procedurally. Just a thought.
....
Good luck and let us know what method works out for you.
G.

I really do have that many people to scan, and this isn't me being dorky or not wanting to do it the "right way" you kindly suggest (and with which I totally agree) - it's just that this is mostly donated time by myself and a few staff, plus a modest budget, for essentially a medical research project for a good cause, not a game or VFX shot etc. If it was, I'd do what you suggest or get a better budget ;-) but for this we need portability etc. to be able to scan those involved in non mocap studio situations. Plus my other project work needs to continue in parallel as usual, so that's partly why time and effort are so critical.

I was willing to buy some mid-range scanner - but that market seems conspicuously absent - so it looks like for the time frame and limitations of this project, my choices are:


1. Sense scanner using its software and/or Skanect
2. Kinect (1) for Windows using Skanect
3. ASUS Xtion Pro Live (still available) using Skanect
4. Carmine 1.09 sensor using Skanect (can't find these on eBay or anywhere)
5. Photoscan with a lot of patience ;-)


Thanks again very much for the detailed response and I will come back with my experience in case it is of interest as I go through this. :-)

erikals
12-28-2013, 03:46 AM
5. Photoscan with a lot of patience ;-)

not so sure, see post #18

adk
12-28-2013, 04:20 AM
http://www.cgfeedback.com/cgfeedback/archive/index.php/t-468.html

It's an old post but seems to have a bit of info and some examples of what's achievable in Photoscan. Granted it's with synchronized cameras though, so I'm not sure how helpful this will be. Also, I'm not sure how forgiving the software would be with subjects that move, albeit slightly. All my tests were with still models.

erikals
12-28-2013, 04:24 AM
aha!! look at that... > greenscreen!  http://erikalstad.com/backup/misc.php_files/047.gif

thanks for posting! http://erikalstad.com/backup/misc.php_files/smile.gif


result with PhotoScan using only 10 pictures >

118961

adk
12-28-2013, 05:23 AM
aha!! look at that... > greenscreen!  http://erikalstad.com/backup/misc.php_files/047.gif

thanks for posting! http://erikalstad.com/backup/misc.php_files/smile.gif

No problem mate... assuming you mean me that is :)

This recent thread might also help... again you'd need to experiment.
http://www.agisoft.ru/forum/index.php?topic=1315.0
Seems to be a prerequisite as a matter of fact.
The green screen just automates the masking process which as others have pointed out can be really laborious. I'd definitely be interested in how you go with all this sami.

Thomas Leitner
12-28-2013, 05:26 AM
Hi,
we used the Kinect for Windows sensor for scanning people a few weeks ago.
We have ReconstructMe and Artec Studio, and I have tested Skanect and Scenect.
I see no way to get clean models with the Kinect sensor without manual work.
Sometimes the person you are scanning moves a little bit, and some surfaces cannot be detected by the sensor.
In my experience the color map you can get from the Kinect sensor isn't production ready (but I don't know what you need). We used it as a reference for a UV map.
So I don't think that you can use any Kinect-based scanning solution within your guidelines.

ciao
Thomas

p.s.: keep in mind that there are two versions of ReconstructMe (normal and SDK) with different features.

m.d.
12-28-2013, 09:11 AM
I returned my ReconstructMe license...
I could not get a single good scan out of it.....time consuming with a lot of manual effort...

what most people don't realize is that the default Kinect will not focus closer than about 3 ft....the detail it can get at that range is very low
the PrimeSense Carmine with glasses could focus much closer (they put some optics in front of the existing lenses) but AFAIK the Kinect cannot calibrate itself for this the way the Carmine could

also, although the Kinect (and the rest) say they are recording a 640x480 depth image....it is actually upscaled from 320x240.....even the newest Kinect 2 will only have 512x424 depth

trying to scan something with that low a resolution is the real problem here...the Kinect 2 will be quite a bit better...but that's still a very small image


I have got much better results using Autodesk 123D Catch (or whatever they are calling it now)
it beat ReconstructMe in terms of quality and included UV maps

Greenlaw
12-28-2013, 01:21 PM
looking at the video tutorial over again, a greenscreen should make the masking process fairly fast... \:]

Whenever I find the time, I need to try that. We still have the green screen set up in our garage for the live portion of 'B2'--unfortunately, some clutter has crept in there since last summer and it's not really usable at the moment. Ugh...too many projects, not enough time.

G.

erikals
12-28-2013, 01:49 PM
Ugh...too many projects, not enough time...

there is always... >

118971 http://erikalstad.com/backup/misc.php_files/047.gif

Greenlaw
12-28-2013, 02:53 PM
Some info regarding current devices.

If you have Kinect for Windows, you can switch it to Near mode, which reduces the range to about 15 inches to 9 feet. (Normal or Wide mode is about 3 ft to 15 ft.) Near mode is typically used for face capture, and it's usually better for scanning objects and people (not so good for full body motion capture though.) Kinect for XBox does not support this feature; it's only available on Kinect for Windows and the Asus Xtion. Carmine 1.09 is a near-mode-only device.

Asus's Xtion is not the same as Carmine 1.08 and 1.09--it's essentially a Kinect for Windows clone with a few interesting differences. Like K4W, it supports near mode. The data is slightly cleaner than Kinect for Windows and technically it can do 60 fps at half res. Unfortunately, according to the devs at iPi Soft, the low res mode severely compromises quality, so they disabled this mode in their software. I don't think that helps for 3D scanning anyway. It's smaller than Kinect and uses less power, so that makes it more efficient to use with mobile computers. The Xtion uses third party drivers which are reportedly not as good as the Microsoft drivers. Because the Xtion is much smaller than Kinect, it lacks the remote controlled pitch motor. The pitch motor is not at all necessary for 3D scanning but it's useful when setting up multiple Kinects for body capture. But I digress...

Carmine 1.08 and 1.09 were developed after the Xtion and these devices are more advanced than Kinect for Windows (version 1, I mean) and the Xtion. Unlike K4W and Xtion, Carmine did not support both near and wide mode in a single device--1.08 was for wide scanning (i.e., full body) and 1.09 was for near scanning (face capture, 3D object scanning.) The quality of the data was significantly higher than either Xtion or Kinect. Unfortunately, as mentioned already, you cannot buy either version of Carmine anymore--they went off the market a few weeks ago after Apple purchased PrimeSense. Carmine was also a bit more expensive if you needed both modes--about double the cost of a single Kinect for Windows, which had both.

At the moment, the best device is probably Kinect for Windows because it's immediately available, and the drivers are solid, still supported, and developed by Microsoft. K4W is beginning to show its age a little but the controls and features are more advanced than Kinect for XBox (original). (Plus, Kinect for XBox has never been officially supported by MS for use with PCs, so the thin support it currently enjoys may disappear with a future driver update. You can buy one cheap now but I don't recommend it.)

I think if you don't need a scanning device immediately, it would be good to wait for Kinect 2 for Windows--developers are reporting that the depth data is significantly cleaner and more detailed. The jury's still out on multiple devices though--for example, iPi Mocap Studio currently captures motion data from 3 Kinect for Windows sensors simultaneously, but last I heard they are still investigating this possibility with Kinect 2 for Windows--but that's probably more important for mocap use.

I watched a review last night that compared different software, specifically ReconstructMe, Skanect, and Kinect Fusion. The review was done before ReMe got color support, so there was no color comparison in the video. The review did mention the mesh resolution for each program. Kinect Fusion apparently has the highest resolution, with Skanect second and ReconstructMe third--HOWEVER, they stated that the Skanect mesh is much softer with very little detail compared to ReconstructMe. My guess is that the extra resolution is used strictly for the color data and not for the modeled data. ReMe, on the other hand, produces much higher quality geometry. Kinect Fusion apparently produces geometry that's 10x denser than ReMe's. I don't know much about Kinect Fusion but I believe it's still experimental and I don't think it supports color. Kinect Fusion is also significantly slower but that's probably because of the extra amount of data it's capturing.

I'd like to see a comparison of the current ReMe and Skanect. I suspect the mesh/color quality is higher with ReMe but, as mentioned previously, I have no experience with the color version yet. Also, I believe both programs only do vertex color, not UV'ed textures, so I don't think Skanect has any advantage over ReMe other than price. (Which, admittedly, is considerably cheaper.)

Here's an interesting video of ReMe 2.0 with color in action:

http://www.youtube.com/watch?v=TzPLaNUQOzE

To me, the resolution (color or mesh) isn't incredibly high but it's pretty nice for something that works so quickly. Based on what I've seen from Skanect, the ReMe data looks better to me. If Skanect has a demo (I think they do), I'll try to test both of these this week using the same subject.

Sorry, I don't have any definitive answers yet--just not enough time to try all this cool stuff. :)

G.

Greenlaw
12-28-2013, 04:13 PM
there is always... >

118971 http://erikalstad.com/backup/misc.php_files/047.gif

Oh, no...not for me. I think I will burn up in flames. :)

G.

sami
12-30-2013, 10:32 PM
No problem mate... assuming you mean me that is :)

This recent thread might also help... again you'd need to experiment.
http://www.agisoft.ru/forum/index.php?topic=1315.0
Seems to be a prerequisite as a matter of fact.
The green screen just automates the masking process which as others have pointed out can be really laborious. I'd definitely be interested in how you go with all this sami.

Thanks for the forum links adk, they were interesting and it was good to see how people worked through their issues. I'll keep posted here how I get on, but it will likely be a few weeks as I experiment in off time...



Hi,
we used the Kinect for Windows sensor for scanning people a few weeks ago.
We have ReconstructMe and Artec Studio, and I have tested Skanect and Scenect.
I see no way to get clean models with the Kinect sensor without manual work.
Sometimes the person you are scanning moves a little bit, and some surfaces cannot be detected by the sensor.
In my experience the color map you can get from the Kinect sensor isn't production ready (but I don't know what you need). We used it as a reference for a UV map.
So I don't think that you can use any Kinect-based scanning solution within your guidelines.

ciao
Thomas

p.s.: keep in mind that there are two versions of ReconstructMe (normal and SDK) with different features.

Thanks Thomas, Artec scanners are too expensive for this project for me, and good to know your experience. I suspect, as you say, the Kinect texture will be too lowres/blurry, but I might try it anyway just to see.


I returned my ReconstructMe license...
I could not get a single good scan out of it.....time consuming with a lot of manual effort...

what most people don't realize is that the default Kinect will not focus closer than about 3 ft....the detail it can get at that range is very low
the PrimeSense Carmine with glasses could focus much closer (they put some optics in front of the existing lenses) but AFAIK the Kinect cannot calibrate itself for this the way the Carmine could

also, although the Kinect (and the rest) say they are recording a 640x480 depth image....it is actually upscaled from 320x240.....even the newest Kinect 2 will only have 512x424 depth

trying to scan something with that low a resolution is the real problem here...the Kinect 2 will be quite a bit better...but that's still a very small image


I have got much better results using Autodesk 123D Catch (or whatever they are calling it now)
it beat ReconstructMe in terms of quality and included UV maps

Thanks, I think your comments, my suspicions and what I've read have all conspired to put me off ReconstructMe. The software seems far too manual, and as you say, would require quite a bit of work, so I'll probably pass on ReMe for now...




there is always... >

118971 http://erikalstad.com/backup/misc.php_files/047.gif

On my phone, that pic looked like a robot peeking up, I didn't notice the Red Bull logo, but then again, I still have a regular heartbeat - that sh*t is cray! :screwy: lol. Btw thank you for posting those pics from Photoscan, I'll have a play with it soon.


Some info regarding current devices.

If you have Kinect for Windows, you can switch it to Near mode, which reduces the range to about 15 inches to 9 feet. (Normal or Wide mode is about 3 ft to 15 ft.) Near mode is typically used for face capture, and it's usually better for scanning objects and people (not so good for full body motion capture though.) Kinect for XBox does not support this feature; it's only available on Kinect for Windows and the Asus Xtion. Carmine 1.09 is a near-mode-only device.

...

Carmine 1.08 and 1.09 were developed after the Xtion and these devices are more advanced than Kinect for Windows (version 1, I mean) and the Xtion. Unlike K4W and Xtion, Carmine did not support both near and wide mode in a single device--1.08 was for wide scanning (i.e., full body) and 1.09 was for near scanning (face capture, 3D object scanning.) The quality of the data was significantly higher than either Xtion or Kinect. Unfortunately, as mentioned already, you cannot buy either version of Carmine anymore--they went off the market a few weeks ago after Apple purchased PrimeSense. Carmine was also a bit more expensive if you needed both modes--about double the cost of a single Kinect for Windows, which had both.
...

Kinect Fusion apparently produces geometry that's 10x denser than ReMe's.
...

Thanks very much for the details Greenlaw, I've actually got a good deal of experience hacking/programming the Kinect for realtime avatar control some years back (before MS released their SDK) and know a lot of the limitations of the skeletal tracking, but have no experience with using it for 3D scanning. I used Xbox Kinects at the time, but I just ordered one of the Windows ones now with the near mode, which I suspect should work better for scanning than the Xbox ones. I guess I'll find out soon. I just really don't want to be programming for this since I have so little time with it. I would consider it if I could knock out the script in a weekend, but the SDKs sound fiddly. And Kinect 2 sounds too far off. Next year this project sounds easier, perhaps I should just build a time machine ;)

I just heard back from Cubify by email, and they said they have a 3 week lead time before shipping for their Sense scanners, so I may revisit them later when I've seen more closeup images of body scans from one of them. Hopefully in the next few weeks there will be more info on it. At this stage, I'm gonna wait on the Sense until I see more.


The Xtions would look cool if I were going to use a Windows tablet as a totally portable scanning solution, but I can only find them from Hong Kong and it would take a while to get them too, so I might pass on them for now.

I think I'm going to start with Kinect for Windows & Skanect, plus Photoscan with a green screen, for now. Since I have to lug it around to different locations, this is the cheap portable green screen that I'll probably use for this project I think:
http://www.amazon.com/gp/product/B002S9KDKS/ref=ox_sc_sfl_title_1?ie=UTF8&psc=1&smid=A17W6NLJ3OBMCK


Also one more question, for either Photoscan or Kinect, do you guys think it's better to circle the person when scanning or put them on a turntable like this:
http://cheesycam.com/motorized-lazy-susan-heavy-duty/

I also think I may need some poles/arm handles so that they can have their arms totally still and in the same place while scanning. Do you think it's necessary? With that I'm worried about occlusion and having to make sure they are green too in order to mask out the poles...?

Thanks everybody for your input! :)

erikals
12-30-2013, 10:43 PM
Also one more question, for either Photoscan or Kinect, do you guys think it's better to circle the person when scanning or put them on a turntable like this:
http://cheesycam.com/motorized-lazy-susan-heavy-duty/

actually, i think it should work just as well without the turntable technique.
can't guarantee it, but as far as i know both Photoscan and ReMe use an algorithm that solves that kind of perspective change automatically ... Photoscan does not like wide or fisheye lenses though

Greenlaw
12-31-2013, 12:48 AM
Also one more question, for either Photoscan or Kinect, do you guys think it's better to circle the person when scanning or put them on a turntable like this:

This is the rig I built a while back when I was planning to use ReconstructMe and Photoscan to create digital stunt doubles on production.

119015

I put off that method in favor of simpler techniques because our schedule didn't allow for a lot of experimenting. Anyway, the idea was that the subject would stand still, resting hands on the yellow balls to keep the arms steadily in position. Then, I would walk around the subject with the Kinect device to scan the body. This would have been done in ReMe. For the head, I would have the subject sit on an office chair which would be rotated while two HD cameras recorded video from different height-angle levels. Afterwards, the frames would be pulled from the video and the background would either be subtracted using a clean plate or, I guess, I would have just recorded on our green screen stage in the garage. The images would then be imported to Photoscan for reconstruction.

The rig is pretty simple. I used two inexpensive lightstands (about $12 each), the same type I use for motion capture (Kinects or PS3 Eye cameras) and for supporting large softboxes--these stands are really quite sturdy and they're a lot cheaper than a camera tripod with the same stability and sturdiness. The yellow balls are practice softballs purchased from a sporting goods store--the balls are made of a very dense foam rubber, so they are light but firm and durable. I drilled a hole in each ball and inserted a threaded rod coupler which was firmly glued in place. The coupler has a 1/4" screw thread so it fits snugly on the head of the stand like a camera on a tripod. Total cost was probably around $16 per stand, and I still had an extra softball to toss around (they came in a pack of three.) :)

I know 1k0 had a more sophisticated setup--the subject stood with a similar rig but on a turntable. The advantage with the turntable is that you can subtract the background much more easily using a clean plate (as in my head scanning scenario.) Also, if you're using a photo system, as opposed to a depth based system, you can set up the lights to cast an even light on the subject. If you're walking around the subject, the lighting is dependent on the room lighting which may not be ideal from all angles. In my case, I wasn't too concerned about that since my digital doubles only needed to resemble the actors enough for wide action shots.

Because I was running out of time, I wound up just projection painting the actor's face and clothing on a generic model that was tweaked to somewhat resemble the actor. I would have loved to try the above method because, even with the lower resolution, I still think the results would have been more accurate. Someday, I'll get back to setting up a workflow for that but right now I have too many other projects to finish first. Sigh!

Hope this info helps.

G.

Greenlaw
12-31-2013, 01:05 AM
FYI, this is the turntable I have:

Turntable - Heavy Duty Swivel With Steel Ball Bearings for Indoor/Outdoor Use (http://www.amazon.com/LapWorks-Swivel-Bearings-Outdoor-Monitors/dp/B00523MJMW/ref=cm_cr_pr_product_top)

It is not motorized, which would have been preferable, but it supports a lot of weight--200 lbs! I'm currently using it for some live action footage for 'B2'. To rotate it, I have an assistant in a green lycra suit manually pushing it off to the side while the actor stands on it.

This works okay for 90 degree turns (what I needed) but obviously it's not meant for smooth 360 degree rotations and the platform isn't big enough for my 3D scanning 'pose' rig. However, because it supports a lot of weight, I thought I might be able to place a platform on it large enough to accommodate the rig and subject. But, like I said, for me, that's a project for another day.

G.

Thomas Leitner
12-31-2013, 05:59 AM
...
Also one more question, for either Photoscan or Kinect, do you guys think it's better to circle the person when scanning or put them on a turntable like this:
http://cheesycam.com/motorized-lazy-susan-heavy-duty/

I also think I may need some poles/arm handles so that they can have their arms totally still and in the same place while scanning. Do you think it's necessary? With that I'm worried about occlusion and having to make sure they are green too in order to mask out the poles...?
...

When we scanned with the Kinect we used a turntable. Since the Kinect sensor has a terrible cable connection it's difficult to walk around big objects (in any case you need an assistant) even with a laptop. The downside is that the scan software is confused by the static background, and that it is even more difficult to stand still on a turning stand. We didn't use realtime fusion (like ReMe or Skanect) so we were able to clean up the background before stitching the object. I don't think that ReMe or Skanect can handle a changing background (the only thing you can do is reduce the scan depth to exclude the background from scanning).
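The scan-depth trick mentioned above is easy to prototype if you're already reading raw Kinect depth frames: clip everything outside a near/far band before handing the frame to reconstruction. A tiny NumPy illustration (the band values and the `clip_scan_volume` name are arbitrary examples, not calibrated figures or a real SDK call):

```python
import numpy as np

def clip_scan_volume(depth_mm, near=600, far=1500):
    """Zero out depth samples outside [near, far] millimetres.
    depth_mm: HxW uint16 Kinect-style depth frame (0 = no reading).
    Returns a copy; zeroed samples count as 'no data' downstream."""
    d = depth_mm.copy()
    d[(d < near) | (d > far)] = 0
    return d
```

With a turntable setup, a band like this drops the static room behind the subject so the fusion step never sees the conflicting background geometry.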

For a Photoscan solution with green screen it's irrelevant if your background changes (you mask it anyway), but with a turntable you need the green screen only on one side. The downside is that the light may change when turning the person, which may lead to poorer scan results.

But again: the cleanup work on Kinect-scanned objects will be too much to stay within your guidelines.
Here is an early state of one of our scans (all unnecessary crap deleted and small holes filled) but you can see the big holes that come from blind angles during scanning (and that is not our worst scan).

119022

Filling the holes isn't the only downside; you also have no color information for these spots.

I would try it with photo scan and greenscreen...

ciao
Thomas

p.s.: here is a frame from the rendered animation:

119023

erikals
12-31-2013, 07:42 AM
thanks again to both of you for the info! http://erikalstad.com/backup/misc.php_files/smile.gif

Thomas, that still looks great! http://erikalstad.com/backup/misc.php_files/king.gif

sami
01-03-2014, 02:41 PM
actually, i think it should work just as well without the turntable technique.
can't guarantee it, but as far as i know both Photoscan and ReMe use an algorithm that solves that kind of perspective change automatically ... Photoscan does not like wide or fisheye lenses though

Hi erikals, thanks, but the Photoscan manual says it prefers wide angle lenses to telephoto/zoom ones. No fisheye makes sense though.

I'm still on holidays away from my gear, so all I have with me is a 5 megapixel smartphone. I downloaded Photoscan and tried 3 subjects: a small wooden rocking horse, a 3 ft tall red/ebony statue (that was probably too shiny), and a person seated on a chair.

I could not get anything at all usable from any of them no matter how many pictures I took. The rocking horse got the patterned rug it was sitting on very well, plus a little of its legs, but nothing else. The shinier statue got nothing, and the person was the best, but completely unusable because of a very, very bumpy surface (I assume because of the relatively low res 5 MP sensor and the relatively noisy image).

I will try again when I get back to my studio with a 12-14 MP Sanyo HD2000 and a Canon 5D Mk II, and I may check out a new Sony RX100 Mk II; anyone have any suggestions for minimum specs for the best results?



FYI, this is the turntable I have:

Turntable - Heavy Duty Swivel With Steel Ball Bearings for Indoor/Outdoor Use (http://www.amazon.com/LapWorks-Swivel-Bearings-Outdoor-Monitors/dp/B00523MJMW/ref=cm_cr_pr_product_top)

It is not motorized, which would have been preferable, but it supports a lot of weight--200 lbs! I'm currently using it for some live action footage for 'B2'. To rotate it, I have an assistant in a green lycra suit manually pushing it off to the side while the actor stands on it.

This works okay for 90 degree turns (what I needed) but obviously it's not meant for smooth 360 degree rotations and the platform isn't big enough for my 3D scanning 'pose' rig. However, because it supports a lot of weight, I thought I might be able to place a platform on it large enough to accommodate the rig and subject. But, like I said, for me, that's a project for another day.

G.

Thanks for the photo; the stands are a good idea, and many of the stands I saw have balls for handles as you did, but it bugs me that the underside of the hands won't get scanned. I was wondering if a peg based system between the fingers - something like this - would be better, so I can fully scan the hands too?
http://www.biovericom.com/biotech/hand_geometry.gif

Also thanks for the link, that heavy duty cheap 15" turntable will be useful for something I'm sure! :)


When we scanned with the Kinect we used a turntable. Since the Kinect sensor has a terrible cable connection it's difficult to walk around big objects (in any case you need an assistant) even with a laptop. The downside is that the scan software is confused by the static background, and that it is even more difficult to stand still on a turning stand. We didn't use realtime fusion (like ReMe or Skanect) so we were able to clean up the background before stitching the object. I don't think that ReMe or Skanect can handle a changing background (the only thing you can do is reduce the scan depth to exclude the background from scanning).

For a Photoscan solution with green screen it's irrelevant if your background changes (you mask it anyway), but with a turntable you need the green screen on only one side. The downside is that the light may change as the person turns, which may lead to poorer scan results.

But again: the cleanup work on Kinect-scanned objects will be too much within your constraints.
Here is an early state of one of our scans (all unnecessary crap deleted and small holes filled), but you can see the big holes that come from blind angles during scanning (and that is not our worst scan).

119022

Having to fill the holes is not the only downside; you also have no color information for those spots.

I would try it with Photoscan and greenscreen...

ciao
Thomas

p.s.: here is a frame from the rendered animation:

119023

Thanks Thomas, I appreciate your post and the results of your scan; it looks good. I just picked up a Kinect for Windows and will try Skanect with it - if it has a threshold for distance, that should be fine, I suspect. I'll let you know how I get on with it and see how it works for me texture-wise. :)

Greenlaw
01-03-2014, 05:03 PM
The reason for the stands with the balls is to help the subject stand as perfectly still as possible, which is critical for a successful 3D scan. Losing the hands is a small price to pay for a good body scan.

For complete hands, I would just merge an existing 'generic' hand to the body mesh and roll the texture color--this way, you can use geometry that's already been cleaned up and prepared for animation.

If it's absolutely necessary that you have the subject's real hands in place, you can scan the hands separately and merge them to the mesh. In fact, anything with critically important detail that you're certain will be rendered up close, like the face and hands perhaps, probably should be created in a separate scanning pass for the highest possible resolution and texture quality. Breaking the scan up like that is not an uncommon workflow for 3D scanning. In the long run, you may find that you can get higher quality scans this way.

Just a few things to consider.

G.

Greenlaw
01-03-2014, 05:20 PM
That peg rig is pretty neat. I can see how it can ensure that the front and back photographs line up more precisely, making texturing a lot easier.

That said, if you're in a hurry, I still think it will be easier to re-purpose existing hands (male and female pairs), and then roll the colors in the existing texture maps to match the subject's skin color. 200+ people is an awful lot of hands to create, clean up, weight map and rig individually.

erikals
01-03-2014, 07:54 PM
looks like there will be an upcoming PhotoScan tutorial here >
http://forums.cgsociety.org/showpost.php?p=7727465&postcount=5

Thomas Leitner
01-04-2014, 02:15 AM
....For complete hands, I would just merge an existing 'generic' hand to the body mesh....

That's what we did, too. We made a "standard" glove with different textures for each ski jumper. This is usually faster than cleaning up the scanned hands, even if you don't want to animate the character (in this case you have to remesh your whole scan anyway).


....In fact, anything with critically important detail that you're certain will be rendered up close, like the face and hands perhaps, probably should be created in a separate scanning pass for the highest possible resolution and texture quality. Breaking the scan up like that is not an uncommon workflow for 3D scanning. In the long run, you may find that you can get higher quality scans this way....

And this too. We scanned the head of each ski jumper separately. This way it's easier for the person to stand (or in this case sit) still. We used a swivel chair (you can reach the top of the head more easily because the person is lower than you). Since we modelled the helmets and ski goggles, we didn't need perfect scanning of the hair and back of the head.


ciao
Thomas

sami
01-09-2014, 02:55 AM
Here's my 2nd test using Photoscan (demo).

I'm quite happy with the results, but it requires a very grunty computer. I tried Photoscan out on a small statue since I didn't have a person handy when testing, and ran it on one of our less powerful machines (since we don't want to tie up the newer ones for this mostly donated-time project).

- subject was a 32cm tall statue
- normal indoor lighting - no keys, flashes, or greenscreen
- used a cheap Sanyo HD2000 12 megapixel camera, 4000x3000
- JPG (no RAW) at the best quality the camera can do, 38mm lens
- 68 handheld shots up and down and around the item, plus a few closeups (between the knees), with overlapping shots leading into the closeups
- all 68 shots successfully aligned in Photoscan
- machine was a Mac Pro 8 core - dual 2.8GHz Xeon, 24GB RAM, NVidia GTX 570, running Win 7 x64
- processed on high, 100,000 points, made mesh to 1.7 million polys, 4096px texture
- Processing time:
Align the 68 photos: ~ 39 min
Build the dense cloud: ~ 1hr 11 min
Build Mesh: ~ 16 min
Build Texture: ~ 11 min
TOTAL: ~ 2hr 17 min
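Just to sanity-check myself, those stage timings do add up to the stated total; a quick Python check:

```python
# Stage timings from the Photoscan run above, in minutes.
stages = {"align photos": 39, "dense cloud": 71,
          "build mesh": 16, "build texture": 11}
total = sum(stages.values())
print(f"total: {total // 60} hr {total % 60} min")  # total: 2 hr 17 min
```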

Bit of a pain in the butt time-wise, but otherwise very easy, and there were no holes, or only small ones that were easily filled and inpainted automatically by Photoscan. Not sure about the render quality since I have the demo and haven't bought it yet (I will soon), so I can't currently export.

But despite the post-processing time, I found the process rather easy. In this case, I did not mask or use greenscreen, since the distance to other objects seemed enough to "crop" the object tracking. I also did not calibrate the camera or use a ruler marker or anything (since I don't know how to calibrate for Photoscan yet). And although I didn't use a proper DSLR like a Canon 5D Mk II (it was indisposed at the time), nor have I yet made this project an excuse to get a 20.2 MP Sony RX100 Mk II, it was obvious that even a 12 MP point & shoot Sanyo works quite adequately, while a 5 MP smartphone just generated garbage because of the noise in the images.

Here are some screenshots of the results. If you've had a chance to play with Photoscan more, please let me know if this is reasonable and what you'd expect for my circumstances here. I'll try with actual people soon as well as with proper lighting, and I need to double check camera settings and try a better camera. I expect the processing time to be much worse if I up the Megapixels.
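As a back-of-the-envelope guess at that, here's a crude projection - it assumes processing time scales roughly linearly with pixel count, which is an assumption on my part, not a benchmark:

```python
# Crude projection: assumes Photoscan processing time scales roughly
# linearly with pixel count (an assumption, not a measured result).
current_mp, current_min = 12.0, 137    # the 12 MP run above took ~2hr 17min
target_mp = 20.2                       # e.g. a Sony RX100 II
est_min = current_min * target_mp / current_mp
print(f"~{est_min:.0f} min (~{est_min / 60:.1f} hr)")  # ~231 min (~3.8 hr)
```

The dense cloud stage in particular may scale worse than linearly, so treat this as a floor rather than an estimate.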

Pictures here:
http://imgur.com/a/IiHaJ#0

p.s. 3D navigation in Photoscan is painful - it would be helped by the Pro version allowing me to set up a coordinate axis - but $3K more, when that's the only other Pro feature I need, makes it a tough sell...

sami
01-09-2014, 03:43 AM
Same object, this time scanned with a Kinect for Windows scanner and Skanect (demo)

Clearly I am not an expert with either piece of software, having not done any tutorials or used either more than a couple of times, but I can say that out of the box, for a noob, Photoscan wins. However, maybe when I figure out the scanning procedure better with the Kinect I will have more luck. The Skanect software is slick but basic: very easy to use, yet a bit tricky for a new user to get right.

Here are the unfortunate pics of my first Skanect scan:
http://imgur.com/a/EMVEx#0

I handheld scanned it, and I'm thinking now that I definitely need to build a person-sized turntable rig and also a Kinect jib as shown in the Winter 2014 Make magazine article (btw, what an excellent review of so many 3D printers in one spot - all using the same object and review specs! Not that I need a 3D printer for this project, but I can't recommend this issue highly enough - plus it has a short article on proper Kinect body scanning): http://makezine.com/volume/guide-to-3d-printing-2014/

Problems I encountered (despite slow scanning with a steady hand):
- Kept getting errors asking me to re-align the object to continue
- Despite having better than average spatial abilities and body memory, this was very tricky, even with the Skanect overlay to re-register the object in order to continue scanning once it lost tracking
- Eventually I gave up and thought I'd try again later with an assistant or a turntable or something

In terms of time to generate the model, Skanect wins by a mile (~5 min). But the quality is much lower res, and it's hard to compare until I get a successful scan with Skanect. I'll keep playing and eventually try that multiple-scan combining trick to see if I can get higher resolution out of the Kinect for Windows scanner... I'm not too hopeful.

What do you guys think? For 200 people to scan, at this point it seems "safer" to just take photos and process them slowly as time permits using Photoscan. Processing time is painful, though it does not require much manual work, except that it needs handholding to monitor the results in Photoscan - though it has a batch mode I did not try. Skanect is far trickier to get a good scan with. It may be that I am much more adept with a camera than I am with a Kinect in my hand. I'll see if a turntable and jib rig for the Kinect makes the scanning process easier...
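On the batch side, at minimum the photo sets for 200 subjects can be staged by script so each folder can be fed to a batch job in turn. A minimal sketch - the folder layout and naming here are my own invention, not anything Photoscan requires:

```python
import shutil
from pathlib import Path

def stage_subject(photo_paths, batch_root, subject_id):
    """Copy one subject's photo set into its own numbered folder so a
    batch job (Photoscan's or anything else) can walk folder by folder."""
    dest = Path(batch_root) / f"subject_{subject_id:03d}"
    dest.mkdir(parents=True, exist_ok=True)
    for photo in photo_paths:
        # copy2 preserves timestamps, which helps match photos to sessions
        shutil.copy2(photo, dest / Path(photo).name)
    return dest

# e.g. stage_subject(["IMG_0001.jpg", "IMG_0002.jpg"], "scans", 7)
# stages the photos into scans/subject_007/
```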

Btw, thanks Thomas for your notes on Skanect. :) But we won't be scanning heads or hands separately, as there is no time to stitch for all these subjects we need to scan, so I'll aim to scan them complete in one session, with all their parts as good as possible...

Thomas Leitner
01-09-2014, 05:31 AM
....
Btw, thanks Thomas for your notes on Skanect. :) But we won't be scanning heads or hands separately, as there is no time to stitch for all these subjects we need to scan, so I'll aim to scan them complete in one session, with all their parts as good as possible...

Hi,
we didn't use Skanect for this, just the Kinect sensor. However, my description should show the complicated workflow of a Kinect-based scan. Keep in mind that scanning a person is more difficult than an inanimate object, and a full body scan with the Kinect sensor can take 60-90 seconds. And you always get some hidden areas, badly stitched geometry and lost color information... and much manual post work.

As I mentioned before: I would go the Photoscan way.

please keep us up to date
ciao
Thomas

sami
01-09-2014, 05:05 PM
Hi,
we didn't use Skanect for this, just the Kinect sensor. However, my description should show the complicated workflow of a Kinect-based scan. Keep in mind that scanning a person is more difficult than an inanimate object, and a full body scan with the Kinect sensor can take 60-90 seconds. And you always get some hidden areas, badly stitched geometry and lost color information... and much manual post work.

As I mentioned before: I would go the Photoscan way.

please keep us up to date
ciao
Thomas

Yes, sorry, I misread your post. I agree: after just my first few scans, Kinect scanning is a fiddly toy in my opinion (though of course I am far from an expert with it; it seems the Kinect is better at skeletal tracking than object tracking). It is awesome that the model is built in realtime with the Kinect, and a shame that the processing time is so long with Photoscan, but the results are significantly superior.

With Photoscan, in my case, the photography was simple - I took 68 handheld shots in a couple of minutes - and the results were more than good enough. Processing time is a pain, though. This guy captures a body pose with Photoscan using 80 synced Canon DSLRs (ouch, that's like $250,000!!), but clearly that's not feasible for me and is in a different league of body scanning. Sure is cool to look at, though: http://www.ten24.info/?p=1063

erikals
01-09-2014, 08:51 PM
there is the easier Scanner Killer; you might get away with mirroring the face, then tweak.
http://www.cgfeedback.com/cgfeedback/showthread.php?t=1488

costs more money though... at least $600...

unsure how Photo Sculpt compares.... (€140)
not so good maybe...
http://photosculpt.net/gallery/portraits