Welcome

Coordinator
Aug 19, 2013 at 4:19 AM
Edited Aug 19, 2013 at 4:20 AM
Thanks for trying out Face Fusion.

Please let me know what you think.
Sep 11, 2013 at 4:40 PM
Hi Joshua Blake, really great job! I'm a Computer Science student at the University of Verona (Italy), and these past months I've been working hard with the Kinect for Windows SDK for my final degree project.
My application tries to detect the face (not the whole head) of a user in front of the Kinect and export the corresponding mesh, using the FaceTracking library and the Fusion library. But I still have some problems exporting only the face, because I can't select the portion of the volume to reconstruct.
I see in your source code that this application does a lot of processing, but I still can't understand where the cropping of the volume is done. Is it done by cropping the depth image (with the Face Tracking rect)? Or is it done directly in world/camera space (maybe with a particular translate/scale matrix)?
I've seen that you use two skeleton joints to select the head, but I don't understand where you create a cube to isolate the head; I only see the _volumeCenter calculation.
Sorry for my bad English... I hope I've managed to explain myself.
Thanks for the reply and the help!

Greetings,

Mattia Thiella
Coordinator
Sep 23, 2013 at 8:05 AM
Hi,

I do have some remnants of face tracking in there, but I ended up not using the face tracking data since it loses track when you turn your head too far.

You are almost there -- the two skeleton joints are used to choose the center of the reconstruction volume. The _volumeCenter field is used in the ResetFusion() method, which passes it to FusionManager.ResetReconstruction(_volumeCenter). In that method, the volume center is used to translate the worldToVolumeTransform, which effectively moves the reconstruction volume around in world space relative to the Kinect.
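
In code, the translation looks roughly like this (a sketch following the Kinect Fusion toolkit samples; the field names and exact sign conventions in FusionManager may differ):

    // Start from the default transform created with the volume, then shift it
    // so the volume is centered on the head instead of the sensor origin.
    // Matrix4 is row-major; M41/M42/M43 hold the translation, in voxel units.
    Matrix4 worldToVolumeTransform = _defaultWorldToVolumeTransform;
    worldToVolumeTransform.M41 += volumeCenter.X * VoxelsPerMeter;
    worldToVolumeTransform.M42 += volumeCenter.Y * VoxelsPerMeter;
    worldToVolumeTransform.M43 -= volumeCenter.Z * VoxelsPerMeter;
    _volume.ResetReconstruction(_worldToCameraTransform, worldToVolumeTransform);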

The size of the reconstruction volume doesn't change; it was simply set with enough margin around a typical head size. As long as skeleton tracking is working well, then whenever reconstruction is reset (on the first "Fusion Start" command or any "Fusion Reset" command), Fusion will start tracking with the reconstruction volume containing just the head area. From then on, as the user moves his or her head, Fusion tracks those position changes and integrates the new data accordingly. Keep in mind that since the volume doesn't contain any background (the room, furniture, etc.), it can only track the movement of the head. Mathematically it is exactly the same as if you moved the Kinect around the head while the head stayed still.
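
The center itself comes from the two joints, along these lines (illustrative only -- the joint pair and the weighting here are my shorthand, not the exact code):

    // Pick a point on the shoulder-to-head line, pushed slightly past the head
    // joint so the volume covers the whole skull rather than just the face.
    SkeletonPoint head = skeleton.Joints[JointType.Head].Position;
    SkeletonPoint shoulders = skeleton.Joints[JointType.ShoulderCenter].Position;
    float overshoot = 0.1f; // tune: how far past the head joint to center
    _volumeCenter.X = head.X + (head.X - shoulders.X) * overshoot;
    _volumeCenter.Y = head.Y + (head.Y - shoulders.Y) * overshoot;
    _volumeCenter.Z = head.Z + (head.Z - shoulders.Z) * overshoot;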

Hope that helps! Let me know if you have questions on anything else.

Thanks,
Josh
Nov 13, 2013 at 6:38 AM
Hi Joshua Blake! I'm a French computer science student, and I'm using the Kinect (especially Fusion) for my final degree project too. Your sample application has been extremely useful in helping me understand the basics of Fusion modeling. Thanks a lot!

I have played with it a bit, and have successfully implemented color reconstruction and color meshes in it to create .ply 3D color models instead of colorless .obj models. The results are available here: https://github.com/kit-cat/ColorFaceFusion/. Any feedback on your part would be greatly appreciated!
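
For the curious, the gist of the change: the SDK 1.8 ColorReconstruction produces a ColorMesh with per-vertex colors, which the .ply format can store but .obj can't. A minimal sketch of the export idea (vertices and colors only; face indices omitted, and the real code in the repo differs):

    // Requires System.IO and Microsoft.Kinect.Toolkit.Fusion (SDK 1.8).
    // ColorMesh carries positions plus one packed 32-bit color per vertex.
    ColorMesh mesh = colorVolume.CalculateMesh(1);
    var verts = mesh.GetVertices();
    var colors = mesh.GetColors();
    using (var w = new StreamWriter("face.ply"))
    {
        w.WriteLine("ply");
        w.WriteLine("format ascii 1.0");
        w.WriteLine("element vertex " + verts.Count);
        w.WriteLine("property float x\nproperty float y\nproperty float z");
        w.WriteLine("property uchar red\nproperty uchar green\nproperty uchar blue");
        w.WriteLine("end_header");
        for (int i = 0; i < verts.Count; i++)
        {
            int c = colors[i]; // check the byte order against your output
            w.WriteLine("{0} {1} {2} {3} {4} {5}",
                verts[i].X, verts[i].Y, verts[i].Z,
                (c >> 16) & 255, (c >> 8) & 255, c & 255);
        }
    }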

Best regards,
Camille Fabreguettes
Coordinator
Nov 17, 2013 at 2:19 AM
Camille,

That's great! I took a look and it seems to work. Kudos also for releasing your source code under the MIT license. The color looks right, although I didn't have good lighting to test with. Adding an option (and voice command) to display the color reconstruction in the app would be a good idea.

Would you like to work with me to integrate your changes into the main project? I'm using Git here on CodePlex and can integrate your changes. I had also considered posting FaceFusion on GitHub, but CodePlex has a better project page system.

Thanks,
Josh
Dec 6, 2013 at 9:45 AM
Joshua,

I'd be delighted to help you integrate these changes into the main project! Should I add a voice command and option and make a CodePlex pull request when I'm ready?
If you need anything, my email is camille(dot)fabreguettes(at)gmail(dot)com.

Thanks!
Camille
Jan 20, 2014 at 3:46 PM
Edited Aug 21, 2014 at 12:44 AM
Hi Josh,

I want to thank you for your work with Fusion. You're making lots of people around the world take a look at Kinect development and it's very cool to see what everyone is achieving.

Diego Carletti
Sep 6, 2014 at 1:58 PM
Edited Sep 6, 2014 at 2:49 PM
Hi Joshua,
I have some trouble trying to figure out which part of the code is responsible for setting the reconstruction volume...
I've read your answer to reekoz here, and I was wondering: which method/property should be used to increase the reconstruction volume? (That's assuming I get _volumeCenter to calculate correctly -- the Spine or HipCenter joint will hopefully get its position right once I change the tracking mode from Seated to Default.) As I understand it, you've hard-coded the volume size to allow for a typical head, not a torso or something even bigger. One more question: is the KinectFusionHeadScanning sample in the Kinect SDK v1.8 much different from how FaceFusion works? (I gather it's yours too.) Just changing the JointTypes there doesn't seem to have much of an impact on the scanned area, so maybe there's some more code you could point out that uses face tracking/identification specifically, please?
TIA
Sep 7, 2014 at 12:33 PM
Dimitri, I might be able to help with that.

Face Fusion sets skeleton tracking mode to Seated, so it only tracks the upper-body joints. You have to change the SkeletonStream.TrackingMode to Default to track all joints. Read about it here: http://msdn.microsoft.com/en-us/library/hh973077.aspx

In MainViewModel, method StartKinect (I'm using my own version derived from Face Fusion, so things may have moved around):

    // Track all 20 joints instead of only the 10 upper-body ones
    newSensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Default;

Now, to change the reconstruction volume, you'll probably have to change quite a few things, since Face Fusion uses a fixed-size box. Get into FusionManager and understand it deeply. First you should experiment with the VoxelsPerMeter and VoxelResolutionX/Y/Z values; what you need depends on what you're trying to achieve, but editing those will get you a different-size volume. I don't fully recall what I had to do to change it, but my version resets these values at runtime; they're not static. If you need to do this, you should take a look at ResetReconstruction, since you can't change the volume size once the volume has been created (you'll have to dump the current one and create a new volume to change the size).
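
The volume recreation itself is just a handful of toolkit calls; something like this (illustrative names -- RecreateVolume and _volume aren't from Face Fusion):

    // A new ReconstructionParameters defines both resolution and extent: e.g.
    // 256 voxels/m with a 384^3 grid gives a 1.5 m cube, enough for a torso.
    void RecreateVolume(float voxelsPerMeter, int resX, int resY, int resZ)
    {
        if (_volume != null)
        {
            _volume.Dispose(); // the old volume can't be resized, only replaced
        }
        var parameters = new ReconstructionParameters(voxelsPerMeter, resX, resY, resZ);
        _volume = Reconstruction.FusionCreateReconstruction(
            parameters,
            ReconstructionProcessor.Amp, // run on the GPU via C++ AMP
            -1,                          // auto-select the device
            Matrix4.Identity);           // initial world-to-camera transform
    }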

I found the KinectFusionHeadScanning sample to be quite a mess and I was able to learn much more from Face Fusion. The FaceTracking from the SDK doesn't work well when you have to spin around like you do when using Fusion apps, so I ended up removing it completely from my version.

I'm working with my professors to open up my code properly, and I'll definitely keep this forum posted. I made an app called BodyFusion, which scans a person's whole body, as the final project for my Computer Engineering degree. I hope it'll help more people get into Kinect development.

Keep up the good work, and let me know if you need any more help; I'll do the best I can.

Diego Carletti
Sep 8, 2014 at 9:27 AM
Edited Sep 8, 2014 at 9:30 AM
Hi Diego,
Thanks for your prompt reply; I will take a look at the suggested FusionManager. I'm a bit concerned about your removal of FaceTracking, though, since I may have the same problem (the subjects I'm trying to scan will have their heads/faces turned most of the time)... Hopefully, once I understand the VoxelsPerMeter and VoxelResolution values, I'll have something more meaningful to write. But I've already had some partial success with the TrackingMode and JointType changes made right after I first posted, so hopefully I'll soon be able to do the stuff you may have already done... :)
Thanks