So, today I made some progress. I had some trouble getting started with ofTheo’s ofxKinect addon at first: I could not find documentation for ofxKinect, and I got a little confused by all the different addons and techniques that people use. So I kind of reverse engineered the example that comes with ofxKinect. By stripping out almost every line of code and adding them back one by one, I figured out how it works.
It turns out that ofxKinect acts like a normal video grabber. You have to initialise, open and close it, and you can check if a frame is new and get the pixels of a frame, just like with a normal video grabber. There are just a few new methods: you can check the distance at a particular point, for instance, and instead of only the colored pixels you can also get the depth pixels.
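A minimal sketch of that lifecycle, pieced together from the example I stripped down (so the method names and return types are what I found there and may differ between versions):

```cpp
#include "ofxKinect.h"

class testApp : public ofBaseApp {
public:
    ofxKinect kinect;

    void setup(){
        kinect.init();   // just like a video grabber:
        kinect.open();   // initialise, then open the device
    }

    void update(){
        kinect.update();
        if(kinect.isFrameNew()){
            // instead of only the colored pixels you can grab the depth pixels,
            // and ask for the distance at a particular point
            unsigned char * depthPixels = kinect.getDepthPixels();
            float distanceInMm = kinect.getDistanceAt(320, 240);
        }
    }

    void exit(){
        kinect.close();
    }
};
```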
The handiest features I have found so far are the far and near threshold members. They work almost like the focal point of a normal camera, except with hard edges: whatever is outside those boundaries simply does not get registered. The resulting image, a black background for whatever is not ‘in focus’ and a greyscale image for whatever is, can easily be fed into some sort of blob detection, like OpenCV. Of course, that is exactly what happens in the example: the image that comes out of the Kinect is put into an OpenCV image so it can be analyzed. From there on, ofxOpenCV takes over.
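From memory, the relevant part of the example’s update() looks roughly like this (grayImage, grayThreshNear, grayThreshFar and the threshold values are members of the example app, so take the exact names with a grain of salt):

```cpp
kinect.update();
if(kinect.isFrameNew()){
    grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);

    // nearer pixels are brighter, so two thresholds carve out a 'slice' of depth
    grayThreshNear = grayImage;
    grayThreshFar  = grayImage;
    grayThreshNear.threshold(nearThreshold, true); // inverted: drop what is too near
    grayThreshFar.threshold(farThreshold);         // drop what is too far

    // whatever is outside the boundaries ends up black
    cvAnd(grayThreshNear.getCvImage(), grayThreshFar.getCvImage(),
          grayImage.getCvImage(), NULL);
    grayImage.flagImageChanged();

    // from here on, ofxOpenCV takes over
    contourFinder.findContours(grayImage, 20, (kinect.width * kinect.height) / 2, 20, false);
}
```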
Once I understood that, it all made sense and I was able to plug in daniquilez’s finger detection. It just analyses an OpenCV image, and mine happened to come out of a Kinect.
All in all, this was a first try at making sense of it all by hacking together some examples I found online. There are still some bumps in the road, though: the greyscale image that comes out of the Kinect is kind of jagged, and I still need to find a way to smooth it out so that the finger detection can do a better job.
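One idea I still have to try is the cleanup that ofxCvImage offers out of the box; a rough sketch, with parameters I would still need to tune:

```cpp
// clean up the jagged depth image before handing it to the finger detection:
grayImage.erode();   // eat away single-pixel speckles along the edges
grayImage.dilate();  // grow the remaining blobs back to roughly their size
grayImage.blur(5);   // soften the jagged contours with a box blur
grayImage.flagImageChanged();
```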
I also found a blog post by Patricio González Vivo. He made a hack of NUI Group’s famous CCV app: his version takes the Kinect as its source and uses the same nice interface CCV has. I have not yet dived into the source code of Patricio’s app, which he so kindly shares, to see if the blobs that come out of it have a z (or depth) member. But when I do, I will try to fit in daniquilez’s finger detection as well.
I do not have a demo yet because this really is a first try, and things are buggy at best. But I hope to post one soon.