Update: a video for kinradar has been added below.
It seems all the cool kids are coming up with and sharing their Kinect hacks these days. I decided I could join the fray by incorporating the Kinect into my home automation system, but first I'd need to understand the best way to process the depth data.
I've worked with 3D graphics in the past, and even wrote my own 3D modeling program back in high school (a long time ago). Lately, though, I've been focusing on the command line, so I decided to start there. Since I'm also targeting embedded systems with limited graphics power, it makes sense not to depend on X or OpenGL. So, to learn libfreenect and find the best way to process the depth data for my purposes, I wrote a few simple ASCII art demos, including a cool radar-like display.
Continuing the trend set by my release of cliserver, an example libevent-based socket server, I decided to give these learning tools away for free. All of these programs are written in C, based on libfreenect, and released under the GPLv2 (or later).
kinstats was the first Kinect test program I wrote. It displays some basic statistics from the depth stream and a depth histogram. A quick-and-dirty bash script is also included that controls the brightness of my lights based on the average distance in the scene. The script demonstrates how to use kinstats to pipe simple depth data through common UNIX utilities.
You can download kinstats from the kinstats github page.
kingrid followed logically from kinstats. I wanted to experiment with dividing a room into sectors, so the automation system could take different actions depending on which sectors were occupied. I started by dividing the depth image into a grid and displaying basic stats for each grid section. From there it was relatively easy to add an ASCII-art depth display.
The kingrid source code is on the kingrid github page.
Here's where things start to get fun. Since dividing the room into a 2D grid based on image dimensions obviously won't tell me where people are standing (or sitting) in the room, I needed a better way of visualizing depth. So, I decided to plot the Kinect's visible frustum from an overhead perspective, creating a radar-like display of the "echoes" in the room. These still images don't do it justice, so I've added a video as well.
As with the others, you can download kinradar from the kinradar github page.
I'm still making improvements to these hacks and will probably develop more, so watch my github page for updates. One of my next ideas is adding background detection and removal sufficient to track movement and occupation within the room. I'm open to suggestions, criticism, and feature requests, as well. Let me know if you find these demos useful or entertaining.
Once I've more fully developed my understanding of computer vision, I'll make better use of the Kinect in my home automation system. You can look forward to another post about that when it's done.