Continuing this series of posts, there’s a tool that I’ve been using for Kinect development which I hadn’t really mentioned on the blog, so I thought I’d bring it up here: “Kinect Studio”.
There’s coverage of Kinect Studio in this Channel 9 video;
but I hadn’t really appreciated how much Kinect Studio was giving me until I started doing some actual development against the sensor, and until I’d carried a Kinect sensor around in my bag for a while and started to ask whether that was really necessary.
In terms of working with a sensor like this, there are going to be times when;
- You don’t have a sensor available. Examples;
- You might be working on a 10-person team where not everyone has a sensor.
- You might be running code in some kind of test environment where it’s not practical to be plugging in a sensor.
- You might be travelling and getting out a Kinect sensor and plugging it in on the train/plane is going to (at best) get you some very funny looks from other passengers.
- You have a sensor with you but you can’t use it. This might apply to any of the situations above but, specifically, if you are doing solo development then debugging certain scenarios can be really quite tricky. Example;
- You are doing anything which involves standing in front of the sensor. Standing there while simultaneously driving the debugger on your laptop is awkward at best.
For all of these reasons and more, the folks on the Kinect SDK team have delivered both an architecture and a tool which allow for these scenarios, and without which I think working with the sensor would be quite hard.
Kinect Studio is installed with the SDK. You can find it by searching Windows for Kinect;
and it runs up as a Windows desktop application;
The underlying architecture here is really important. There’s a service running on the machine which can receive data from a physical sensor and pass it on to client applications, and that service also has a public set of APIs which allow other code to record the data it’s sending out and to play recorded data back into it.
Kinect Studio is built on those APIs, which lets it act as a “pretend” Kinect sensor for those scenarios where using a real sensor isn’t practical, including those where a real sensor isn’t available at all.
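To give a flavour of that, here’s a minimal sketch of driving playback from code via the Microsoft.Kinect.Tools assembly that ships with the SDK – I’m going from memory on the exact shape of the API here, and the file path is just a placeholder for a clip recorded with Kinect Studio;

```csharp
using System.Threading;
using Microsoft.Kinect.Tools; // ships with the SDK (I believe this needs an x64 build)

class PlaybackSketch
{
    static void Main()
    {
        // Connect to the Kinect service on this machine and play a previously
        // recorded .xef file back into it.
        using (KStudioClient client = KStudio.CreateClient())
        {
            client.ConnectToService();

            // Placeholder path - a clip captured with Kinect Studio.
            using (KStudioPlayback playback = client.CreatePlayback(@"c:\temp\myRecording.xef"))
            {
                playback.Start();

                // Hang around until the clip finishes playing.
                while (playback.State == KStudioPlaybackState.Playing)
                {
                    Thread.Sleep(100);
                }
            }
            client.DisconnectFromService();
        }
    }
}
```

While that playback is running, any application sitting on the regular Kinect APIs sees the recording as though a real sensor were producing it.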
There’s a good demo of Kinect Studio in the video that I referenced above, so I thought I’d do something more specific to my own purposes here and show how I’ve used it in producing a couple of the blog posts on this site in recent weeks, as that’s where it’s had an impact on me.
Example 1 – Body Data
In this post I was attempting to write some WPF code which drew skeletal data in 3D, and debugging that can be really tricky if you’re working alone and your body is at the keyboard rather than halfway across the room standing in front of the Kinect sensor.
Here’s how Kinect Studio really helps in that case;
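To make that concrete, here’s a simplified sketch of the sort of body-reading code involved. The point is that nothing in it knows anything about Kinect Studio – KinectSensor.GetDefault() hands frames over identically whether they come from real hardware or from a recording being played back;

```csharp
using System;
using Microsoft.Kinect;

class BodyReadingSketch
{
    private KinectSensor sensor;
    private BodyFrameReader reader;
    private Body[] bodies;

    public void Start()
    {
        // Whether the frames originate from hardware or from a Kinect Studio
        // playback, this code is identical.
        this.sensor = KinectSensor.GetDefault();
        this.sensor.Open();

        this.bodies = new Body[this.sensor.BodyFrameSource.BodyCount];
        this.reader = this.sensor.BodyFrameSource.OpenReader();
        this.reader.FrameArrived += this.OnFrameArrived;
    }

    void OnFrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame != null)
            {
                frame.GetAndRefreshBodyData(this.bodies);

                foreach (Body body in this.bodies)
                {
                    if (body.IsTracked)
                    {
                        // e.g. pick up the head joint's camera space position
                        // to feed into the 3D drawing code.
                        CameraSpacePoint head = body.Joints[JointType.Head].Position;
                        Console.WriteLine("head at {0},{1},{2}", head.X, head.Y, head.Z);
                    }
                }
            }
        }
    }
}
```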
Example 2 – Audio Data
In this post I was trying to do something a little more advanced in that I wanted to;
- Detect the sensor.
- Detect a single user in front of the sensor.
- Detect that the user has their hands near their face (there’s a sketch of the sort of check I mean just after this list).
- Attempt to detect the pitch of the note coming from a harmonica that they are playing.
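As far as I know the SDK doesn’t hand you a “hands near face” notion directly, so for that third item I mean a home-grown heuristic along these lines – the joint names are real SDK ones but the 0.3m threshold is an arbitrary number of mine;

```csharp
using System;
using Microsoft.Kinect;

static class HandsNearFace
{
    // Arbitrary threshold of mine - camera space is measured in metres.
    const float Threshold = 0.3f;

    public static bool Check(Body body)
    {
        CameraSpacePoint head = body.Joints[JointType.Head].Position;
        CameraSpacePoint left = body.Joints[JointType.HandLeft].Position;
        CameraSpacePoint right = body.Joints[JointType.HandRight].Position;

        return (Distance(head, left) < Threshold) &&
               (Distance(head, right) < Threshold);
    }

    static float Distance(CameraSpacePoint a, CameraSpacePoint b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return (float)Math.Sqrt((dx * dx) + (dy * dy) + (dz * dz));
    }
}
```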
I’m still debugging the pitch detection right now – I don’t have it at the “working” stage in that there are a couple of notes that aren’t being detected correctly by the algorithm, and Kinect Studio is really helpful in trying to debug that;
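To sketch what I mean by that last piece – this isn’t necessarily the algorithm my real code uses, just a naive autocorrelation approach for illustration – reading audio from the sensor and estimating a pitch goes something like;

```csharp
using System;
using Microsoft.Kinect;

class PitchSketch
{
    private AudioBeamFrameReader reader;

    public void Start(KinectSensor sensor)
    {
        this.reader = sensor.AudioSource.OpenReader();
        this.reader.FrameArrived += this.OnFrameArrived;
    }

    void OnFrameArrived(object sender, AudioBeamFrameArrivedEventArgs e)
    {
        using (AudioBeamFrameList frames = e.FrameReference.AcquireBeamFrames())
        {
            if (frames == null)
            {
                return;
            }
            foreach (AudioBeamSubFrame subFrame in frames[0].SubFrames)
            {
                // The sensor delivers 32-bit IEEE float samples at 16kHz.
                byte[] bytes = new byte[subFrame.FrameLengthInBytes];
                subFrame.CopyFrameDataToArray(bytes);

                float[] samples = new float[bytes.Length / sizeof(float)];
                Buffer.BlockCopy(bytes, 0, samples, 0, bytes.Length);

                // Real code would accumulate a longer window first - a single
                // 16ms sub-frame (256 samples) is short for low notes.
                double hz = EstimatePitch(samples, 16000);
            }
        }
    }

    // Pick the lag whose autocorrelation is strongest and convert it to Hz,
    // searching roughly 100Hz-2000Hz which covers a diatonic harmonica.
    static double EstimatePitch(float[] samples, int sampleRate)
    {
        int minLag = sampleRate / 2000;
        int maxLag = sampleRate / 100;
        int bestLag = 0;
        double bestScore = 0;

        for (int lag = minLag; lag <= maxLag; lag++)
        {
            double score = 0;
            for (int i = 0; (i + lag) < samples.Length; i++)
            {
                score += samples[i] * samples[i + lag];
            }
            if (score > bestScore)
            {
                bestScore = score;
                bestLag = lag;
            }
        }
        return (bestLag == 0) ? 0 : (double)sampleRate / bestLag;
    }
}
```

and being able to replay exactly the same harmonica clip over and over from Kinect Studio while stepping through this kind of code is exactly where the tool earns its keep.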
Wrapping Up
Having not come from a “Kinect for Windows V1” world, I’m not sure how much of what Kinect Studio offers would be “expected” by a developer who’s already been working there, although I know that there have been a bunch of improvements to the way that it works. As a newcomer with the V2 sensor and SDK, I think it’s the sort of tool that you’d have to think about building if it didn’t already exist – I keep hitting points where I realise that I need to stop getting up in front of the sensor and start recording data via Kinect Studio if I’m going to make any progress.
It’s also the mechanism via which you can record data for custom gestures, but that’s another story…