Kinect for Windows V2 SDK: Jumping In…

I’m not a big video gamer.

Of course, I’ve played one or two (hundred? thousand?) video games all the way back to the 1970s and I had the original PlayStation and then the Xbox and the Xbox 360 but, for me, video gaming is something I might do on a rainy day. I haven’t played a console game in maybe 9 months and I’ll quietly admit that I haven’t bought an Xbox One and I don’t plan to – gaming’s not a big thing for me.

Maybe because of this, I’ve got limited experience with the Kinect V2 sensor beyond the experience that everyone’s had – i.e. I’ve played with it, read the specs and been impressed by the demos and games but, primarily, I’ve used it in an Xbox One setting to play a few games.

More directly relevant to me are some of the applications of the sensor beyond gaming.

Applications where the sensor becomes the “all seeing eye (and ears)” of a Windows PC, combining inputs in a way that enables a new level of natural user interface for an application.

Sometimes that might be a general application (e.g. maybe using gestures to drive the “next slide” in PowerPoint) and I guess often it’s going to be a more specialised, vertical and/or embedded solution (e.g. something like the work that’s been done for surgical teams).

Either way – it’s a very interesting bit of kit for a Windows developer and so I was very pleased when this landed on my desk last week;

[Photo: the Kinect for Windows V2 sensor sitting on a GorillaPod]

I’m not sure it’s happy (or going to work very well) sitting on that GorillaPod so that’s perhaps something I’ll have to take a look at but, instantly, it prompted me to think about what I could do with it from a development perspective. I plugged it into my laptop and set off to download the SDK from the site below;

[Screenshot: the Kinect for Windows V2 SDK download site]

The SDK is in public preview and is a relatively easy setup – for whatever reason I was expecting a huge download, which didn’t happen. It’s worth checking the system requirements for both the V2 sensor and the SDK in detail before you buy anything, as there are requirements around the OS (Windows 8/8.1, including Embedded) and then, primarily, a decent 64-bit processor, some decent RAM and USB 3.0.

The SDK doesn’t cost anything and there are no additional runtime licensing costs for apps built on top of it, so that’s good to know.

With the SDK installed I was curious as to what kinds of applications I could build against the sensor;

  1. Native Apps in C++.
  2. Managed Apps in C#.
  3. Windows Store Apps in WinRT.

I think I’m right in saying that (3) is a new and very welcome part of the SDK, although a developer building Windows Store apps with Kinect will need to wait for the full release of the SDK before they can put those apps into the Store. I don’t know the details of this, but it will be interesting to see how an app which has a dependency on the Kinect sensor will be described in the Store so that the user understands what they’re looking at, as I think (beyond ARM/x86) it’s the first time that the Windows Store has had a notion of “required hardware”.
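
For the managed (C#) route, the API seems to hang off a KinectSensor class in the Microsoft.Kinect namespace and so, as a rough first sketch (written against the preview bits, so treat the names and details here as my assumptions rather than documentation), getting hold of the default sensor and watching whether it’s available looks something like this;

```csharp
using System;
using Microsoft.Kinect; // assumption: the managed assembly installed with the SDK preview

class SensorCheck
{
    static void Main()
    {
        // the API deals in a single, default sensor
        KinectSensor sensor = KinectSensor.GetDefault();

        // availability is reported asynchronously once the sensor is opened
        sensor.IsAvailableChanged += (s, e) =>
        {
            Console.WriteLine("Sensor available: " + e.IsAvailable);
        };

        sensor.Open();

        Console.WriteLine("Press a key to close the sensor...");
        Console.ReadKey();

        sensor.Close();
    }
}
```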

Having got the SDK installed and having made sure that my sensor is working by trying a few of the SDK samples, I’m at the point where I want to dig in and see what I can do.
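
Beyond the samples, as a quick sanity check of my own (again, a sketch against the preview bits, so the reader/frame types here are assumptions on my part), wiring up a body frame reader and watching frames arrive seems to be enough to prove that data is flowing;

```csharp
using System;
using Microsoft.Kinect; // assumption: same managed assembly as in the sketch above

class FrameCheck
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // each frame source (colour, depth, infrared, body, ...) hands out a reader
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
        int frameCount = 0;

        reader.FrameArrived += (s, e) =>
        {
            // frames can come back null if we're too slow to acquire them
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame != null)
                {
                    frame.GetAndRefreshBodyData(bodies);
                    frameCount++;
                }
            }
        };

        Console.WriteLine("Watching for body frames, press a key to stop...");
        Console.ReadKey();
        Console.WriteLine("Saw " + frameCount + " body frames");

        reader.Dispose();
        sensor.Close();
    }
}
```

The timing is great, too, because just yesterday these Microsoft Virtual Academy videos went onto the web;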

[Screenshot: Microsoft Virtual Academy Kinect videos]

and I’m heading over there to see if I can use those to kick-start the process of figuring out what I’ve got.

I may be some time….