Intel RealSense Camera (F200): ‘Hello World’ Part 1

I've been itching to experiment with Intel's RealSense technology. Here's the glossy video that sold it to me – I'm a sucker for a glossy video 🙂


There are some more general details on Intel's website;

Intel RealSense

or, for more of a developer view of the technology, you can go to;

Developer info for RealSense

but, frankly, I've found RealSense as described there to be a little bit of a 'muddle' to get my head around, as it relates to a number of different things. In the end, I turned to Wikipedia in an attempt to figure this out a little more;

That page tells you that RealSense is a technology where you have a regular camera alongside an IR laser projector, an IR camera and a microphone array. Intel has announced 3 camera models around that technology, but not all of them are available as far as I know;

  • The front facing camera (Front F200) which is intended to be built into laptops/desktops and applied to natural interaction areas such as gesture recognition and facial recognition. Camera details are here.
  • The Snapshot camera which is intended to be built into tablets/phones and which is applied to 'computational photography' – refocusing, filtering, taking measurements from a photograph after it's been captured. Camera details are here.
  • The rear facing camera (Rear R200) for augmented reality and object scanning. I don't think there are any camera details on that one to date.

Today, these cameras are 'far from prevalent'. To get hold of one, I recently bought the developer kit camera, which sells for around $99 plus the rest if you're looking to ship it to somewhere like the UK.

I also recently happened to buy a Dell 2350 All-In-One PC which came with another of these cameras built into the screen, so I now probably have 2 more RealSense F200 cameras than around 99% of people, but I'd expect that availability to change in the coming months.

Once I'd got hold of a camera, I tried out a few of the demos and they worked really well for me – there are similar demos recorded on YouTube here;

Naturally, the next thing I wanted was an SDK so that I might try some of this out from a coding perspective. Intel doesn't make it 'super easy' to get hold of the SDK because they want a registration in order to download it. That's no problem in and of itself, but I got bogged down trying to resurrect an old account on their website. Finally, I got the SDK from here;

and that specifically targets Windows 8.1 64-bit desktop applications, so beware if you're running on another operating system or if you want to build Windows apps – I don't think this SDK will help you in that regard right now.

As an aside, it looks like Windows apps are on the roadmap here and it'd be great to see because I'd like to match up using the SDK with drawing via the Win2D libraries.

The other thing to be aware of is that the SDK has specific processor requirements. You need a;

4th generation (or later) Intel® Core™ processor

I came a little unstuck on this with my 'work' laptop because I'm subject to a corporate, enterprise policy where my laptop and phone are on a disappointingly slow upgrade cycle, and it turned out that the Core i7 in my Dell XPS 12 was too old for the SDK, so be aware of that.
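For what it's worth, here's a rough way to sanity-check a machine before installing. This is purely my own heuristic based on the model numbering of the time (a 4th generation part looks like 'i7-4500U', my XPS 12's 3rd generation part like 'i7-3537U') – it's not anything the SDK itself provides;

```csharp
// Purely my own heuristic, not part of the SDK: 4th generation Core
// parts of this era carry model numbers like i7-4500U, so check whether
// the first digit after 'i3/i5/i7-' is 4 or higher.
using System;
using System.Text.RegularExpressions;

public static class CpuCheck
{
    public static bool LooksFourthGenOrLater(string cpuName)
    {
        // e.g. "Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz" -> model digit '4'
        var match = Regex.Match(cpuName, @"i[357]-(\d)");
        return match.Success && int.Parse(match.Groups[1].Value) >= 4;
    }

    public static void Main()
    {
        // On the machine itself you'd read the name from WMI or the registry;
        // here I just exercise the heuristic against a known string.
        Console.WriteLine(LooksFourthGenOrLater("Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz")); // True
        Console.WriteLine(LooksFourthGenOrLater("Intel(R) Core(TM) i7-3537U CPU @ 2.00GHz")); // False
    }
}
```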

With the SDK set up, I was ready to try and write a 'hello world' application. Insofar as I could figure out, the SDK targets a number of environments;

  • C++
  • C#
  • Unity
  • Java
  • JavaScript – specifically, I think this is about having a browser open up a websocket to some kind of HTTP server running on the local machine that’s then talking natively to the SDK.

Of those, my natural inclination is to try and write some C# code and so that's what I set about trying to do in the first instance.

So, time to do a quick File->New->Project and I made myself a blank WPF application;


and added in a reference to the RealSense SDK. There's a .NET assembly here which sits on top of a native library so, immediately, you're going to have to make that x86/x64 decision around which of these you reference.

I'm not 100% sure on this, but I think this could be packaged as a NuGet package to get around the developer having to choose the processor architecture here, as per this post.
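Another way I've seen people dodge the x86/x64 choice is to ship both copies of the native DLL in sub-folders and point the Windows loader at the right one at runtime, before the first call into the library. This is only my sketch of that pattern (the class and folder names are mine, not the SDK's);

```csharp
// Hypothetical sketch, not from the SDK: keep both copies of
// libpxccpp2c.dll in x86/x64 sub-folders next to the exe and tell
// Windows where to look before the first P/Invoke, so one AnyCPU
// build can run either way.
using System;
using System.IO;
using System.Runtime.InteropServices;

public static class NativeLoader
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool SetDllDirectory(string path);

    // Pure helper so the folder choice is easy to reason about in isolation.
    public static string NativeSubFolder(bool is64BitProcess)
    {
        return is64BitProcess ? "x64" : "x86";
    }

    public static void Setup()
    {
        var folder = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory,
            NativeSubFolder(Environment.Is64BitProcess));

        // Adds the folder to the native DLL search path for this process.
        SetDllDirectory(folder);
    }
}
```

You'd call NativeLoader.Setup() once at application startup, before touching any PXCM* type.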

Regardless, I went with the x64 version and you can see the path that I'm picking it up from here;


and I changed my build configuration so as to add an x64 configuration and switch to it;


and then I can go and attempt to write my 'hello world' code and get hold of a RealSense device. That all begins with an object called PXCMSenseManager, and I find the naming here to be pretty unhelpful;

every object in the library seems to begin with PXCM – it's a nightmare to keep typing out, especially when PXCM means nothing to me. All those objects also seem to be in a global namespace 😕

Regardless, I can go and write a bit of code inside of my WPF window startup code and attempt to create myself a session manager;

```csharp
namespace HelloRealSense
{
  using System.Windows;

  public partial class MainWindow : Window
  {
    public MainWindow()
    {
      InitializeComponent();

      this.Loaded += OnWindowLoaded;
      App.Current.Exit += OnAppExit;
    }
    void OnAppExit(object sender, ExitEventArgs e)
    {
      // nothing to do here just yet
    }
    void OnWindowLoaded(object sender, RoutedEventArgs e)
    {
      this.sessionManager = PXCMSenseManager.CreateInstance();
    }
    PXCMSenseManager sessionManager;
  }
}
```

Now, that's all well and good, but I hit a problem at runtime the first time I pressed F5;


because the managed code has that dependency on the underlying native library, and the native library isn't being copied out to the application's folder before I try to run it. The library is called libpxccpp2c (wow, what is it with the naming of these libraries? 🙂).

I'm not 100% sure of the best way for a .NET application like this one to take a dependency on a native DLL but, for the moment, I simply added the 64-bit version of the library to my project as though it were a piece of content and had it copied to the output folder on build;
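In project-file terms, that boils down to something like the fragment below – this is my reading of what the 'Copy to Output Directory' setting produces, so check your own .csproj rather than take it verbatim;

```xml
<!-- Hypothetical .csproj fragment: treat the native DLL as content
     and copy it next to the exe on every build -->
<ItemGroup>
  <Content Include="libpxccpp2c.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```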




and that seemed to get me to the point where I can run up my executable without it falling over.

From thereon in, it's time to dig a little deeper into the SDK and see if I can get some data from it onto the screen.
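As a taster of where that looks to be heading, here's a sketch of the frame-pumping pattern I've seen in the SDK samples – treat the exact calls as my best reading of those samples rather than gospel, and note that it assumes the sessionManager from the code above plus a physical camera;

```csharp
// Sketch only: enable a colour stream, initialise the pipeline and then
// pump frames. Stream/format names are from my reading of the SDK samples.
void RunCameraLoop()
{
  // Ask for 640x480 colour at 30fps.
  this.sessionManager.EnableStream(
    PXCMCapture.StreamType.STREAM_TYPE_COLOR, 640, 480, 30);

  if (this.sessionManager.Init() == pxcmStatus.PXCM_STATUS_NO_ERROR)
  {
    while (this.sessionManager.AcquireFrame(true) == pxcmStatus.PXCM_STATUS_NO_ERROR)
    {
      PXCMCapture.Sample sample = this.sessionManager.QuerySample();

      // sample.color is a PXCMImage - the next step would be to copy its
      // bits into a WriteableBitmap for display in the WPF UI.

      this.sessionManager.ReleaseFrame();
    }
  }
}
```

Written like that it would block the UI thread, so in a real WPF app it'd want to live on a worker thread (or use whatever callback mechanism the SDK offers) – something to dig into next time.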

I've had a little explore around and from what I can see there are quite a lot of synergies between the approach that this SDK takes and the Kinect for Windows V2 SDK that I've been experimenting with in recent months as per these posts. Hopefully, that means that I can re-use a little of what I learned there but I may be some time…