Intel RealSense Camera (F200): ‘Hello World’ Part 4

Following up on my previous post, I wanted to put some of what I’d figured out behind something that I could at least point and click my way through, so I made a ‘UI’ in WPF for selecting a stream and viewing it;

[screenshots of the app: stream selection and the displayed video]

In doing so, I stuck to the approach from the previous post of enumerating modules, devices and streaming profiles, but I wanted to tidy things up at least a little by having a data-bound UI, so this is just 3 ListBoxes and an Image with a view model called, imaginatively, ViewModel;

<Window x:Class="HelloRealSense.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:HelloRealSense"
        Title="MainWindow"
        Height="768"
        Width="1024">
  <Window.DataContext>
    <local:ViewModel />
  </Window.DataContext>
  <Window.Resources>
    <DataTemplate x:Key="nameTemplate">
      <TextBlock Text="{Binding Name}" />
    </DataTemplate>
  </Window.Resources>
  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="2*" />
      <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <Grid>
      <Image Source="{Binding ImageSource}" VerticalAlignment="Center" HorizontalAlignment="Center"
             Stretch="None"/>
    </Grid>
    <Grid Grid.Row="1">
      <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto" />
        <ColumnDefinition Width="Auto" />
        <ColumnDefinition />
      </Grid.ColumnDefinitions>
      <ListBox ItemsSource="{Binding CaptureModuleDescriptions}"
               SelectedItem="{Binding SelectedCaptureModuleDescription,Mode=TwoWay}"
               SelectionMode="Single"
               ItemTemplate="{StaticResource nameTemplate}">
      </ListBox>
      <ListBox ItemsSource="{Binding DeviceDescriptions}"
               SelectedItem="{Binding SelectedDeviceDescription,Mode=TwoWay}"
               SelectionMode="Single"
               Grid.Column="1"
               ItemTemplate="{StaticResource nameTemplate}">
      </ListBox>
      <ListBox ItemsSource="{Binding DeviceStreamingProfiles}"
               SelectedItem="{Binding SelectedStreamProfile,Mode=TwoWay}"
               SelectionMode="Single"
               Grid.Column="2"
               ItemTemplate="{StaticResource nameTemplate}">
      </ListBox>
    </Grid>

  </Grid>
</Window>

I wrote a few more ‘helpers’ in putting this together. I wanted something that dealt a little with status codes for me, although I stayed away from turning them into exceptions; I may end up there in the future;

namespace HelloRealSense
{
  static class StatusHelper
  {
    public static bool Succeeded(this pxcmStatus status)
    {
      return (status == pxcmStatus.PXCM_STATUS_NO_ERROR);
    }
    public static bool Failed(this pxcmStatus status)
    {
      return (!status.Succeeded());
    }
  }
}
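
If I do end up turning statuses into exceptions later, a minimal sketch might look like the following (ThrowIfFailed is a hypothetical helper of mine, not something from the SDK);

```csharp
namespace HelloRealSense
{
  using System;

  static class StatusExceptionHelper
  {
    // Hypothetical: surface a failed pxcmStatus as an exception at the
    // call site, e.g. device.ReadStreams(...).ThrowIfFailed("ReadStreams")
    public static void ThrowIfFailed(this pxcmStatus status, string operation)
    {
      if (status != pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        throw new InvalidOperationException(
          string.Format("{0} failed with status {1}", operation, status));
      }
    }
  }
}
```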

and I wanted a class that abstracted out some of the SDK’s mechanism for querying/enumerating. I’m still not sure I’m 100% happy with this (it’s still too ‘wordy’) but I find it better than attempting to use the SDK in the raw;

namespace HelloRealSense
{
  using System.Collections.Generic;

  delegate pxcmStatus RSQueryWithDescriptionAndReturnTypeIteratorFunction<D,T>(
    D descriptionType, int index, out T returnType);

  delegate pxcmStatus RSQueryWithDescriptionIteratorFunction<T>(
    T descriptionType, int index, out T returnType);

  delegate pxcmStatus RSQueryIteratorFunction<T>(
    int index, out T returnType);

  static class RSEnumerationHelper
  {
    public static IEnumerable<T> QueryValuesWithDescription<D,T>(D description,
      RSQueryWithDescriptionAndReturnTypeIteratorFunction<D,T> queryIterator)
    {
      int i = 0;
      T current;

      while (queryIterator(description, i++, out current) == pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        yield return current;
      }
    }
    public static IEnumerable<T> QueryValuesWithDescription<T>(T description,
      RSQueryWithDescriptionIteratorFunction<T> queryIterator)
    {
      int i = 0;
      T current;

      while (queryIterator(description, i++, out current) == pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        yield return current;
      }
    }
    public static IEnumerable<T> QueryValues<T>(RSQueryIteratorFunction<T> queryIterator)
    {
      int i = 0;
      T current;

      while (queryIterator(i++, out current) == pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        yield return current;
      }
    }
  }
}
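
To illustrate what the helper buys me without needing a camera, here’s a contrived, self-contained example: a stand-in ‘query’ function in the SDK’s index/out-parameter style, wrapped so that LINQ composes over it (I’m assuming the SDK’s PXCM_STATUS_ITEM_UNAVAILABLE status as the end-of-enumeration marker here);

```csharp
namespace HelloRealSense
{
  using System;
  using System.Linq;

  static class EnumerationDemo
  {
    // A stand-in 'query' in the SDK's style: succeeds for indices 0..2,
    // then reports an error to signal the end of the enumeration.
    static pxcmStatus QueryNumber(int index, out int value)
    {
      value = index * 10;
      return (index < 3 ?
        pxcmStatus.PXCM_STATUS_NO_ERROR :
        pxcmStatus.PXCM_STATUS_ITEM_UNAVAILABLE);
    }
    public static void Run()
    {
      // The helper turns the index/out-parameter style into an IEnumerable,
      // so LINQ composes over it.
      var values = RSEnumerationHelper.QueryValues<int>(QueryNumber).ToList();

      Console.WriteLine(string.Join(",", values)); // prints 0,10,20
    }
  }
}
```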

and I wrote an extension to the PXCMSample class so as to extract from it the image that belongs to a particular type of stream (PXCMCapture.StreamType);

namespace HelloRealSense
{
  static class PXCMSampleExtensions
  {
    public static PXCMImage GetImageForStreamType(this PXCMCapture.Sample sample, PXCMCapture.StreamType streamType)
    {
      PXCMImage image = null;

      switch (streamType)
      {
        case PXCMCapture.StreamType.STREAM_TYPE_ANY:
          break;
        case PXCMCapture.StreamType.STREAM_TYPE_COLOR:
          image = sample.color;
          break;
        case PXCMCapture.StreamType.STREAM_TYPE_DEPTH:
          image = sample.depth;
          break;
        case PXCMCapture.StreamType.STREAM_TYPE_IR:
          image = sample.ir;
          break;
        case PXCMCapture.StreamType.STREAM_TYPE_LEFT:
          image = sample.left;
          break;
        case PXCMCapture.StreamType.STREAM_TYPE_RIGHT:
          image = sample.right;
          break;
        default:
          break;
      }
      return (image);
    }
  }
}

and I ended up writing some very basic ‘view model’ classes for three of the types in the SDK (PXCMSession.ImplDesc, PXCMCapture.DeviceInfo, PXCMCapture.Device.StreamProfileSet). Wanting to be cheap, I’d hoped that I could avoid ‘view models’ for these types and just bind directly to them from my XAML, but the SDK tends to have types that offer public fields, which don’t work for binding, so I wrote these tiny wrappers around those types just to make them usable for binding;

namespace HelloRealSense
{

  class CaptureModuleViewModel 
  {
    public CaptureModuleViewModel(PXCMSession.ImplDesc implDesc)
    {
      this.implDesc = implDesc;
    }
    public string Name
    {
      get
      {
        return (this.implDesc.friendlyName);
      }
    }
    public PXCMSession.ImplDesc ImplDesc
    {
      get
      {
        return (this.implDesc);
      }
    }
    PXCMSession.ImplDesc implDesc;
  }
}

and

namespace HelloRealSense
{

  class DeviceDescriptionViewModel
  {
    public DeviceDescriptionViewModel(PXCMCapture.DeviceInfo deviceInfo)
    {
      this.deviceInfo = deviceInfo;
    }
    public string Name
    {
      get
      {
        return (this.deviceInfo.name);
      }
    }
    public PXCMCapture.DeviceInfo DeviceInfo
    {
      get
      {
        return (this.deviceInfo);
      }
    }
    PXCMCapture.DeviceInfo deviceInfo;
  }
}

and

namespace HelloRealSense
{
  class StreamProfileViewModel
  {
    public StreamProfileViewModel(PXCMCapture.Device.StreamProfileSet streamProfileSet,
      PXCMCapture.StreamType streamType)
    {
      this.streamProfileSet = streamProfileSet;
      this.streamType = streamType;
    }
    public string Name
    {
      get
      {
        return (string.Format(
          "{0} ({1} at {2} Hz [{3}x{4}])",
          this.streamType,
          this.streamProfileSet[this.streamType].imageInfo.format,
          this.streamProfileSet[this.streamType].frameRate.max,
          this.streamProfileSet[this.streamType].imageInfo.width,
          this.streamProfileSet[this.streamType].imageInfo.height));
      }
    }
    public PXCMCapture.StreamType StreamType
    {
      get
      {
        return (this.streamType);
      }
    }
    public PXCMCapture.Device.StreamProfileSet StreamProfileSet
    {
      get
      {
        return (this.streamProfileSet);
      }
    }
    PXCMCapture.StreamType streamType;
    PXCMCapture.Device.StreamProfileSet streamProfileSet;
  }
}

and, finally, I wanted a little helper class that took on the job of calling the ReadStreams method of the PXCMCapture.Device class to read the frames of streaming data in a way that was at least ‘compatible’ with using the async/await approach of modern .NET.

Interestingly, that PXCMCapture.Device class has a ReadStreamsAsync method on it, but I found that calling it seemed to result in an ‘unsupported’ error, so I wrote this little class instead. It offers a GetNextSampleAsync() method that’s intended to be called in a loop from the UI thread using the await pattern to restore the synchronisation context, in order to then take action and display the data in an image on-screen;

namespace HelloRealSense
{
  using System.Threading.Tasks;

  class ImageReader
  {
    public ImageReader(
      PXCMCapture.Device device,
      PXCMCapture.StreamType streamType)
    {
      this.device = device;
      this.streamType = streamType;
    }
    public Task<PXCMCapture.Sample> GetNextSampleAsync()
    {
      // NB: taking the simple but possibly wasteful approach of acquiring each
      // frame on a separate task. The SDK offers a ReadStreamsAsync method but
      // all I get back from that is an 'unsupported' error code. I could come
      // up with a better 'architecture' here but, so far, this seems to work
      // ok.
      Task<PXCMCapture.Sample> task = Task.Factory.StartNew<PXCMCapture.Sample>(
        () =>
        {
          PXCMCapture.Sample sample = new PXCMCapture.Sample();

          if (!this.device.ReadStreams(this.streamType, sample).Succeeded())
          {
            sample = null;
          }
          return (sample);
        });

      return (task);
    }
    PXCMCapture.StreamType streamType;
    PXCMCapture.Device device;
  }
}
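
The intended consumption pattern is a loop along the lines of this sketch (the names here are mine; the real loop lives in the view model’s RunStreamingLoopAsync below);

```csharp
// Sketch: drive the reader from the UI thread. Each await returns to the
// dispatcher's synchronisation context, so it's safe to touch UI state
// after GetNextSampleAsync completes.
async Task DisplayLoopAsync(ImageReader reader, CancellationToken cancelToken)
{
  while (!cancelToken.IsCancellationRequested)
  {
    PXCMCapture.Sample sample = await reader.GetNextSampleAsync();

    if (sample != null)
    {
      // back on the UI thread - update the on-screen image here.
    }
  }
}
```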

With that in play, I could write the main ViewModel class behind the UI, which is fairly simple other than in a couple of places where I had to spend a bit of time thinking. The class possibly got a bit ‘large’ and needs re-factoring but it’s as below (it derives from a ViewModelBase class which implements INotifyPropertyChanged for it);

namespace HelloRealSense
{
  using System;
  using System.Collections.Generic;
  using System.Linq;
  using System.Threading;
  using System.Threading.Tasks;
  using System.Windows;
  using System.Windows.Media;
  using System.Windows.Media.Imaging;

  class ViewModel : ViewModelBase
  {
    public ViewModel()
    {
      this.session = new Lazy<PXCMSession>(() => PXCMSession.CreateInstance());
    }
    public WriteableBitmap ImageSource
    {
      get
      {
        return (this.imageSource);
      }
      private set
      {
        base.SetProperty(ref this.imageSource, value);
      }
    }
    public IEnumerable<CaptureModuleViewModel> CaptureModuleDescriptions
    {
      get
      {
        var implDescs = RSEnumerationHelper.QueryValuesWithDescription(
          new PXCMSession.ImplDesc()
          {
            group = PXCMSession.ImplGroup.IMPL_GROUP_SENSOR,
            subgroup = PXCMSession.ImplSubgroup.IMPL_SUBGROUP_VIDEO_CAPTURE
          },
          this.session.Value.QueryImpl
        );
        var viewModels = implDescs.Select(i => new CaptureModuleViewModel(i));

        return (viewModels);
      }
    }
    public IEnumerable<DeviceDescriptionViewModel> DeviceDescriptions
    {
      get
      {
        IEnumerable<DeviceDescriptionViewModel> viewModels = null;

        if (this.selectedCaptureModule != null)
        {
          var descriptions = RSEnumerationHelper.QueryValues<PXCMCapture.DeviceInfo>(
            this.selectedCaptureModule.QueryDeviceInfo);

          viewModels = descriptions.Select(d => new DeviceDescriptionViewModel(d));
        }
        return (viewModels);
      }
    }
    public IEnumerable<StreamProfileViewModel> DeviceStreamingProfiles
    {
      get
      {
        IEnumerable<StreamProfileViewModel> viewModels = new List<StreamProfileViewModel>();

        if (this.selectedCaptureDevice != null)
        {
          for (int i = 0; i < PXCMCapture.STREAM_LIMIT; i++)
          {
            var streamType = PXCMCapture.StreamTypeFromIndex(i);

            var profilesForStreamType =
              RSEnumerationHelper.QueryValuesWithDescription<PXCMCapture.StreamType, PXCMCapture.Device.StreamProfileSet>(
                streamType, this.selectedCaptureDevice.QueryStreamProfileSet);

            viewModels = viewModels.Concat(
              profilesForStreamType.Select(p => new StreamProfileViewModel(p, streamType)));
          }
        }
        return (viewModels);
      }
    }
    public DeviceDescriptionViewModel SelectedDeviceDescription
    {
      get
      {
        return (this.selectedDeviceDescription);
      }
      set
      {
        base.SetProperty(ref this.selectedDeviceDescription, value);
        this.SelectedDeviceDescriptionChanged();
      }
    }
    void SelectedDeviceDescriptionChanged()
    {
      if (this.selectedCaptureDevice != null)
      {
        this.selectedCaptureDevice.Dispose();
        this.selectedCaptureDevice = null;
      }
      if (this.selectedDeviceDescription != null)
      {
        this.RecreateDevice();
      }
      base.OnPropertyChanged("DeviceStreamingProfiles");
    }
    public CaptureModuleViewModel SelectedCaptureModuleDescription
    {
      get
      {
        return (this.selectedCaptureModuleDescription);
      }
      set
      {
        base.SetProperty(ref this.selectedCaptureModuleDescription, value);
        this.SelectedModuleDescriptionChanged();
      }
    }
    void SelectedModuleDescriptionChanged()
    {
      if (this.selectedCaptureModule != null)
      {
        this.selectedCaptureModule.Dispose();
        this.selectedCaptureModule = null;
      }
      if (this.selectedCaptureModuleDescription != null)
      {
        PXCMCapture localCapture;

        if (this.session.Value.CreateImpl<PXCMCapture>(
            this.selectedCaptureModuleDescription.ImplDesc,
            out localCapture).Succeeded())
        {
          this.selectedCaptureModule = localCapture;
        }
      }
      base.OnPropertyChanged("DeviceDescriptions");
    }
    public StreamProfileViewModel SelectedStreamProfile
    {
      get
      {
        return (this.selectedStreamProfile);
      }
      set
      {
        base.SetProperty(ref this.selectedStreamProfile, value);
        this.SelectedStreamProfileChanged();
      }
    }
    async void SelectedStreamProfileChanged()
    {
      await this.StopStreamingAsync();

      if (this.selectedStreamProfile != null)
      {
        // TODO: Not sure I understand stream profile sets properly yet. What I find here is
        // that if I have a device that has used COLOR profile 1, DEPTH profile 2, IR
        // profile 3 then I can't change that device to use any other color profile or
        // depth profile. It's as though it gets 'stuck' on those profiles.
        // So...I'm destroying the device here 'just' so that I can have a chance
        // of changing its profile. I have tried the call to SetAllowProfileChange()
        // and I've tried ResetProperties() but nothing seems to work as of yet.
        if (this.currentDeviceNeedsRecreating)
        {
          this.RecreateDevice();
        }
        var result = this.selectedCaptureDevice.SetStreamProfileSet(
          this.selectedStreamProfile.StreamProfileSet);

        this.currentDeviceNeedsRecreating = true;

        if (result.Succeeded())
        {
          // fire and forget on this one.
          this.StartStreamingAsync();
        }
      }
    }
    async Task StartStreamingAsync()
    {
      try
      {
        this.streamingLoopCancelTokenSource = new CancellationTokenSource();
        this.streamingLoopTask = this.RunStreamingLoopAsync(this.streamingLoopCancelTokenSource.Token);
        await this.streamingLoopTask;
      }
      catch (OperationCanceledException)
      {
        // it's ok, we expect to be cancelled.
      }
    }
    async Task StopStreamingAsync()
    {
      if (this.streamingLoopCancelTokenSource != null)
      {
        this.streamingLoopCancelTokenSource.Cancel();

        try
        {
          // make sure it's stopped...
          await this.streamingLoopTask;
        }
        catch (OperationCanceledException)
        {
          this.streamingLoopCancelTokenSource.Dispose();
          this.streamingLoopCancelTokenSource = null;
          this.streamingLoopTask = null;
          this.ImageSource = null;
        }
      }
    }
    async Task RunStreamingLoopAsync(CancellationToken cancelToken)
    {
      // If we are streaming and the user changes (e.g.) the capture module or the device
      // then that will ripple binding changes such that at some point the SelectedStreamProfile
      // will be changed to NULL.
      // When that happens, the cancellation token passed here will be signalled and this task
      // will give up the ghost. However...because we're letting binding do the work, it means
      // that this function can't rely on class member variables that may get changed by
      // the 'ripple' that lasts until the streaming profile is changed - i.e. this function
      // will run on a little 'longer' than it theoretically should...
      ImageReader reader = new ImageReader(this.selectedCaptureDevice, this.selectedStreamProfile.StreamType);
      var streamType = this.selectedStreamProfile.StreamType;

      while (!cancelToken.IsCancellationRequested)
      {
        var sample = await reader.GetNextSampleAsync();

        if (sample != null)
        {
          PXCMImage image = sample.GetImageForStreamType(streamType);
          PXCMImage.ImageData imageData;

          if (image.AcquireAccess(
            PXCMImage.Access.ACCESS_READ, PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32, out imageData).Succeeded())
          {
            this.EnsureImageSourceCreated(image);

            this.imageSource.WritePixels(
              this.imageDimensions,
              imageData.planes[0],
              this.imageSource.PixelWidth * this.imageSource.PixelHeight * 4,
              this.imageSource.PixelWidth * 4);

            image.ReleaseAccess(imageData);
          }
        }
      }
      cancelToken.ThrowIfCancellationRequested();
    }
    void RecreateDevice()
    {
      if (this.selectedCaptureDevice != null)
      {
        this.selectedCaptureDevice.Dispose();
      }
      this.selectedCaptureDevice =
        this.selectedCaptureModule.CreateDevice(this.selectedDeviceDescription.DeviceInfo.didx);

      this.currentDeviceNeedsRecreating = false;
    }
    void EnsureImageSourceCreated(PXCMImage image)
    {
      if (this.imageSource == null)
      {
        this.ImageSource = new WriteableBitmap(
          image.info.width,
          image.info.height,
          96,
          96,
          PixelFormats.Bgra32,
          null);

        this.imageDimensions = new Int32Rect(
          0, 0, image.info.width, image.info.height);
      }
    }
    CancellationTokenSource streamingLoopCancelTokenSource;
    Task streamingLoopTask;
    Int32Rect imageDimensions;
    WriteableBitmap imageSource;
    DeviceDescriptionViewModel selectedDeviceDescription;
    CaptureModuleViewModel selectedCaptureModuleDescription;
    StreamProfileViewModel selectedStreamProfile;
    PXCMCapture selectedCaptureModule;
    PXCMCapture.Device selectedCaptureDevice;
    Lazy<PXCMSession> session;
    bool currentDeviceNeedsRecreating;
  }
}
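
That ViewModelBase class isn’t shown here; a minimal sketch of the kind of base class assumed (this is my version, the one in the download may differ) would be;

```csharp
namespace HelloRealSense
{
  using System.ComponentModel;
  using System.Runtime.CompilerServices;

  class ViewModelBase : INotifyPropertyChanged
  {
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
      var handler = this.PropertyChanged;

      if (handler != null)
      {
        handler(this, new PropertyChangedEventArgs(propertyName));
      }
    }
    // CallerMemberName lets callers write base.SetProperty(ref field, value)
    // from inside a property setter without naming the property explicitly.
    protected bool SetProperty<T>(ref T storage, T value,
      [CallerMemberName] string propertyName = null)
    {
      if (object.Equals(storage, value))
      {
        return (false);
      }
      storage = value;
      this.OnPropertyChanged(propertyName);
      return (true);
    }
  }
}
```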

and the couple of points where this got ‘interesting’ for me are in;

  • the functions StartStreamingAsync, StopStreamingAsync, RunStreamingLoopAsync – these are ‘interesting’ for me because there are places where a simple click in the UI must stop any video streaming because the video module, device or streaming profile has been changed. That’s a bit challenging: the UI is data-bound, which means the change arrives as an invocation of a property setter, and a setter isn’t somewhere I can use ‘async/await’ style patterns, so I go to a bit of work here to try and make that work out.
  • the function RecreateDevice isn’t one that I originally planned to write but, so far, I’ve not figured out how to change the streaming profile on the device once it’s been set. More specifically, I find that once I’ve used the device for COLOR it won’t accept any other COLOR profile, although it will allow a new IR/DEPTH profile; once one of those types of profile has been used, it seems I can’t switch either. Consequently, at the moment the code destroys and recreates the PXCMCapture.Device instance in order to change profiles. I suspect this reflects a lack of understanding on my part of how this is meant to work.
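
Boiled down, the setter-to-async hand-off described in the first bullet looks like this (a simplified sketch of the pattern used in the view model above, not additional SDK machinery);

```csharp
// A data-bound property setter can't be 'async', so it delegates to an
// 'async void' handler, which can then await stopping/starting the stream.
public StreamProfileViewModel SelectedStreamProfile
{
  get { return (this.selectedStreamProfile); }
  set
  {
    base.SetProperty(ref this.selectedStreamProfile, value);
    this.SelectedStreamProfileChanged(); // fire-and-forget from the setter
  }
}
async void SelectedStreamProfileChanged()
{
  await this.StopStreamingAsync(); // awaiting is fine here...

  if (this.selectedStreamProfile != null)
  {
    // ...before (re)configuring the device and starting streaming again.
  }
}
```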

I had a bit of fun putting that together and, in the end, it works ok in that I’ve a WPF app which supports displaying some of these streams.

( I should point out that the SDK already has samples that do this much better, but I strongly believe that you learn stuff by doing it, not by looking at samples and, also, I don’t think there’s any WPF in the samples ).

If anyone (now or in the future) is playing with the SDK, I put the code for this post here for download

Intel RealSense Camera (F200): ‘Hello World’ Part 3

Following up from my previous post, I wanted to experiment a little more with the object model offered by the RealSense SDK. While I’d managed to get some video onto the screen in the previous post via the PXCMSenseManager class, I hadn’t really felt that I’d known what it was doing for me.

With that in mind, I went back to writing a simple console application to see if I could spend a little more time on some of the classes used below the PXCMSenseManager without actually bringing it into play, and I moved on to the PXCMSession object, which the SDK tells me is the place to start if I want to get some configured I/O modules up and running.

I started off by creating one which didn’t feel too challenging;

      using (PXCMSession session = PXCMSession.CreateInstance())
      {

      }

and once I’ve got hold of a session like this, it becomes a form of ‘container’ which can be queried for the modules it contains and then can be asked to instantiate those modules. It feels a little like a specific form of IoC container but I don’t think the analogy is a perfect one.

I have to be honest and say that I found the method by which you execute queries in this SDK to be a bit tedious. It’s not very ‘modern .NET’ in that there’s not a set of enumerables or queryables but, instead, there are APIs that feel to me very much like old-school Windows APIs that do enumeration (like EnumWindows, EnumPrinters type APIs). That is, the query APIs take the form of;

  • (optionally) define some descriptor of what you're looking to find – e.g. which type of module you want
  • call some Query function with that descriptor and an index 0,1,2,3…, passing an out parameter to capture the result
  • wait for the Query function to return an error as that will let you know that you've hit the end of the enumeration

Additionally, the API functions tend not to throw exceptions; they return a pxcmStatus value, which is like an HRESULT, and which also means that a lot of API calls take out parameters, making for longer code. My first attempt at dealing with this was to define these two delegates;

    delegate pxcmStatus DescQueryFunction<T,U>(T desc, int index, out U returnValue);
    delegate pxcmStatus QueryFunction<T>(int index, out T returnValue);

and I defined these functions;

    static IEnumerable<T> Enumerate<T>(QueryFunction<T> queryFunction)
    {
      int i = 0;
      T queryResult;

      while (queryFunction(i++, out queryResult) == pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        yield return queryResult;
      }
    }
    static IEnumerable<U> EnumerateWithDescription<T,U>(T description, DescQueryFunction<T,U> queryFunction)
    {
      int i = 0;
      U queryResult;

      while (queryFunction(description, i++, out queryResult) == pxcmStatus.PXCM_STATUS_NO_ERROR)
      {
        yield return queryResult;
      }
    }

to try and help in generally calling these types of enumeration functions and, with that in place, I can then write some console code which tries to enumerate all the modules on my installation;

      using (PXCMSession session = PXCMSession.CreateInstance())
      {
        // dump all the modules the session knows about
        var description = MakeImplementationDescription(
          PXCMSession.ImplGroup.IMPL_GROUP_ANY, 
          PXCMSession.ImplSubgroup.IMPL_SUBGROUP_ANY);

        var modules = EnumerateWithDescription<PXCMSession.ImplDesc, PXCMSession.ImplDesc>(
          description, session.QueryImpl);

        Console.WriteLine("===All Modules");
        foreach (var module in modules)
        {
          Console.WriteLine(module.friendlyName);
        }
      }

and on my system I see;

[screenshot: console output listing all the modules]

that code makes use of this simple function to create a thing called a PXCMSession.ImplDesc – again, the SDK has some very ‘interesting’ names for types here and it uses a lot of nested types which I don’t find particularly discoverable or even memorable. Regardless, that little utility function is just hiding a constructor call for me;

    static PXCMSession.ImplDesc MakeImplementationDescription(
      PXCMSession.ImplGroup group,
      PXCMSession.ImplSubgroup subgroup)
    {
      return (
        new PXCMSession.ImplDesc()
        {
          group = group,
          subgroup = subgroup
        }
      );
    }

and if I wanted to be more specific and look for modules related to video capture then I believe that I can do it as;

      using (PXCMSession session = PXCMSession.CreateInstance())
      {
        // dump all the modules the session knows about
        var description = MakeImplementationDescription(
          PXCMSession.ImplGroup.IMPL_GROUP_SENSOR, 
          PXCMSession.ImplSubgroup.IMPL_SUBGROUP_VIDEO_CAPTURE);

        var modules = EnumerateWithDescription<PXCMSession.ImplDesc, PXCMSession.ImplDesc>(
          description, session.QueryImpl);

        Console.WriteLine("===Video Capture Modules");
        foreach (var module in modules)
        {
          Console.WriteLine(module.friendlyName);
        }
      }

which tells me;

[screenshot: console output listing the video capture modules]

Once I've got those modules, I can navigate to find a lot more info from them – this larger lump of code;

    static void Main(string[] args)
    {
      // create session
      using (PXCMSession session = PXCMSession.CreateInstance())
      {
        // dump all the modules the session knows about
        var description = MakeImplementationDescription(
          PXCMSession.ImplGroup.IMPL_GROUP_SENSOR, 
          PXCMSession.ImplSubgroup.IMPL_SUBGROUP_VIDEO_CAPTURE);

        var modules = EnumerateWithDescription<PXCMSession.ImplDesc, PXCMSession.ImplDesc>(
          description, session.QueryImpl);

        Console.WriteLine("===Video Capture Modules");
        foreach (var module in modules)
        {
          Console.WriteLine("===Module [{0}]", module.friendlyName);

          // create one of those
          PXCMCapture captureModule;

          if (session.CreateImpl<PXCMCapture>(module, out captureModule) == pxcmStatus.PXCM_STATUS_NO_ERROR)
          {
            Console.WriteLine("  ===Devices");
            var deviceInfos = Enumerate<PXCMCapture.DeviceInfo>(captureModule.QueryDeviceInfo);

            foreach (var deviceInfo in deviceInfos)
            {
              Console.WriteLine("    [{0}]", deviceInfo.name);
              Console.WriteLine("    [{0}]", deviceInfo.model);
              Console.WriteLine("    [{0}]", deviceInfo.location);
              Console.WriteLine("    [{0}]", deviceInfo.orientation);
              Console.WriteLine("    [{0}]", deviceInfo.streams);

              using (var device = captureModule.CreateDevice(deviceInfo.didx))
              {
                for (int i = 0; i < PXCMCapture.STREAM_LIMIT; i++)
                {
                  PXCMCapture.StreamType type = PXCMCapture.StreamTypeFromIndex(i);

                  Console.WriteLine("      ===PROFILES FOR TYPE [{0}]", type);

                  var profiles = EnumerateWithDescription<PXCMCapture.StreamType, PXCMCapture.Device.StreamProfileSet>(
                    type,
                    device.QueryStreamProfileSet);

                  foreach (var profile in profiles)
                  {
                    var relevantSet = profile[type];

                    Console.WriteLine(
                      "        PROFILE: [{0}], [{1}], [{2},{3}]",
                      relevantSet.frameRate.max,
                      relevantSet.imageInfo.format,
                      relevantSet.imageInfo.width,
                      relevantSet.imageInfo.height);
                  }
                }
              }
            }
            captureModule.Dispose();
          }
        }
      }
    }

queries out all the video capture modules, creates them in turn, figures out any devices associated with them and then creates those devices and asks them what streaming profiles they support.

That gives me a long list of streaming profiles as below;

===Video Capture Modules
===Module [IVCam I/O Proxy (Xc)]
  ===Devices
    [Intel(R) RealSense(TM) 3D Camera]
    [DEVICE_MODEL_IVCAM]
    [PXCMPointF32]
    [DEVICE_ORIENTATION_FRONT_FACING]
    [STREAM_TYPE_COLOR, STREAM_TYPE_DEPTH, STREAM_TYPE_IR]
      ===PROFILES FOR TYPE [STREAM_TYPE_COLOR]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [1920,1080]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [1280,720]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [960,540]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [848,480]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [640,360]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [424,240]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [320,240]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [320,180]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [848,480]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [640,360]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [424,240]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [320,240]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [320,180]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [1920,1080]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [1280,720]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [960,540]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [848,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [640,360]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [424,240]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [320,240]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [320,180]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [848,480]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [640,360]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [424,240]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [320,240]
        PROFILE: [60], [PIXEL_FORMAT_RGB24], [320,180]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [1920,1080]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [1280,720]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [960,540]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [848,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [640,360]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [424,240]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [320,240]
        PROFILE: [30], [PIXEL_FORMAT_RGB32], [320,180]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [848,480]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [640,360]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [424,240]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [320,240]
        PROFILE: [60], [PIXEL_FORMAT_RGB32], [320,180]
      ===PROFILES FOR TYPE [STREAM_TYPE_DEPTH]
        PROFILE: [30], [PIXEL_FORMAT_DEPTH], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_DEPTH], [640,240]
        PROFILE: [60], [PIXEL_FORMAT_DEPTH], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_DEPTH], [640,240]
        PROFILE: [110], [PIXEL_FORMAT_DEPTH], [640,240]
      ===PROFILES FOR TYPE [STREAM_TYPE_IR]
        PROFILE: [30], [PIXEL_FORMAT_Y8], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_Y8], [640,240]
        PROFILE: [60], [PIXEL_FORMAT_Y8], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_Y8], [640,240]
        PROFILE: [110], [PIXEL_FORMAT_Y8], [640,240]
        PROFILE: [300], [PIXEL_FORMAT_Y8], [640,480]
        PROFILE: [200], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]
        PROFILE: [100], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]
      ===PROFILES FOR TYPE [STREAM_TYPE_LEFT]
      ===PROFILES FOR TYPE [STREAM_TYPE_RIGHT]
      ===PROFILES FOR TYPE [32]
      ===PROFILES FOR TYPE [64]
      ===PROFILES FOR TYPE [128]
===Module [Video Capture (Media Foundation)]
  ===Devices
    [Intel(R) RealSense(TM) 3D Camera]
    [DEVICE_MODEL_IVCAM]
    [PXCMPointF32]
    [DEVICE_ORIENTATION_FRONT_FACING]
    [STREAM_TYPE_COLOR, STREAM_TYPE_DEPTH, STREAM_TYPE_IR]
      ===PROFILES FOR TYPE [STREAM_TYPE_COLOR]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [1920,1080]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [1280,720]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [960,540]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [848,480]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [640,360]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [424,240]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [320,240]
        PROFILE: [30], [PIXEL_FORMAT_YUY2], [320,180]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [848,480]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [640,480]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [640,360]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [424,240]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [320,240]
        PROFILE: [60], [PIXEL_FORMAT_YUY2], [320,180]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [1920,1080]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [1280,720]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [960,540]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [848,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [640,480]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [640,360]
        PROFILE: [30], [PIXEL_FORMAT_RGB24], [424,240]

PROFILE: [30], [PIXEL_FORMAT_RGB24], [320,240]

PROFILE: [30], [PIXEL_FORMAT_RGB24], [320,180]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [848,480]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [640,480]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [640,360]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [424,240]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [320,240]

PROFILE: [60], [PIXEL_FORMAT_RGB24], [320,180]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [1920,1080]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [1280,720]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [960,540]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [848,480]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [640,480]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [640,360]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [424,240]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [320,240]

PROFILE: [30], [PIXEL_FORMAT_RGB32], [320,180]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [848,480]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [640,480]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [640,360]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [424,240]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [320,240]

PROFILE: [60], [PIXEL_FORMAT_RGB32], [320,180]

===PROFILES FOR TYPE [STREAM_TYPE_DEPTH]

PROFILE: [30], [PIXEL_FORMAT_DEPTH], [640,480]

PROFILE: [30], [PIXEL_FORMAT_DEPTH], [640,240]

PROFILE: [60], [PIXEL_FORMAT_DEPTH], [640,480]

PROFILE: [60], [PIXEL_FORMAT_DEPTH], [640,240]

PROFILE: [110], [PIXEL_FORMAT_DEPTH], [640,240]

===PROFILES FOR TYPE [STREAM_TYPE_IR]

PROFILE: [30], [PIXEL_FORMAT_Y8], [640,480]

PROFILE: [30], [PIXEL_FORMAT_Y8], [640,240]

PROFILE: [60], [PIXEL_FORMAT_Y8], [640,480]

PROFILE: [60], [PIXEL_FORMAT_Y8], [640,240]

PROFILE: [110], [PIXEL_FORMAT_Y8], [640,240]

PROFILE: [300], [PIXEL_FORMAT_Y8], [640,480]

PROFILE: [200], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]

PROFILE: [100], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]

PROFILE: [60], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]

PROFILE: [30], [PIXEL_FORMAT_Y8_IR_RELATIVE], [640,480]

===PROFILES FOR TYPE [STREAM_TYPE_LEFT]

===PROFILES FOR TYPE [STREAM_TYPE_RIGHT]

===PROFILES FOR TYPE [32]

===PROFILES FOR TYPE [64]

===PROFILES FOR TYPE [128]

I said it was a long list! It’s worth saying that these profiles can be combined, so it’s possible to ask the SDK to return COLOR, IR and DEPTH streams at the same time. Not every combination of every profile is going to work, though, and the object model has a method that can be called to ask whether a particular mix of profiles is valid or not.
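As a sketch of that validity check (hedged – I’m assuming the IsStreamProfileSetValid method on PXCMCapture.Device is the one in question, and the field assignments below are illustrative rather than exhaustive), asking whether 640×480 colour and 640×480 depth can be streamed together might look something like;

```csharp
// Sketch only: assumes 'device' is a PXCMCapture.Device created as in the
// code below, and that IsStreamProfileSetValid is the validity method
// referred to above.
var profiles = new PXCMCapture.Device.StreamProfileSet();

// ask for 640x480 colour at 30Hz...
profiles.color.imageInfo.width = 640;
profiles.color.imageInfo.height = 480;
profiles.color.frameRate.min = 30;
profiles.color.frameRate.max = 30;

// ...combined with 640x480 depth at 30Hz.
profiles.depth.imageInfo.width = 640;
profiles.depth.imageInfo.height = 480;
profiles.depth.frameRate.min = 30;
profiles.depth.frameRate.max = 30;

if (device.IsStreamProfileSetValid(profiles))
{
  // safe to call SetStreamProfileSet(profiles) and start reading.
}
```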

Once I have created the device, as in the code above, I can start to acquire frames of data from it and I can also query/set a tonne of properties on it such as;

device.QueryMirrorMode();
device.QueryColorAutoExposure();
device.QueryColorAutoWhiteBalance();

and there are many, many more of these each with a corresponding Set method.
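As a sketch of that Query/Set pairing (assuming a device created as in the code below – I’ve only tried a handful of these properties myself), turning off auto-exposure and mirroring the image might look like;

```csharp
// Sketch: assumes 'device' is an open PXCMCapture.Device.
// Each Query... method seems to have a corresponding Set... method.
var mirrorMode = device.QueryMirrorMode();

// e.g. horizontally mirror the image, webcam-style.
device.SetMirrorMode(
  PXCMCapture.Device.MirrorMode.MIRROR_MODE_HORIZONTAL);

// turn colour auto-exposure off if it's currently on.
if (device.QueryColorAutoExposure())
{
  device.SetColorAutoExposure(false);
}
```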

An important one of these Set methods seems to be SetStreamProfileSet, which appears to tell the device which streaming profiles I'm interested in using to read streamed data. I find that if I don't set this then, on my device, any attempt to read streaming data throws a memory allocation exception. I suspect that might be because the SDK is trying to allocate memory for every different kind of stream available and, as you can see, it’s a long list, so if that’s what it’s doing then I’m not surprised it runs out of memory.

To take that extra step and try to acquire frames from the device, I could change the code to do less enumerating, accept the first module/device/profile that it comes across, and then try to use that to grab data;

static void Main(string[] args)
{
  // create session
  using (PXCMSession session = PXCMSession.CreateInstance())
  {
    // get the first video capture module descriptor.
    var captureModuleDesc =
      EnumerateWithDescription<PXCMSession.ImplDesc, PXCMSession.ImplDesc>(
        MakeImplementationDescription(
          PXCMSession.ImplGroup.IMPL_GROUP_SENSOR,
          PXCMSession.ImplSubgroup.IMPL_SUBGROUP_VIDEO_CAPTURE),
        session.QueryImpl)
      .First();

    PXCMCapture captureModule;

    // create an instance from that descriptor.
    if (session.CreateImpl<PXCMCapture>(captureModuleDesc, out captureModule) ==
      pxcmStatus.PXCM_STATUS_NO_ERROR)
    {
      // get the first device info
      var deviceInfo = Enumerate<PXCMCapture.DeviceInfo>(captureModule.QueryDeviceInfo).First();

      // create an instance
      using (var device = captureModule.CreateDevice(deviceInfo.didx))
      {
        // get the first stream profile set for color.
        PXCMCapture.Device.StreamProfileSet streamProfileSet =
          EnumerateWithDescription<PXCMCapture.StreamType, PXCMCapture.Device.StreamProfileSet>(
            PXCMCapture.StreamType.STREAM_TYPE_COLOR,
            device.QueryStreamProfileSet)
          .First();

        Console.WriteLine("Using stream profile for [{0}Hz], [{1}], [{2} x {3}]",
          streamProfileSet.color.frameRate,
          streamProfileSet.color.imageInfo.format,
          streamProfileSet.color.imageInfo.width,
          streamProfileSet.color.imageInfo.height);

        // filter the device to use that profile set.
        if (device.SetStreamProfileSet(streamProfileSet) == pxcmStatus.PXCM_STATUS_NO_ERROR)
        {
          PXCMCapture.Sample sample = new PXCMCapture.Sample();

          // capture 10 frames from the device.
          for (int i = 0; i < 10; i++)
          {
            // note, this blocks until it gets a frame...
            if (device.ReadStreams(PXCMCapture.StreamType.STREAM_TYPE_COLOR, sample) ==
              pxcmStatus.PXCM_STATUS_NO_ERROR)
            {
              Console.WriteLine("Read frame [{0}] from device", i);
              PXCMImage.ImageData imageData;

              if (sample.color.AcquireAccess(PXCMImage.Access.ACCESS_READ, out imageData) ==
                pxcmStatus.PXCM_STATUS_NO_ERROR)
              {
                Console.WriteLine("Image is in format [{0}]", imageData.format);
                sample.color.ReleaseAccess(imageData);
              }
            }
          }
        }
      }
      captureModule.Dispose();
    }
  }
}

and, sure enough, this does seem to capture 10 frames of COLOR data for me;

image

I learned quite a bit from playing around with the object model here around modules/devices/profiles. I don’t think I have it entirely right in my head at the time of writing but, having taken this step, I then wanted to put a UI back onto it to explore further. I’ll write that up in the next post.

Intel RealSense Camera (F200): ‘Hello World’ Part 2

Picking up from my previous post, I thought I'd see if I could get a simple stream of video data from the RealSense camera.

The SDK docs walk you through the architecture of coding against the camera which is done via a series of configured modules;

clip_image002

with the I/O modules pushing data to/from a device and the algorithm modules doing work like facial recognition and so on.

All the modules derive from this PXCMBase class;

public class PXCMBase : IDisposable
{
  public const int CUID = 0;
  public const int WORKING_PROFILE = -1;
  public IntPtr instance;
  protected int refcount;
  protected static Dictionary<Type, int> Type2CUID;
  public virtual void Dispose();
  public TT QueryInstance<TT>() where TT : PXCMBase;
  public PXCMBase QueryInstance(int cuid);
}

and so a module has some kind of unique identifier (CUID), although I'm not sure what the 'C' stands for. Modules are disposable, and there's a way to navigate from one module to another via the QueryInstance method, which feels a little like a specific version of COM's QueryInterface function.
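As a sketch of that navigation (hedged – exactly which types a given instance can be navigated to will depend on what it actually implements), given any PXCMBase-derived instance called module you can ask whether the same underlying object also offers another of the SDK's types;

```csharp
// Sketch: ask a PXCMBase-derived 'module' whether the underlying instance
// also offers PXCMCapture - much like COM's QueryInterface, this returns
// null when it doesn't.
PXCMCapture capture = module.QueryInstance<PXCMCapture>();

if (capture != null)
{
  // the same instance can be used as a capture module here.
}
```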

The programming model's a bit bewildering at first because it feels like there are lots of ways of doing the same thing with more/less control and granularity. I think the simplest model is to use the PXCMSenseManager as it seems to provide a higher level programming model where a number of bits are pre-configured for your use and so I started there.

I figured I'd write a little bit of 'UI';

<Window x:Class="HelloRealSense.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow"
        Height="350"
        Width="525">
  <Grid>
    <Image x:Name="screenImage" />
    <StackPanel VerticalAlignment="Bottom"
                HorizontalAlignment="Center"
                Orientation="Horizontal">
      <StackPanel.Resources>
        <Style TargetType="Button">
          <Setter Property="Margin"
                  Value="5" />
          <Setter Property="Padding"
                  Value="5" />
        </Style>
        <Style TargetType="RadioButton">
          <Setter Property="Margin"
                  Value="5" />
        </Style>
      </StackPanel.Resources>
      <Button Content="Start"
              Click="OnStartButton" />
      <Button Content="Stop"
              Click="OnStopButton" />
      <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
        <RadioButton Content="Color" x:Name="radioColor" IsChecked="True" />
        <RadioButton Content="IR" x:Name="radioIR" />
        <RadioButton Content="Depth" x:Name="radioDepth"/>
      </StackPanel>
    </StackPanel>
  </Grid>
</Window>

and that then gives me Start/Stop buttons, an image to display things and some radio buttons via which I can choose to display color/depth/infra-red data.

I then sketched out a bit of code. Here's the start of my class with the implementation of the start button handler;

namespace HelloRealSense
{
  using System;
  using System.Collections.Concurrent;
  using System.Windows;
  using System.Windows.Media;
  using System.Windows.Media.Imaging;
  using System.Linq;

  public partial class MainWindow : Window
  {
    Int32Rect sampleImageDimensions;
    ConcurrentQueue<PXCMCapture.Sample> sampleQueue;
    PXCMSenseManager senseManager;
    WriteableBitmap writeableBitmap;
    bool stopped;

    public MainWindow()
    {
      InitializeComponent();
      this.sampleQueue = new ConcurrentQueue<PXCMCapture.Sample>();
    }
    void OnStartButton(object sender, RoutedEventArgs e)
    {
      this.stopped = false;

      this.senseManager = PXCMSenseManager.CreateInstance();

      var radioButtonsAndStreamTypes = new[]
      {
        Tuple.Create(this.radioColor.IsChecked, PXCMCapture.StreamType.STREAM_TYPE_COLOR),
        Tuple.Create(this.radioDepth.IsChecked, PXCMCapture.StreamType.STREAM_TYPE_DEPTH),
        Tuple.Create(this.radioIR.IsChecked, PXCMCapture.StreamType.STREAM_TYPE_IR)
      };
      var streamType = radioButtonsAndStreamTypes.Single(r => (bool)r.Item1).Item2;

      this.senseManager.EnableStream(streamType, 0, 0);

      this.senseManager.Init(
        new PXCMSenseManager.Handler()
        {
          onNewSample = this.OnNewSample
        }
      );
      this.senseManager.StreamFrames(false);
    } 

The startup code is essentially just trying to create an instance of PXCMSenseManager and then ask it to enable either a STREAM_TYPE_COLOR, DEPTH or IR stream based on the currently checked radio button. It makes no requests around the required resolution or frame rate (both of which are options in the call to EnableStream above).
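If I did want to pin those options down, a sketch of that call (assuming the EnableStream overload that takes a width, height and frame rate) asking for 640×480 colour at 30 frames per second would be;

```csharp
// Sketch: explicitly request 640x480 COLOR at 30fps rather than leaving
// the resolution and frame rate to the SDK (the 0, 0 call above).
this.senseManager.EnableStream(
  PXCMCapture.StreamType.STREAM_TYPE_COLOR,
  640,   // width
  480,   // height
  30);   // frames per second
```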

It then initialises the PXCMSenseManager and passes it a handler which contains a delegate pointing to a method to be called whenever a new sample arrives (OnNewSample). It’s a bit odd that this isn’t defined as a .NET event, but there you go.

With that set up, the sense manager is told to StreamFrames (without blocking – the false parameter – as I need WPF to go back to its dispatcher loop).

The OnNewSample function is a simple one in that it queues up items onto a concurrent queue once it's checked that things are still running;

    pxcmStatus OnNewSample(int mid, PXCMCapture.Sample sample)
    {
      pxcmStatus status = pxcmStatus.PXCM_STATUS_PARAM_UNSUPPORTED;

      if (this.stopped)
      {
        status = pxcmStatus.PXCM_STATUS_NO_ERROR;
      }
      else if (mid == PXCMCapture.CUID)
      {
        status = pxcmStatus.PXCM_STATUS_NO_ERROR;

        this.sampleQueue.Enqueue(sample);

        this.Dispatcher.InvokeAsync(this.DrainQueueUIThread);
      }
      return (status);
    }

The function also kicks the dispatcher to call the DrainQueueUIThread function, making sure that queued items get processed on the UI thread. I took this route because OnNewSample seemed to have no notion of capturing the synchronization context. There's definitely an optimisation to be had here: rather than queueing up frames that can't be consumed in a timely fashion, the code could just process the last frame that’s arrived.
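A sketch of that 'just keep the latest frame' optimisation (my own variation, not something taken from the SDK samples) might swap the queue for a single slot that each new sample overwrites, so the UI thread only ever sees the most recent frame;

```csharp
// Sketch: replace the ConcurrentQueue with a single 'latest sample' slot
// that new frames overwrite - older, undrawn frames are simply dropped.
PXCMCapture.Sample latestSample;

pxcmStatus OnNewSampleLatestOnly(int mid, PXCMCapture.Sample sample)
{
  // overwrite whatever is in the slot.
  System.Threading.Interlocked.Exchange(ref this.latestSample, sample);

  this.Dispatcher.InvokeAsync(this.DrawLatestSampleUIThread);

  return (pxcmStatus.PXCM_STATUS_NO_ERROR);
}

void DrawLatestSampleUIThread()
{
  // take the slot, leaving null behind so the same frame isn't drawn twice.
  var sample = System.Threading.Interlocked.Exchange(ref this.latestSample, null);

  if (sample != null)
  {
    // ...acquire access and copy to the WriteableBitmap, as in
    // DrainQueueUIThread below.
  }
}
```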

The function that drains that queue on the UI thread looks like this;

    void DrainQueueUIThread()
    {
      PXCMCapture.Sample sample;

      while (this.sampleQueue.TryDequeue(out sample))
      {
        PXCMImage.ImageData imageData;
        PXCMImage image = PickFirstImageAvailableInSample(sample);

        pxcmStatus status =
          image.AcquireAccess(PXCMImage.Access.ACCESS_READ, PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32, out imageData);

        if (Succeeded(status))
        {
          this.EnsureWriteableBitmapCreated(image.info.width, image.info.height);

          this.writeableBitmap.WritePixels(
            this.sampleImageDimensions,
            imageData.planes[0],
            this.writeableBitmap.PixelWidth * this.writeableBitmap.PixelHeight * 4,
            this.writeableBitmap.PixelWidth * 4);

          image.ReleaseAccess(imageData);
        }
      }
    }

and so it simply tries to drain the queue, acquires access to the image data from the SDK in (convenient) RGB32 format and then copies it across into a WriteableBitmap which we ensure is created in this routine to match the size of whatever image is being displayed;

    void EnsureWriteableBitmapCreated(int width, int height)
    {
      if (this.writeableBitmap == null)
      {
        this.writeableBitmap = new WriteableBitmap(
          width,
          height,
          96,
          96,
          PixelFormats.Bgra32,
          null);

        this.screenImage.Source = this.writeableBitmap;

        this.sampleImageDimensions = new Int32Rect(0, 0, width, height);
      }
    }

Finally, I have another couple of little functions – this one to try and pick out whichever part of a sample is not null whether it be color, depth, IR, etc;

    static PXCMImage PickFirstImageAvailableInSample(PXCMCapture.Sample sample)
    {
      PXCMImage image = sample.color;
      image = image == null ? sample.depth : image;
      image = image == null ? sample.ir : image;
      image = image == null ? sample.left : image;
      image = image == null ? sample.right : image;
      return (image);
    }

and this one to simply check a status code;

    static bool Succeeded(pxcmStatus status)
    {
      return (status == pxcmStatus.PXCM_STATUS_NO_ERROR);
    }

and that's it – naturally, the code's a bit hacky and (e.g.) doesn't enable/disable UI when it needs to but, fundamentally, it grabs colour/depth/IR frames from the camera and displays them – here's the app running displaying depth data;

clip_image004

In taking this approach though I feel like I let a lot of 'higher level' pieces do work on my behalf and so didn't perhaps understand quite what they were doing for me. I'd like to revisit a similar post using the lower level pieces to see if I can figure that out…