Experimenting with Research Mode and Sensor Streams on HoloLens Redstone 4 Preview (Part 2)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.

This is a follow-on from this previous post;

Experimenting with Research Mode and Sensor Streams on HoloLens Redstone 4 Preview

and so please read that post if you want to get the context and, importantly, the various caveats and links that it contains about working with ‘research mode’ in the Redstone 4 Preview on HoloLens.

I updated the code from the previous post to provide what I think is a slightly better experience. I removed the attempt to display multiple streams from the device at the same time and, instead, switched to a model where the app on the device has a notion of the ‘current stream’ that it is sending over the network to the receiving desktop app.

In that desktop app, I can then show the initial stream from the device and allow the user to cycle through the available streams as per the screenshots below. The streams are still not being sent to the desktop at their actual frame rate but, as before, on a timer-based interval which is hard-wired into the HoloLens app for the moment.

Making these changes meant altering the code such that it no longer selects one depth and one infrared stream but, instead, attempts to read from all of the depth, infrared and colour streams. When the desktop app connects, it receives the descriptions of these streams and it then has buttons to notify the remote app to switch to the next/previous stream in its list.
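
For what it’s worth, the control messages for that next/previous switching are tiny. The sketch below is illustrative rather than the actual repo code, and SendBytesAsync is a hypothetical name for the byte-array send method on the message pipe from the previous post;

    // Illustrative only - the real protocol in the repo may differ and
    // SendBytesAsync is a hypothetical name for the pipe's byte[] send method.
    enum StreamCommand : byte
    {
        NextStream = 1,
        PreviousStream = 2
    }

    // Desktop side: ask the HoloLens app to switch to the next stream.
    async Task RequestNextStreamAsync(AutoConnectMessagePipe pipe)
    {
        await pipe.SendBytesAsync(new byte[] { (byte)StreamCommand.NextStream });
    }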

Here’s how that looks across the 8 different streams that I am getting back from the device.

This first one appears to be an environment tracking stream looking more or less ‘straight ahead’, although the image appears to be rotated 90 degrees anti-clockwise;

[Screenshot: stream 1]

This second stream would again appear to be environment tracking taking in a scene that’s to the left of my gaze and again rotated 90 degrees anti-clockwise;

[Screenshot: stream 2]

This next stream is a depth view, looking forward although it can be hard to see much in there without movement to help out.

I’m not sure that I’m building the description of this stream correctly because my code says 15fps whereas the documentation seems to suggest that depth streams run at either 1fps or 30fps, so perhaps I have a bug here. This depth stream feels like it has a wider aperture, so perhaps it is the stream which the docs describe as;

“one for high-frequency (30 fps) near-depth sensing, commonly used in hand tracking”

but that’s only a guess based on what I can visually see in this stream;

[Screenshot: stream 3]

and the next stream that I get is an infrared stream at 3 fps with what feels like a narrow aperture;

[Screenshot: stream 4]

with the follow-on stream being depth again at what feels like a narrow aperture;

[Screenshot: stream 5]

and then I have an environment view to the right side of my gaze rotated 90 degrees anti-clockwise;

[Screenshot: stream 6]

and another environment view which feels more or less ‘straight ahead’, rotated 90 degrees anti-clockwise;

[Screenshot: stream 7]

and lastly an infrared view at 3 fps with what feels like a wider aperture;

[Screenshot: stream 8]

This code feels a bit more ‘usable’ than what I had at the end of the previous blog post and I’ve tried to make it a little more resilient such that should one end of the connection drop, the other app should pause and be capable of reconnecting when its peer returns.

The code for this is committed to master in the same repo as I had in the previous post;

https://github.com/mtaulty/ExperimentalSensorApps

Feel free to take that, experiment with it yourself and so on but keep in mind that it’s a fairly rough experiment rather than some polished sample.

Experimenting with Research Mode and Sensor Streams on HoloLens Redstone 4 Preview

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.

Previews, Research and Experiments

I recently installed the Redstone 4 Preview onto a HoloLens as documented here;

HoloLens RS4 Preview

and one of the many things that interested me around what was present in the preview was the piece about ‘research mode’ which (from the docs);

“Allows developers to access key HoloLens sensors when building academic and industrial applications to test new ideas in the fields of computer vision and robotics”

and the docs then detail the sensors as;

  • “The four environment tracking cameras used by the system for spatial map building and head-tracking.

  • Two versions of the depth mapping camera data – one for high-frequency (30 fps) near-depth sensing, commonly used in hand tracking, and the other for lower-frequency (1 fps) far-depth sensing, currently used by spatial mapping.

  • Two versions of an IR-reflectivity stream, used to compute depth, but valuable in its own right as these images are illuminated from the HoloLens and reasonably unaffected by ambient light.”

So, it sounds like there’s a possibility of 8 streams of data there and developers have been asking about access to these streams for some time as in this forum question;

Will we have access to the depth sensors, IR cameras, and RGB cameras data streams?

and prior to the RS4 preview the answer was “not possible” but it looks like the preview has some experimental support for getting access to these streams.

That said, in order to switch this on a developer has to (from the docs);

“First, ensure “Use developer features” and “Enable Device Portal” are set to On in Settings > Update & Security > For developers on HoloLens. Next, on a desktop PC, use Device Portal to access your HoloLens through a web browser, expand System, select Research mode, and check the box next to “Allow access to sensor streams.” Reboot your HoloLens for the settings to take effect.


Note: Apps built using Research mode cannot be submitted to the Microsoft Store.”

and if you visit that device portal and switch this setting to allow “Research Mode” then you’ll notice that it says;

[Screenshot: Device Portal research mode warning]

So the guidance here is pretty strong and says that this setting will damage performance, is not recommended except for active research, and will mean that an application using it cannot be submitted to the Microsoft Store.

With all of those caveats in mind, I wanted to try this out and see if I could get some data from the device and so I started to write some code.

Before getting there, I want to re-state that the code here is just my own work and is likely to be quite rough and experimental. There are official samples coming in this area later in the month, so keep your eye on the URL that the device portal points you to;

https://aka.ms/hololensresearchmode

for official updates. Meanwhile, on with my rough work which I’ve actually attempted before…

Previous Attempts at Accessing Sensor Data

I’d had a look at this type of stream access in this post;

InfraredFrameSources–Access to Camera Streams

where I was trying to use the UWP media capture pieces (e.g. MediaCapture, MediaFrameReader, MediaFrameSourceGroup etc.) in order to get access to sensor streams. At that time, I only came away with a media frame source group called MN34150, which I think represents the built-in webcam on the device, and it didn’t surface any depth or infrared streams nor streams from the other 4 environment sensing cameras on the device.

That had proven to be a dead-end at the time on the Anniversary Update but I thought that I could use the same classes/techniques to try again in the light of the RS4 Preview…

A New Attempt at Accessing Sensor Streams from a 2D UWP App

I wanted to start fairly small and so I wondered whether I might write an app for HoloLens which would access 1 or 2 of these new streams and send the data from them on some frequency over the network to some other (desktop) app which would display them.

I thought I’d begin with a 2D app as I find the development time quicker there than working in 3D and so I spun up a new XAML based 2D UWP app on SDK version 17125 (I think 17133 may also be out by the time of writing so keep that in mind).

To speed things up a little further, I borrowed some socket code from this previous post;

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library

That post contained some code where I used Bluetooth LE advertising in order to connect sockets across 2 devices without any need to manually enter (or assume) IP addresses or ports – one device creates a socket and advertises its details over Bluetooth LE and the other device finds the advertisement and (assuming some common network) connects a socket to the address/port combination advertised. In that post, the main class that I wrote was named AutoConnectMessagePipe and I gave it some capability around sending raw byte arrays, strings and serialized objects but for my purposes in this experiment, I have stripped the code back to just send byte arrays back and forth.

In my new app for this post, that code runs at start-up time and looks something like this;

            // We are going to advertise our socket details over bluetooth
            this.messagePipe = new AutoConnectMessagePipe(true);

            // We wait for someone else to see them and connect to us
            await this.messagePipe.WaitForConnectionAsync(
                TimeSpan.FromMilliseconds(-1));

Once the call to WaitForConnectionAsync completes, we should have a connected client ready to talk down the socket to our app on HoloLens and receive some media frames from the device.

Using these pieces means that my HoloLens project would need capabilities specified in its application manifest for Bluetooth and probably internet (client/server) and private networks. I also figured that it might well need the webcam capability and the spatial perception capability too.

With that added to my manifest, I started to write some code that would let me get access to all the media frame source groups on the device and you can see in the screenshot below that code coming back with the new “Sensor Streaming” media frame source group;

[Screenshot: the “Sensor Streaming” media frame source group returned by the code]

and that seemed fine but, when I came to the code which tried to create a MediaCapture using this source, I hit a bit of a snag – the device was raising a dialog asking for access to the camera but then it was crashing;

[Screenshot: camera access dialog on the device]

and I figured that having the spatial perception capability in my app manifest mustn’t be enough to switch on access to these streams and so, perhaps, there was some new capability that allowed access?

I checked out the list of capabilities in the docs;

App capability declarations

and couldn’t find anything there – that doc is really good and partitions capabilities into different groups but it maybe hasn’t been updated yet for the preview and, as far as I know, that set of capabilities maps fairly literally onto the registry key;

[Screenshot: capability registry key]

and so I had a look at the capabilityClass_Restricted key on my RS4 preview machine and compared the contents of the key named MemberCapability to the one on my Fall Creators Update machine. The list looks to contain some new restricted capabilities;

broadFileSystemAccess, deviceIdentityManagement, lpacIME, lpacPackageManagerOperation, perceptionSensorsExperimental, smbios, systemDialog, thumbnailCache, timezone, userManagementSystem, webPlatformMediaExtension

and so I figured that the one I would need was likely to be perceptionSensorsExperimental, which I added to my app manifest within the restricted section (as per that earlier doc on how to add restricted capabilities) as below;

  <Capabilities>
    <Capability Name="internetClient" />
    <Capability Name="internetClientServer" />
    <Capability Name="privateNetworkClientServer" />
    <uap2:Capability Name="spatialPerception" />
    <rescap:Capability Name="perceptionSensorsExperimental" />
    <DeviceCapability Name="microphone" />
    <DeviceCapability Name="webcam" />
    <DeviceCapability Name="bluetooth" />
  </Capabilities>
That manifest is probably overkill for what I need here but adding that extra capability allowed my MediaCapture to initialise ok;

[Screenshot: MediaCapture initialising successfully]

and so I can make progress. I wasn’t quite “ready” to write code which would handle all of the available streams, so I decided to try and access a single depth stream and a single infrared stream as a starting point. Consequently, my code has an array of the stream types that it wants to access;
            var frameSourceKinds = new MediaFrameSourceKind[]
            {
                MediaFrameSourceKind.Depth,
                MediaFrameSourceKind.Infrared
            };

and I wrote this little class;

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture.Frames;

namespace App1
{
    static class MediaSourceFinder
    {
        public static async Task<MediaFrameSourceGroup> FindGroupsWithAllSourceKindsAsync(
            params MediaFrameSourceKind[] sourceKinds)
        {
            var groups = await MediaFrameSourceGroup.FindAllAsync();

            var firstGroupWithAllSourceKinds =
                groups.FirstOrDefault(
                    g => sourceKinds.All(k => g.SourceInfos.Any(si => si.SourceKind == k)));

            return (firstGroupWithAllSourceKinds);
        }
        public static List<string> FindSourceInfosWithMaxFrameRates(
            MediaFrameSourceGroup sourceGroup, params MediaFrameSourceKind[] sourceKinds)
        {
            var listSourceInfos = new List<string>();

            foreach (var kind in sourceKinds)
            {
                var sourceInfos =
                    sourceGroup.SourceInfos.Where(s => s.SourceKind == kind);

                var maxInfo = sourceInfos.OrderByDescending(
                    si => si.VideoProfileMediaDescription.Max(
                        msd => msd.FrameRate * msd.Height * msd.Width)).First();

                listSourceInfos.Add(maxInfo.Id);
            }
            return (listSourceInfos);
        }
    }
}

which provides some limited helpers which let me take that array of MediaFrameSourceKind[] (depth/infrared) and attempt to;

  • find the first MediaFrameSourceGroup which claims that it can do all of the types I’m interested in (i.e. depth + infrared).
  • from that MediaFrameSourceGroup find the media source Ids of the “best” sources for depth, infrared.
    • here, “best” is arbitrarily chosen as the highest product of frame rate * width * height just so that I end up with one depth stream and one IR stream rather than many.

Those bits of code are enough to enable me to instantiate a MediaCapture for the source group;

                // Note2: I've gone with Cpu here rather than Gpu because I ultimately
                // want a byte[] that I can send down a socket. If I go with Gpu then
                // I get an IDirect3DSurface but (AFAIK) there's not much of a way
                // to get to a byte[] from that other than to copy it into a 
                // SoftwareBitmap and then to copy that SoftwareBitmap into a byte[]
                // which I don't really want to do. Hence - Cpu choice here.
                await this.mediaCapture.InitializeAsync(
                    new MediaCaptureInitializationSettings()
                    {
                        SourceGroup = firstSourceGroupWithSourceKinds,
                        MemoryPreference = MediaCaptureMemoryPreference.Cpu,
                        StreamingCaptureMode = StreamingCaptureMode.Video
                    }
                );

and once I have a MediaCapture, I can use it to open MediaFrameReader instances for the sources that I am interested in, giving me a frame reader for each of the streams.

I initially tried to do this by using MediaCapture.CreateMultiSourceFrameReaderAsync in order to have a single reader which gathered all the frames, but this seemed to throw exceptions on me. So, I switched to using the regular CreateFrameReaderAsync() on each of the sources separately, which seemed to work fine for me although it doesn’t have the ability to ‘synchronise’ frames in the way that the multi-source reader might.
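
As a rough sketch (not the exact code from the repo), creating those per-source readers looks something like the below, assuming the source Ids that came back from the MediaSourceFinder helper above;

            var frameReaders = new List<MediaFrameReader>();

            foreach (var sourceId in sourceIds)
            {
                // MediaCapture.FrameSources is keyed by the source info Id.
                var frameSource = this.mediaCapture.FrameSources[sourceId];

                var frameReader =
                    await this.mediaCapture.CreateFrameReaderAsync(frameSource);

                // Readers need starting before any frames will flow, even if
                // we later just poll them with TryAcquireLatestFrame().
                await frameReader.StartAsync();

                frameReaders.Add(frameReader);
            }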

Once I had readers open on a couple of streams, I quickly realised that they were going to fire back “quite a lot of data” and that simply handling the FrameArrived event and passing the frame data over the network would eat my WiFi bandwidth.

Specifically, it seemed that I had selected depth streams firing at 30fps with either 8 or 16 bits per pixel at a resolution of 448*450 pixels. That meant that even with just 2 streams I would be trying to copy maybe ~20MB a second over the network which didn’t seem like a great idea.
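
As a rough sanity check on that figure: 448 x 450 is around 200,000 pixels per frame, so a 16 bits-per-pixel stream at 30fps works out at roughly 200,000 x 2 x 30 ≈ 12MB per second and an 8 bits-per-pixel stream at around half of that, which puts the two streams together somewhere in the 18-20MB per second region.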

Based on that, I decided that rather than try to handle every FrameArrived event, I would instead just install a timer which ticked on some interval, attempted to get the latest frame from each of the readers and sent it over the network.
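
In sketch form (again, not the exact repo code) that timer-based polling looks something like the below, assuming the frameReaders list from the sketch above; the interval value is illustrative and SendFrameOverNetworkAsync is a hypothetical stand-in for the code that packages up and sends a frame;

            this.timer = new DispatcherTimer()
            {
                // The interval value here is illustrative - the real app
                // hard-wires its own value.
                Interval = TimeSpan.FromMilliseconds(500)
            };
            this.timer.Tick += async (s, e) =>
            {
                foreach (var frameReader in this.frameReaders)
                {
                    // Grab whatever the latest frame is right now (it may be null).
                    using (var frameReference = frameReader.TryAcquireLatestFrame())
                    {
                        var bitmap = frameReference?.VideoMediaFrame?.SoftwareBitmap;

                        if (bitmap != null)
                        {
                            // Hypothetical method - packages the frame as per the
                            // header + raw buffer description below and sends it.
                            await this.SendFrameOverNetworkAsync(
                                frameReference.VideoMediaFrame);
                        }
                    }
                }
            };
            this.timer.Start();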

This seemed to work out “ok” although the code I have was put together pretty quickly and so is rough and not very resilient to failure. It lives in this app in the solution;

[Screenshot: the HoloLens app project in the solution]

There is largely just a XAML-based UI which displays a count of how many IR and how many depth frames the app thinks it has sent over the network. Beyond that, there’s some code-behind plus a couple of supporting classes, along with a dependency on the SharedCode project which provides the routines for establishing the socket communications and some common code around manipulating the buffers.

The “UI” ends up being a rather undramatic screen;

[Screenshot: the app’s UI running on HoloLens]

In terms of the buffers, I make no attempt to compress them or anything like that and I simply send them over the network prefixed with a header including the size, buffer type (depth/infrared), width and height. I do not attempt to encode them as PNG/JPEG or similar but just leave them in their raw format, which for these 2 streams is Gray8 and Gray16.
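
For illustration, the sort of packing involved is along these lines (the exact header layout and field ordering in the repo may differ) using a BinaryWriter from System.IO;

    static byte[] PackFrame(byte frameType, int width, int height, byte[] pixelBytes)
    {
        using (var memoryStream = new MemoryStream())
        using (var writer = new BinaryWriter(memoryStream))
        {
            writer.Write(pixelBytes.Length);    // size of the payload
            writer.Write(frameType);            // depth or infrared
            writer.Write(width);
            writer.Write(height);
            writer.Write(pixelBytes);           // raw Gray8/Gray16 pixels
            writer.Flush();

            return memoryStream.ToArray();
        }
    }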

A Companion 2D Desktop App

On the desktop side, I made a second UWP XAML based app and added it to the solution and gave it a dependency on the SharedCode folder so that it could also use the socket and buffer-access routines.

[Screenshot: the desktop app project in the solution]

This app displays a blank UI with a couple of XAML Images whose Source properties are set to instances of SoftwareBitmapSource.

On start up, the app waits for a Bluetooth LE advertisement such that it can automatically connect to the socket listening on the HoloLens.

Once connected, the app picks up the frames sent down the wire, interprets them as depth/infrared and turns them into SoftwareBitmap instances in BGRA8 format such that it can update the XAML Images with the new bitmaps by simply using SoftwareBitmapSource.SetBitmapAsync().
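
As a sketch of what that involves for an 8-bit grayscale frame (the real code in the repo also handles Gray16 and may differ in the details, and DisplayGray8FrameAsync is just my name here), the app expands the grayscale bytes into a BGRA8 buffer and builds a SoftwareBitmap from it – this relies on the AsBuffer() extension from System.Runtime.InteropServices.WindowsRuntime;

    async Task DisplayGray8FrameAsync(byte[] grayPixels, int width, int height,
        SoftwareBitmapSource imageSource)
    {
        // Expand each grayscale byte into a B,G,R,A quad.
        var bgraPixels = new byte[width * height * 4];

        for (int i = 0; i < grayPixels.Length; i++)
        {
            byte gray = grayPixels[i];
            bgraPixels[i * 4 + 0] = gray;   // B
            bgraPixels[i * 4 + 1] = gray;   // G
            bgraPixels[i * 4 + 2] = gray;   // R
            bgraPixels[i * 4 + 3] = 0xFF;   // A
        }

        var bitmap = SoftwareBitmap.CreateCopyFromBuffer(
            bgraPixels.AsBuffer(),
            BitmapPixelFormat.Bgra8,
            width,
            height,
            BitmapAlphaMode.Premultiplied);

        // imageSource is the SoftwareBitmapSource backing one of the XAML Images.
        await imageSource.SetBitmapAsync(bitmap);
    }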

There’s not too much going on in this app and it could do with a little more “UI” and some resilience around the socket connection dropping, but it seems to fundamentally “work” in that frames come over the network and get displayed.

Here’s a quick screenshot – the depth data on the left is (I think) coming from the 30fps near-depth camera. It’s perhaps only just visible here, so maybe I need to process it to brighten it up a little for display, but I can see what it’s showing on my monitor;

[Screenshot: the desktop app displaying depth (left) and infrared (right) frames]

and the IR on the right is much clearer to see.

So, it’s not going to win any UX or implementation awards but it seems to “just about work”.

What’s Next?

I’ve only really had a chance to glance at this and take a first step, but I’m pleased that I was able to grab frames so easily. It would be “nice” to put a communication protocol between the 2 apps here such that the desktop app could “ask” for different streams, perhaps at different intervals, and it’d also be “nice” to display some of the other streams, so perhaps I’ll look into that for subsequent posts and follow up with some modifications.

Where’s the Code?

The code for this post is pretty rough and experimental so please keep that in mind but I shared it here on github;

http://github.com/mtaulty/ExperimentalSensorApps

so feel free to take, explore and fix etc.

Windows 10 Anniversary Update Preview–Composition and Connected Animations

A short post on the connected animation service that I looked at in Windows Composition on the 14367 preview build and the new 14366 SDK today.

In case you’re curious, I do realise that with these posts on composition, I’m slowly working my way through some of the scenarios that the excellent GitHub samples from the composition team already cover with much more rigour and style than I do.

However, I find that there’s a big difference between opening up a sample, seeing the output and thinking “ok, I get that” versus starting from scratch and putting something together yourself and, generally, I learn better doing the latter because I make a bunch of mistakes that cause me to think about what I’m actually doing and how things might be working.

While I’d seen the “Connected Animations” topic in the newsletter here, I didn’t realise until yesterday that it was an actual thing surfaced by the ConnectedAnimationService class living in Windows.UI.Xaml.Media.Animation.

I knocked together a quick app which displays some pictures on one page and allows the user to click a picture to navigate to a second page which displays it full screen. Here’s that app running;

I then added a connected animation such that the image on page 1 animates to its new position on page 2 and it then runs like this (assuming the animations come out reasonably on the screen capture here);

It was pretty easy to add that animation here even though it deals with content across the “page” boundary in the XAML application, although I think it’s useful to remember that;

  • UWP applications (XAML or not) don’t have to be made up of Pages hosted within Frames.
  • It’s always possible to grab the Window/Frame (or some other container that you have parented them off) in your application and play with the layering to achieve results that cross page boundaries with or without the composition APIs.

Putting that to one side, the app I made here is very, very simple. It just has a main page UI;

<Page
  x:Class="App5.MainPage"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:local="using:App5"
  xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
  xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
  mc:Ignorable="d">

  <Grid
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <GridView>
      <GridView.Items>
        <x:String>http://www.disney.co.uk/muppets/cms_res/images/download_pics/wallpapers/kermit-wallpaper-1920x1200.jpg</x:String>
        <x:String>http://www.disney.co.uk/muppets/cms_res/images/download_pics/wallpapers/animal-wallpaper-1920x1200.jpg</x:String>
        <x:String>http://www.disney.co.uk/muppets/cms_res/images/download_pics/wallpapers/fozzie-wallpaper-1920x1200.jpg</x:String>
        <x:String>http://www.disney.co.uk/muppets/cms_res/images/download_pics/wallpapers/gonzo-wallpaper-1920x1200.jpg</x:String>
      </GridView.Items>
      <GridView.ItemTemplate>
        <DataTemplate>
          <Button
            Template="{x:Null}"
            Click="OnNavigate"
            Tag="{Binding}">
            <Image
              Source="{Binding}"
              Width="300"
              Stretch="Uniform" />
          </Button>
        </DataTemplate>
      </GridView.ItemTemplate>
    </GridView>

  </Grid>
</Page>

and I know that it’s pretty nasty to bind up the Tag in the way that I have but it’s “easy” so I did it. The code-behind then just navigates based on that;

using System;
using System.Numerics;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Hosting;
using Windows.UI.Xaml.Media.Animation;

namespace App5
{
  public sealed partial class MainPage : Page
  {
    public MainPage()
    {
      this.InitializeComponent();
    }
    void OnNavigate(object sender, RoutedEventArgs e)
    {
      var button = sender as Button;
      var parameter = button.Tag as String;
      this.Frame.Navigate(typeof(PicturePage), parameter);      
    }
  }
}

and then I have a PicturePage with an Image in it;

<Page
  x:Class="App5.PicturePage"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:local="using:App5"
  xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
  xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
  mc:Ignorable="d">

  <Grid
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Image
      x:Name="image"
      Source="{Binding}" />
  </Grid>
</Page>

and a bit of code behind;

using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media.Animation;
using Windows.UI.Xaml.Navigation;

namespace App5
{
  public sealed partial class PicturePage : Page
  {
    public PicturePage()
    {
      this.InitializeComponent();
    }
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
      base.OnNavigatedTo(e);
      this.DataContext = (string)e.Parameter;
    }
  }
}

and that’s pretty much that (ugly though it may be).

In order to then add in the notion that the image from the first page should animate across to the second page I don’t have to do very much. On the first page, I simply add some code to the click handler of the button before the navigation is done;

   void OnNavigate(object sender, RoutedEventArgs e)
    {
      var button = sender as Button;
      var parameter = button.Tag as String;

      var service = ConnectedAnimationService.GetForCurrentView();

      service.PrepareToAnimate("SelectedMuppet", (Image)button.Content);

      this.Frame.Navigate(typeof(PicturePage), parameter);      
    }

and so we use the static ConnectedAnimationService.GetForCurrentView() to get a service and then we tell it which piece of content is going to animate from this page to the next page – in this case that’s the image which is the content of the button.

On the ‘receiving’ end I just need to do a little more work in the OnNavigatedTo override;

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
      base.OnNavigatedTo(e);
      this.DataContext = (string)e.Parameter;

      var animation = ConnectedAnimationService.GetForCurrentView();
      animation.GetAnimation("SelectedMuppet").TryStart(this.image);
    }

so I get the service again and tell it to start the specific animation (so that I can have more than one) and give it the element that represents the “destination” here.

Or…so I thought. That doesn’t actually work and this is where those samples become really useful to someone like me – a quick look at the relevant sample code illustrated that I might have to wait until the image was actually opened before trying to start that animation. There’s a comment in the sample that this won’t be the case in later previews but, for now, I added an ImageOpened handler to my Image, moved the bottom two lines of code above into that handler and all was good.
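
In sketch form, that ends up being something like the handler below wired up to the Image element’s ImageOpened event in the PicturePage (plus a using for Windows.UI.Xaml to bring in RoutedEventArgs);

    void OnImageOpened(object sender, RoutedEventArgs e)
    {
      // Only start the connected animation once the image content has loaded.
      var animation = ConnectedAnimationService.GetForCurrentView();
      animation.GetAnimation("SelectedMuppet").TryStart(this.image);
    }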

I like this. It’s short, it’s sweet, it’s clear what it does and there’s not much to get your head around to enable an effect which makes even the simplest app feel better. It’s worth saying that the duration of the animation here and the easing function applied to it can also be tweaked very easily.
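
For example, something like this would tweak both – the duration and the easing control points here are purely illustrative, and I’m assuming the preview SDK surfaces the DefaultDuration and DefaultEasingFunction properties as the docs describe (this would sit in the MainPage code-behind, which already has the relevant usings);

      var service = ConnectedAnimationService.GetForCurrentView();

      // Purely illustrative values.
      service.DefaultDuration = TimeSpan.FromMilliseconds(600);

      // DefaultEasingFunction wants a CompositionEasingFunction, so we need a
      // Compositor to create one - here borrowed from a visual on the page.
      var compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;

      service.DefaultEasingFunction = compositor.CreateCubicBezierEasingFunction(
        new Vector2(0.4f, 0.0f), new Vector2(0.2f, 1.0f));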