Windows 10 Creators Update, UWP Apps–An Experiment with Streaming Installations

I’ve been slowly trying to catch up with what happened at //build via the videos on Channel9, focusing mainly on the topics around Windows 10 UWP and Mixed Reality with a sprinkling of what’s going on in .NET, C# and some of the pieces around Cognitive Services, identity and so on.

Given that //build was a 3-day conference, it generated a video wall’s worth of content which takes a very long time to catch up with, so I expect I’ll still be doing some of this catching up over the coming weeks and months, but I’m slowly making progress.

One of the many sessions that caught my eye was this one on changes to the packaging of UWP apps;

image

which talks about some of the changes that have been made in the Creators Update around breaking up UWP packages into pieces such that they can be installed more intelligently, dynamically and flexibly than perhaps they can today.

It’s well worth watching the session but if I had to summarise it I’d say that it covers;

  • How packages have always been able to be broken into pieces containing different types of resources using the “Modern Resource Technology” (MRT) such that (e.g.) only the resources that are relevant to the user’s language or scale or DirectX level are downloaded for the app.
  • How packages in Creators Update can be broken apart into “Content Groups” and partitioned into those which are required for the application to start up and those which can be deferred and downloaded from the Store at a later point in order to improve the user’s experience. There are APIs to support the developer being aware of which parts of the package are present on the system, to monitor and control download priority, etc.
  • How optional packages can be authored for Creators Update such that one or more apps can optionally make use of a separate package from the Store which can install content (and (native) code) into their application.

As you might expect, there are lots of additional levels of detail here, so if you’re interested in these bits then some links below will provide some of that detail;

and there’s more generally on the App Installer Blog, and there are additional interesting pieces in that //build session around possible future developments and how Microsoft Office™ is making use of these pieces in order to be deliverable from the Windows Store.

The idea of ‘streaming installations’ seemed immediately applicable to me, but I need to spend some more time thinking about optional packages. I was struck by some of the similarities between them and app extensions (more here) and I haven’t quite figured out the boundaries there, beyond the ability of an optional package to deliver additional (native) code to an application, which extensions can’t do as far as I’m aware.

Having got my head around streaming installations, I wanted to experiment with them and that’s where the rest of this post is going.

I needed an app to play with and so I went and dug one out of the cupboard…

A Simple Pictures App

I wrote this tiny little “app” around the time of the UK “Future Decoded” show in late 2016 in order to demonstrate app extensions.

The essential idea was that I have this app which displays pictures from a group;

image

and there is one set of pictures built in (some film posters) but I have two more sets of pictures under the groupings of ‘Albums’ and ‘BoxSets’.

The original app used app extensions and so the ‘Albums’ and ‘BoxSets’ collections lived in another project providing an ‘extension’ to the content, such that when the extension was installed on the system all three sets of content were loaded and the app looked as below;

image

This was pretty easy to put together using app extensions and it’s similar to what I wrote up in this blog post about app extensions where I used extensions and App Services together to build out a similarly extensible app.

So, having this code kicking around it seemed like an obvious simple project that I could use to try out streaming installations on Creators Update.

Defining Content Groups

Firstly, I brought all 3 of my content folders into the one project (i.e. Posters, Albums, BoxSets) as below;

image

and then I set about authoring a SourceAppxContentGroupMap.xml file as covered in this MSDN article;

Create and convert a source content group map

and I learned a couple of things there. The first was to make sure that you set the right build action for that XML file;

image

and the second was to make sure that you’re running the right version of makeappx if you expect it to have the new /convertCGM option. That right version on my system would come from;

image

at the time of writing although I ultimately let Visual Studio build the content group map and only used makeappx as part of experimenting.

My content group map looked as below – I essentially just define that everything for the application is required apart from the two folders named Albums and BoxSets which are not required to start the application and so can be downloaded post-installation by the system as it sees fit;

<?xml version="1.0" encoding="utf-8"?>
<ContentGroupMap xmlns="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" xmlns:s="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" >
  <Required>
    <ContentGroup Name="Required">
      <File Name="*"/>
      <File Name="WinMetadata\*"/>
      <File Name="Properties\*"/>
      <File Name="Assets\*"/>
      <File Name="Posters\**"/>
    </ContentGroup>
  </Required>
  <Automatic>
    <ContentGroup Name="BoxSets">
      <File Name="BoxSets\**"/>
    </ContentGroup>
    <ContentGroup Name="Albums">
      <File Name="Albums\**"/>
    </ContentGroup>
  </Automatic>
</ContentGroupMap>

This file is then an input to produce the actual AppxContentGroupMap.xml file and I just used the Visual Studio menu to generate it as per the docs;

image

and after a couple of initial gremlins caused by me, that seemed to work out fine.

Writing Code to Load Content Groups

If the application is going to be installed “in pieces” then my code is going to have to adapt such that it can dynamically load up folders of pictures as they appear post-install.

Because I’d previously written the code to support a similar scenario using app extensions and because the code is very simple, it wasn’t particularly difficult to do this. I have a function which attempts to figure out whether the content groups for the Albums and BoxSets have been installed and, if so, it adds them to what the application is displaying. This snippet of code covers it;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
    }
    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      if (args.IsComplete)
      {
        await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
          async () =>
          {
            // Warning - untested at time of writing, I need to check
            // whether FullName is the right property here because 
            // I really want the *content group name*.
            await this.AddPictureSourceAsync(args.Package.Id.FullName,
              args.Package.Id.FullName);
          }
        );
      }
    }
    PackageCatalog catalog;

 

and this is making use of APIs on the PackageCatalog class that come from either the 14393 or 15063 SDK in order to check which content groups are available; if I find that my Albums/BoxSets groups are available then I have code which goes and adds all the pictures from those folders to the collections which live behind the UI.

The code is also attempting to handle the PackageInstalling event to see if I can dynamically respond to the two non-required packages being added while the application is running. Note the comment in there about me not actually having seen that code run just yet; I’ll come back to why that is in just one second as it turns out to be the wrong code.

Testing…

How to try this out?

In the //build session, there are a few options listed around how you can test/debug a streaming install without actually putting your application into the Store. One method makes use of the PackageCatalog APIs to programmatically change the installation status of the content groups, another makes use of the Windows Device Portal (although I’m unsure as to whether that one is implemented yet) and there’s an option around using the regular PowerShell Add-AppxPackage command.

Testing via PowerShell

I thought I’d try the PowerShell option first and so I made a .APPX package for my application via the Store menu in Visual Studio;

image

and then made sure that I wasn’t making an APPX bundle;

image

and then I got hold of the temporary certificate that this generates and made it trusted on my system before going to install the .APPX file via PowerShell;

image

and so the key part here is the new -RequiredContentGroupOnly parameter to the Add-AppxPackage command. With that command executed, I can see that the app only has access to the Posters collection of images from its required content group, so that all seems good;
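
For reference, the command I ran looked something like the one below (the package filename here is made up from my own build output, so substitute your own);

```powershell
# Install only the required content group from the package; the
# 'automatic' groups (Albums, BoxSets) are not staged at this point.
Add-AppxPackage -Path .\PicturesApp_1.0.0.0_x86.appx -RequiredContentGroupOnly
```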

image

I also found it interesting to go and visit the actual folder on disk where the application is installed and to see what the Albums/BoxSets folders representing the ‘automatic’ content groups look like.

The first thing to say is that those folders do exist and here’s what the content looks like at this point in the process;

image

so there are “marker files” present in the folders and so (as advised in the //build session) code would have to be careful not to confuse the presence of the folders/files with the content group’s installation status.
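
A sketch of what that advice might look like in code, checking the content group’s reported state rather than poking at the file system; the Staged check here mirrors the state I test for elsewhere in this post, although I’m not yet certain it’s the definitive “fully present” state;

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.ApplicationModel;

static class ContentGroupChecks
{
  // Returns true only when the named content group reports itself as
  // staged - the presence of the group's folder or marker files on disk
  // is not a reliable signal.
  public static async Task<bool> IsGroupStagedAsync(string groupName)
  {
    var groups = await Package.Current.GetContentGroupsAsync();

    return groups.Any(
      g => string.Equals(g.Name, groupName, StringComparison.OrdinalIgnoreCase) &&
        (g.State == PackageContentGroupState.Staged));
  }
}
```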

I’d hoped to then be able to use the add-appxpackage command again to add the other two content groups (Albums/BoxSets) while the application was running but when I tried to execute that, I saw;

image

Now, this was “very interesting” in that I was reading the section of this page titled “Sideloaded Stream-able App” and it suggested that;

With the debugger attached, you can install the automatic content groups by:

Add-AppxPackage –Path C:\myapp.appx

Which is the exact same command but without the flag (what happens is that the platform will see that the app is already installed and will only stage the files that are missing).

So I attached my debugger to the running app and ran the command again and, sure enough, I could see that the debugger hit a first-chance exception in that piece of untested code that I’d listed earlier;

image

and so, sure enough, my code was being called here as the package started to install but that code wasn’t working because it was confusing the content group name with the application’s full package name.

That didn’t surprise me too much; it had been a bit of a ‘wild guess’ that I might use the PackageCatalog.PackageInstalling event in this way and I was clearly wrong, so I went and reworked that code to make use of the far more sensible-sounding PackageContentGroupStaging event as below;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
      this.catalog.PackageContentGroupStaging += OnContentGroupStaging;
    }

    async void OnContentGroupStaging(
      PackageCatalog sender, PackageContentGroupStagingEventArgs args)
    {
      if (args.IsComplete)
      {
          await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
            async () =>
            {
              await this.AddPictureSourceAsync(
                args.ContentGroupName,
                args.ContentGroupName);
            }
          );
      }
    }

    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      // TODO: Remove this handler, don't think it's useful but leaving
      // it for the moment for debugging.
      Debugger.Break();
    }

This looked like it was far more likely to work but what I found was;

  1. The Add-AppxPackage command would still fail when I tried to add the non-required content groups to the already running app.
  2. From the debugger, I could see that the PackageInstalling event was still firing but the PackageContentGroupStaging event wasn’t. I suspect that the Add-AppxPackage command is quitting out between those 2 stages and so the first event fires and the second doesn’t.

This means that I haven’t been able to use this method just yet to test what happens when the app is running and the additional content groups are installed.

The best that I could find to do here was to install the required content group using the -RequiredContentGroupOnly parameter and then, with the application running, install the other groups using the -ForceApplicationShutdown option; sure enough, the app would go away and come back with all 3 of my content groups rather than just the required one;
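
The pair of commands involved would be something like the following (hypothetical package filename again);

```powershell
# Install just the required content group...
Add-AppxPackage -Path .\PicturesApp_1.0.0.0_x86.appx -RequiredContentGroupOnly

# ...then later, stage the remaining groups; note that this restarts
# the running application rather than streaming content into it.
Add-AppxPackage -Path .\PicturesApp_1.0.0.0_x86.appx -ForceApplicationShutdown
```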

image

and so that shows that things are working across app executions, but it doesn’t test how they work while the application is up and running, which might well be the case if the user gets the app from the Store, runs it and then the additional packages show up over the first few minutes of the user’s session with the app.

Testing via the Streaming Install Debugging App

At this point, I went back to this blog post and tried out the steps under the heading of “Using the Streaming Install Debugging App”. This involves going off to download this app from github which then uses the APIs to manipulate the installation status of the content groups within my app.

I uninstalled my app from the system and then reinstalled it by hitting F5 in Visual Studio and then I ran up the debugging app and, sure enough, it showed me the details of my app;

image

and so I can now use this UI to change the status of my two content groups BoxSets and Albums to be ‘not staged’;

image

and then run up my app alongside this one and it correctly just shows the ‘Film Posters’ content;

image

and if I dynamically now switch a content group’s state to Staged then my app updates;

image

and I can repeat that process with the Albums content group;

image

and so that all seems to be working nicely.

Wrapping Up

I really like these new sorts of capabilities coming to UWP packaging, and the APIs here seem to make it pretty easy to work with, although, clearly, you’d need to give quite a lot of early-stage thought to which pieces of your application’s content should be packaged into which content groups.

I put the code that I was playing with here onto github if you’re interested in picking up this (admittedly very simple) sample.

Using OCR to Read an IP Address in a Holographic UWP App

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Just a short post. I’ve been playing with a situation recently where I needed to get an IP address into a holographic app. There are lots of different ways that you might do this depending on how frequently the IP address might change;

  • Hard-code it into the app.
  • Put it into a config file (perhaps loaded from some other server).
  • Type it into the app’s UI, although I think it’s fair to say that typing IP addresses on HoloLens isn’t so much fun.
  • Magically download it via some sort of Bluetooth service.
  • Speak it to the application through the microphone and UWP speech recognition.
  • Put the IP address into a QR code and have the app scan it through the camera.

and I’m sure there’s many more but today I thought I’d experiment with what seemed like a fairly natural idea – when I want to give another person an IP address, I usually write it onto a piece of paper and hand it to them or pin it up on the wall.

So, why not with the device?

Because I already have some code which runs on HoloLens and scans for QR codes (see this blog post and accompanying github) it felt like it would be very little effort to change the QR code scanning that happens in that blog post into some “IP Address” OCR recognition and see how well or badly that works out.

Here’s an example of how well/badly it worked out. I think it works quite well, but it’s fair to say that I haven’t managed to get it to work with handwriting. That doesn’t surprise me, as I’ve never found the OCR engine in UWP to process handwriting and maybe it’s not meant to, so I’m not being critical of it when I say that.

Note – it’s “a bit tricky” to record an app on the HoloLens that is actively taking over the webcam so I’m having to show this with static images;

20170328_144256_HoloLens

20170328_144320_HoloLens

but it feels like a scan of an IP address like that can be done in about 1-2s once the camera has a view of it, although, naturally, it’s possible for the camera to get a partial view (such as 2.168.0.1 for the above address) and so a re-scan might be necessary in some circumstances.
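
One simple mitigation for that partial-view problem might be to only accept an address once it has been seen on a few consecutive scans; this is just a hypothetical sketch rather than something the code later in this post does;

```csharp
using System.Net;

class StableScanFilter
{
  // Only accept an address once the same value has been seen this many
  // times in a row - a partially-visible address is unlikely to repeat.
  const int requiredConsecutiveMatches = 3;

  IPAddress lastSeen;
  int matchCount;

  public bool Accept(IPAddress candidate)
  {
    if ((this.lastSeen != null) && this.lastSeen.Equals(candidate))
    {
      this.matchCount++;
    }
    else
    {
      this.lastSeen = candidate;
      this.matchCount = 1;
    }
    return (this.matchCount >= requiredConsecutiveMatches);
  }
}
```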

In terms of the code here, I simply made some minimal changes to what I already had around QR code processing: I removed the ZXing library from my project and then changed the public interface so that it looks as it does in the code below (clearly I should do something about renaming the namespace here). This call says “Hey, find me an IP address from the first camera on the system or time out after 30s”;

MediaFrameQrProcessing.Wrappers.IPAddressScanner.ScanFirstCameraForIPAddress(
        result =>
        {
          UnityEngine.WSA.Application.InvokeOnAppThread(() =>
          {
            // result here is a System.Net.IPAddress...
            this.textMesh.text = result?.ToString() ?? "not found";
          }, 
          false);
        },
        TimeSpan.FromSeconds(30));

and it gives me back an IPAddress instance. In my underlying library, I replaced the QR code frame scanner with a (basic!) IP address frame scanner;

namespace MediaFrameQrProcessing.Processors
{
  using MediaFrameQrProcessing.VideoDeviceFinders;
  using System.Runtime.InteropServices.WindowsRuntime;
  using System.Threading.Tasks;
  using Windows.Devices.Enumeration;
  using Windows.Media.Capture;
  using Windows.Media.Capture.Frames;
  using Windows.Media.Ocr;
  using System;
  using System.Text.RegularExpressions;
  using System.Net;

  public class IPAddressFrameProcessor : MediaCaptureFrameProcessor
  {
    public IPAddress Result { get; private set; }

    public IPAddressFrameProcessor(
      MediaFrameSourceFinder mediaFrameSourceFinder, 
      DeviceInformation videoDeviceInformation, 
      string mediaEncodingSubtype, 
      MediaCaptureMemoryPreference memoryPreference = MediaCaptureMemoryPreference.Cpu) 

      : base(
          mediaFrameSourceFinder, 
          videoDeviceInformation, 
          mediaEncodingSubtype, 
          memoryPreference)
    {
    }
    protected override async Task<bool> ProcessFrameAsync(MediaFrameReference frameReference)
    {
      bool done = false;

      // doc here https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.media.capture.frames.videomediaframe.aspx
      // says to dispose this softwarebitmap if you access it.
      using (var bitmap = frameReference.VideoMediaFrame.SoftwareBitmap)
      {
        try
        {
          if (this.ocrEngine == null)
          {
            // NB: TryCreateFromUserProfileLanguages can return null if no
            // OCR language is available on the device.
            this.ocrEngine = OcrEngine.TryCreateFromUserProfileLanguages();
            this.regex = new Regex(IP_ADDRESS_PATTERN);
          }
          var results = await this.ocrEngine.RecognizeAsync(bitmap);

          if (results != null)
          {
            var matchingResults = this.regex.Matches(results.Text);

            for (int i = 0; !done && (i < matchingResults.Count); i++)
            {
              IPAddress parsedAddress;

              done = IPAddress.TryParse(matchingResults[i].Value, out parsedAddress);

              if (done)
              {
                this.Result = parsedAddress;
              }
            }
          }
        }
        catch
        {
          // Deliberately swallow failures on an individual frame and move
          // on to the next one.
        }
      }
      return (done);
    }
    Regex regex;
    OcrEngine ocrEngine;

    // Taken from S.O. http://stackoverflow.com/questions/106179/regular-expression-to-match-dns-hostname-or-ip-address
    const string IP_ADDRESS_PATTERN =
      @"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}";
  }
}

and that all seems to work reasonably well.

The code for this (the library and the 2D XAML test app) is here if anyone wants to play with it.

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

This post is around some code that I wrote to enable a specific, simple demo scenario where I would have two Windows 10 devices (including HoloLens devices) find and talk to each other on a local WiFi network with fairly minimal infrastructure/setup.

There are many other ways of achieving what I did here, so this is just my own implementation of something fairly common to suit my scenario rather than some radical new invention. It’s also far from production-ready because it doesn’t deal properly with the suspend/resume cycle of UWP and doesn’t attempt to do the right work to keep sockets running when the application is suspended (as per these docs).

I’m really just putting it together for a demo where I want to connect two HoloLens devices so that I can experiment with ‘sharing holograms’…

Sharing Holograms – Backdrop

One of the really interesting/amazing things that you can do in a HoloLens application is to use the built-in connectivity options of the device (including Bluetooth and, more specifically, WiFi) to connect multiple devices such that a number of users can share an experience across devices being used in the same space.

There are many examples of this out there on the web, including the keynote video below which shows a couple working on their kitchen design;

Sharing Holograms – HoloToolkit Support

This notion of ‘shared holograms’ is one that is common enough to have been built into the HoloToolkit with its sharing support which also features in this Holographic Academy lesson;

The toolkit’s method for sharing holograms is to set up a network configuration with a PC acting as a server in order to distribute holographic data between participating HoloLens devices and keep them in sync. This is a great solution and one that you should look at if you’re interested in sharing holograms in this or a similar way.

I’ve used this solution before and it works very well but I wanted to go with my own solution here for a couple of reasons;

  1. I wanted to make sure I understood the implementation by (essentially) writing small pieces of it myself from the ground up.
  2. I wanted to simplify the solution as I was only thinking about two-way comms.

A Simple Socket Library for Two-Way Comms

I had a simpler scenario in mind where I didn’t want to have to rely on a PC to act as a server and I only needed to connect 2 devices rather than many. I also didn’t want to have to enter host names, IP addresses or port numbers into some UI in order to get connectivity up and running.

Towards that end, I built out a simple library that I plan to write a little about in the rest of this post and then I’ll follow up with a subsequent post where that library gets used in some kind of basic demo that shares holograms across devices.

In order to do that, I first looked to the PeerFinder API in the Universal Windows Platform which I thought would provide the perfect solution in that those APIs are specifically about two devices running the same app on the same network discovering each other for further communication.

However, as I found in this blog post, those APIs don’t look like they function on HoloLens.

I might have also looked at something like UWP App Services with Project Rome but that didn’t suit my scenario because that’s specifically about communicating between apps on devices that belong to the same user, whereas I wanted to communicate between instances of the same app on (potentially) any user’s device.

I wrote some code to try and enable the scenario that I wanted to hit with two devices operating in a peer manner such that;

  1. The user of a device selects whether they want to create/connect to a connection.
    1. This choice can be made before or after the companion device comes along.
  2. The creating device creates a TCP socket and advertises its details over Bluetooth LE (like a beacon).
  3. The connecting device looks for advertisements over Bluetooth LE.
  4. Once the devices find each other, they connect up the socket and stop advertising/listening over Bluetooth LE.
  5. Messages are interchanged over the TCP socket. At the time of writing, these can be;
    1. Strings.
    2. Byte arrays.
    3. Serialized objects (via JSON.NET).
  6. As/when the socket drops, the process can be repeated from step 1 again.
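
The messages in step 5 need some kind of framing over the TCP stream so the receiver knows where one message ends and the next begins. My library’s actual wire format isn’t something I’ll detail here, but a minimal sketch of the sort of length-prefixed framing involved (all of these names are made up for illustration rather than being the library’s API) would be;

```csharp
using System;
using System.IO;

static class MessageFraming
{
  // Frame layout: 1 byte message type, 4 byte payload length, payload.
  public static void WriteFrame(Stream stream, byte messageType, byte[] payload)
  {
    stream.WriteByte(messageType);
    stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
    stream.Write(payload, 0, payload.Length);
  }
  public static Tuple<byte, byte[]> ReadFrame(Stream stream)
  {
    var messageType = (byte)stream.ReadByte();

    var lengthBits = new byte[4];
    ReadExactly(stream, lengthBits);

    var payload = new byte[BitConverter.ToInt32(lengthBits, 0)];
    ReadExactly(stream, payload);

    return Tuple.Create(messageType, payload);
  }
  static void ReadExactly(Stream stream, byte[] buffer)
  {
    // A TCP read can return fewer bytes than asked for, so keep reading.
    int offset = 0;
    while (offset < buffer.Length)
    {
      int read = stream.Read(buffer, offset, buffer.Length - offset);
      if (read <= 0) { throw new EndOfStreamException(); }
      offset += read;
    }
  }
}
```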

Note that this is just a set of choices that I made and that I could have gone in other directions. For example;

  1. Rather than use Bluetooth LE to advertise a TCP socket, I could have used a UDP multicast group.
  2. Rather than use a TCP socket, I could have used UDP sockets but I felt that my comms were best suited to a connected, streamed socket between 2 parties here.

Implementation – A library for both XAML UI and Unity UI

The implementation that I spent an hour or two on sits on UWP APIs. I wanted to ensure that it would work on any UWP device, including HoloLens, and also that I could make use of it both from regular UWP code as I might write in a XAML-based 2D UI and from 3D Unity applications for HoloLens, as my primary purpose here is to share holograms from one instance of my app on one device to another instance on another device.

In writing this code, I made some choices around sticking with the coding pattern that I would usually use for UWP applications, which ends up relying quite heavily on the async/await capabilities of C# that marry well with the async APIs you see across the UWP, although I debated quite a lot about whether I should go down this route because it isn’t perhaps the standard coding approach that you might take inside of the Unity environment.

Because I’m only targeting UWP, I feel that this is a reasonable trade-off although it does lead to a bit of;

#if UNITY_UWP

inside of my code to keep the Unity editor happy. If you’ve not bumped up against these limitations then the essence is that the Unity editor can’t cope with a script that looks like this;

using System;
using System.Threading.Tasks;
using UnityEngine;

public class Placeholder : MonoBehaviour
{
  private async void Start()
  {
    await Task.Delay(TimeSpan.FromSeconds(1));
  }
}

because the C# language version in Unity is 4 and so async/await etc. aren’t available there, and the version of .NET doesn’t have Task.

However, because of the way that the build process works with Unity, if I accept that I’m only building for UWP and none of Unity’s other platforms I can write conditional code so that the Unity editor doesn’t know what I’m up to and I can still execute pretty much what I want at runtime. For example (and, yes, I know this example is ugly);

using System;
#if UNITY_UWP
using System.Threading.Tasks;
#endif
using UnityEngine;

public class Placeholder : MonoBehaviour
{
  void Start()
  {
    this.InternalStartAsync();
  }
#if UNITY_UWP
  async
#endif
  void InternalStartAsync()
  {
#if UNITY_UWP

    // Thread before?

    await Task.Delay(TimeSpan.FromSeconds(1));
 
    // Thread after?
#endif
  }
}

That should satisfy both the editor and the build, and do what I want at runtime. There’s more written about this elsewhere on the web but to my mind it also comes with two other implications;

  1. The implication of having async methods that no caller ever awaits – as in the code above where the Start() method does not await the InternalStartAsync() method.
  2. The implication of the (possible) thread switching between the 2 comments marked ‘Thread before?’ and ‘Thread after?’.
    1. This is unlike a UWP XAML scenario where, by default, using await on a UI thread would cause the continuation of the async function to be dispatched back onto the UI thread because the framework sets up a SynchronizationContext to make that happen.
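
To illustrate that second implication: in a Unity scene there’s no XAML-style SynchronizationContext to bring the continuation back, so any code that touches Unity objects after an await needs to marshal itself back explicitly, e.g. via UnityEngine.WSA.Application.InvokeOnAppThread. A sketch;

```csharp
using System;
#if UNITY_UWP
using System.Threading.Tasks;
#endif
using UnityEngine;

public class ThreadHop : MonoBehaviour
{
#if UNITY_UWP
  async void Start()
  {
    await Task.Delay(TimeSpan.FromSeconds(1));

    // We may now be on a thread-pool thread rather than the app thread,
    // so hop back before touching any Unity objects.
    UnityEngine.WSA.Application.InvokeOnAppThread(
      () => this.gameObject.SetActive(false),
      false);
  }
#endif
}
```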

With some of that in mind, I set off building something fairly simple that I called the AutoConnectMessagePipe.

Connecting 2 Machines with the AutoConnectMessagePipe

I wanted the consumption of this API to be simple and I managed to boil down my API surface to offer an experience something like the code below in terms of getting a pipe connected between two devices;

    async Task<bool> ConnectPipeAsync()
    {
      // True if we want to advertise the pipe, false otherwise
      AutoConnectMessagePipe pipe = new AutoConnectMessagePipe(true);

      // We wait for the pipe to be connected providing a timeout if
      // we like.
      await pipe.WaitForConnectionAsync(TimeSpan.FromMilliseconds(-1));

      return (pipe.IsConnected);
    }

In order to function, the app is going to need the Bluetooth UWP capability and either/both of the ‘Private networks’ or ‘Internet Client/Server’ capabilities to be switched on.
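
In Package.appxmanifest terms, that’s something like the fragment below (treat this as a sketch and check the capability names in the manifest designer rather than taking my word for them);

```xml
<Capabilities>
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
  <DeviceCapability Name="bluetooth" />
</Capabilities>
```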

Once the pipe is connected, the API makes it fairly simple to send a string or a byte array;

      await pipe.SendStringAsync("Hello");
      await pipe.SendBytesAsync(new byte[] { 1, 2, 3 });

and I made a somewhat arbitrary decision around sending strongly typed objects to the other end of the pipe in that I defined a simple (intended to be abstract) base class;

  public class MessageBase
  {
  }

and so sending a message would involve deriving it from this base class;

  public class MyMessage : MessageBase
  {
    public string MyProperty { get; set; }
  }

and sending an instance over the network;

      await pipe.SendObjectAsync(
        new MyMessage()
        {
          MyProperty = "Foo"
        });

Note that it’s absolutely my intention here that the same process is on both ends of the wire so I don’t have to worry about the data types in question not being available for de-serialization; this library is really for an app to talk to another instance of itself on another machine.

Note also that I really would have liked to use the BinaryFormatter here because it doesn’t, as far as I remember, need up-front knowledge of the types that it’s dealing with, but it doesn’t exist in the UWP. I glanced at one or two of the available serialization libraries out there and I also tried to work with the DataContractSerializer for a while, but I don’t think that works on HoloLens right now and so I fell back to using JSON.NET (not that this is a bad thing).
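
For what it’s worth, JSON.NET can be asked to embed type information so that a MessageBase-derived instance round-trips back to its concrete type; a sketch of the sort of thing involved (my library’s actual serialization code may well differ);

```csharp
using Newtonsoft.Json;

public class MessageBase { }

public class MyMessage : MessageBase
{
  public string MyProperty { get; set; }
}

static class MessageSerializer
{
  // TypeNameHandling.All embeds the CLR type name in the JSON payload
  // so that deserialization can reconstruct the derived type.
  static readonly JsonSerializerSettings settings =
    new JsonSerializerSettings() { TypeNameHandling = TypeNameHandling.All };

  public static string ToJson(MessageBase message) =>
    JsonConvert.SerializeObject(message, settings);

  public static MessageBase FromJson(string json) =>
    (MessageBase)JsonConvert.DeserializeObject(json, settings);
}
```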

In terms of reading messages, the pipe class offers the opportunity to read a single message or to sit in an async loop dispatching messages until the socket connection goes away and the code to do that looks something like;

      // We will not 'return' from this until the socket is closed, remembering what
      // 'return' means in an async setting!
      await pipe.ReadAndDispatchMessageLoopAsync(
        (messageType, body) =>
        {
          switch (messageType)
          {
            case MessageType.Buffer:
              byte[] bits = (byte[])body;
              break;
            case MessageType.String:
              string text = (string)body;
              break;
            case MessageType.SerializedObject:
              MyMessage typedObject = body as MyMessage;
              break;
            default:
              break;
          }
        });

and that’s pretty much the interface into my library.

I wrote a few classes that underpin this which probably need a bit of refinement and a lot more testing but they connect together in the manner below;
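As a very rough sketch of the shape of those pieces (the names below are illustrative, not the actual class names from the library), one piece advertises over Bluetooth LE so that a peer can be discovered without knowing an IP address up front, and another wraps a StreamSocket which carries the actual message traffic once an address has been exchanged;

```csharp
// Illustrative only - these are not the real class names from the library.
using System;
using System.Threading.Tasks;
using Windows.Devices.Bluetooth.Advertisement;
using Windows.Networking;
using Windows.Networking.Sockets;

// One side publishes a Bluetooth LE advertisement so that the other
// side can discover it without knowing an IP address up front.
class Advertiser
{
  BluetoothLEAdvertisementPublisher publisher;

  public void Start()
  {
    this.publisher = new BluetoothLEAdvertisementPublisher();
    // The payload (e.g. the advertiser's address) would go into the
    // advertisement's manufacturer data sections here.
    this.publisher.Start();
  }
}

// Once an address has been exchanged, a plain StreamSocket carries
// the message traffic between the two ends of the pipe.
class MessageSocket
{
  StreamSocket socket;

  public async Task ConnectAsync(string host, string port)
  {
    this.socket = new StreamSocket();
    await this.socket.ConnectAsync(new HostName(host), port);
  }
}
```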

image

The source for this library is on GitHub here.

With this in place, how would this look and operate inside of a 2D UWP app?

Usage – Inside 2D UWP App

I put together a simple, blank UWP app and added a user control with a UI which easily allows me to select the connect/advertise option;

image

and once the application is connected, it displays 3 coloured buttons;

image

and pressing the button of a particular colour changes the background colour of the app that’s on the other end of the connection;

image

That’s a fairly simple piece of XAML defining the UI;

<UserControl
    x:Class="XamlTestApp.MainControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:XamlTestApp"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    d:DesignHeight="300"
    d:DesignWidth="400">
    <UserControl.Resources>
        <local:InverseVisibilityConverter
            x:Name="negate" />
    </UserControl.Resources>
    <Grid>
        <Grid.Resources>
            <Style
                TargetType="Rectangle">
                <Setter
                    Property="Width"
                    Value="24" />
                <Setter
                    Property="Height"
                    Value="24" />
                <Setter
                    Property="Stroke"
                    Value="Black" />
            </Style>
            <Style
                TargetType="Button">
                <Setter
                    Property="Margin"
                    Value="2" />
                <Setter
                    Property="BorderBrush"
                    Value="Black" />
                <Setter
                    Property="BorderThickness"
                    Value="1" />
                <Setter
                    Property="HorizontalAlignment"
                    Value="Center" />
            </Style>
        </Grid.Resources>
        <Grid.Background>
            <SolidColorBrush
                Color="{x:Bind BackgroundColour,Mode=OneWay,FallbackValue=White}" />
        </Grid.Background>
        <Viewbox
            Margin="96">
            <StackPanel
                HorizontalAlignment="Center"
                VerticalAlignment="Center">
                <StackPanel
                    HorizontalAlignment="Center"
                    VerticalAlignment="Center"
                    Orientation="Horizontal"
                    Visibility="{x:Bind Path=IsConnected, Mode=OneWay,Converter ={StaticResource ResourceKey=negate}}">
                    <StackPanel Margin="8">
                        <Button
                            Content=""
                            FontFamily="Segoe MDL2 Assets"
                            Click="{x:Bind OnAdvertise}"/>
                        <TextBlock
                            Text="advertise"
                            TextAlignment="Center" />
                    </StackPanel>
                    <StackPanel Margin="8">
                        <Button
                            Content=""
                            FontFamily="Segoe MDL2 Assets"
                            Click="{x:Bind OnConnect}" />
                        <TextBlock
                            Text="connect"
                            TextAlignment="Center" />
                    </StackPanel>
                </StackPanel>
                <StackPanel
                    Orientation="Horizontal"
                    Visibility="{x:Bind IsConnected,Mode=OneWay}"
                    Margin="0,8,0,0"
                    HorizontalAlignment="Center">
                    <Button
                        Template="{x:Null}"
                        Click="{x:Bind OnRed}">
                        <Rectangle
                            Fill="Red" />
                    </Button>
                    <Button
                        Template="{x:Null}"
                        Click="{x:Bind OnGreen}">
                        <Rectangle
                            Fill="Green" />
                    </Button>
                    <Button
                        Template="{x:Null}"
                        Click="{x:Bind OnBlue}">
                        <Rectangle
                            Fill="Blue" />
                    </Button>
                </StackPanel>
            </StackPanel>
        </Viewbox>
    </Grid>
</UserControl>

accompanied by (mostly) a single code-behind file which is pretty simple in its implementation;

//#define USE_OBJECTS
namespace XamlTestApp
{
  using SimpleUwpTwoWayComms;
  using System;
  using System.ComponentModel;
  using System.Runtime.CompilerServices;
  using System.Threading.Tasks;
  using Windows.UI;
  using Windows.UI.Xaml.Controls;

#if USE_OBJECTS
  public class ColourMessage : MessageBase
  {
    public byte Red { get; set; }
    public byte Green { get; set; }
    public byte Blue { get; set; }
  }
#endif

  public sealed partial class MainControl : UserControl, INotifyPropertyChanged
  {
    public event PropertyChangedEventHandler PropertyChanged;

    public MainControl()
    {
      this.InitializeComponent();
      this.Disconnect();
    }
    public Color BackgroundColour
    {
      get
      {
        return (this.backgroundColour);
      }
      set
      {
        if (this.backgroundColour != value)
        {
          this.backgroundColour = value;
          this.FirePropertyChanged();
        }
      }
    }
    Color backgroundColour;

    public bool IsConnected
    {
      get
      {
        return (this.isConnected);
      }
      set
      {
        if (this.isConnected != value)
        {
          this.isConnected = value;
          this.FirePropertyChanged();
        }
      }
    }
    bool isConnected;

    public void OnAdvertise()
    {
      this.OnInitialise();
    }
    public void OnConnect()
    {
      this.OnInitialise(false);
    }
    public async void OnInitialise(bool advertise = true)
    {
      this.pipe = new AutoConnectMessagePipe(advertise);

      await this.pipe.WaitForConnectionAsync(TimeSpan.FromMilliseconds(-1));

      this.IsConnected = this.pipe.IsConnected;

      if (this.IsConnected)
      {
        await this.pipe.ReadAndDispatchMessageLoopAsync(this.MessageHandler);
      }
    }
    void Disconnect()
    {
      this.IsConnected = false;
      this.BackgroundColour = Colors.White;
    }
    public async void OnRed()
    {
      await this.OnColourAsync(Colors.Red);
    }
    public async void OnGreen()
    {
      await this.OnColourAsync(Colors.Green);
    }
    public async void OnBlue()
    {
      await this.OnColourAsync(Colors.Blue);
    }
    async Task OnColourAsync(Color colour)
    {
#if USE_OBJECTS
      ColourMessage message = new ColourMessage()
      {
        Red = colour.R,
        Green = colour.G,
        Blue = colour.B
      };
      await this.pipe.SendObjectAsync(message);
#else
      await this.pipe.SendBytesAsync(
        new byte[] { colour.R, colour.G, colour.B });
#endif
    }
    void MessageHandler(MessageType messageType, object messageBody)
    {
#if USE_OBJECTS
      if (messageType == MessageType.SerializedObject)
      {
        var msg = messageBody as ColourMessage;

        if (msg != null)
        {
          this.BackgroundColour = Color.FromArgb(0xFF, msg.Red, msg.Green, msg.Blue);
        }
      }
#else
      // We just handle byte arrays here.
      if (messageType == MessageType.Buffer)
      {
        var bits = (byte[])messageBody;
        this.BackgroundColour = Color.FromArgb(0xFF, bits[0], bits[1], bits[2]);
      }
#endif
    }
    void FirePropertyChanged([CallerMemberName] string propertyName = null)
    {
      this.PropertyChanged?.Invoke(this,
        new PropertyChangedEventArgs(propertyName));
    }
    AutoConnectMessagePipe pipe;
  }
}

and I checked that project into the GitHub repository as well; this 2D app works fine on HoloLens.

That code can be compiled in two different ways (depending on the USE_OBJECTS symbol) so that it either sends byte arrays over the network or a small object serialized with JSON.NET.

Usage – Inside 3D HoloLens Unity App

I made a blank 3D Unity project and set it up for HoloLens development pretty much as I do at the start of this video, configuring both the project and the scene.

I also made sure that I had the UWP capabilities for Bluetooth and Private networks switched on, I added my new simple comms library as an assembly to my project’s assets as I wrote about in this post, and I took the same approach to Newtonsoft.Json;

image

With that set up, I made a basic UI as below;

image

and essentially what I’ve got here are;

  • An empty Placeholder GameObject.
  • A Basic Cursor (from the HoloToolkit – more or less as I do in the ‘Adding Gaze’ section of this post).
  • A Canvas holding two panels
    • One panel has advertise/connect buttons.
    • One panel has red/green/blue buttons.
  • A Cube that I can use to display colours.

and then I’ve added some scripts to the Placeholder here – 4 from the HoloToolkit to give me a tag-along behaviour and the next 3 scripts are dependencies for the Basic Cursor;

image

The last script listed above is my own Placeholder.cs script, which is the interesting piece here. You might have noticed from the screenshot above that I have put public properties onto that script to make the two panels and the cube accessible to the code. Here’s that script;

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
#if UNITY_UWP && !UNITY_EDITOR
using SimpleUwpTwoWayComms;
using System.Threading.Tasks;
#endif

public class Placeholder : MonoBehaviour
{
  public GameObject panelConnection;
  public GameObject panelColours;
  public GameObject cube;

  public void OnAdvertise()
  {
#if UNITY_UWP && !UNITY_EDITOR
    this.OnInitialiseAsync();
#endif
  }
  public void OnConnect()
  {
#if UNITY_UWP && !UNITY_EDITOR
    this.OnInitialiseAsync(false);
#endif
  }
  public void OnRed()
  {
    this.OnColour(Color.red);
  }
  public void OnGreen()
  {
    this.OnColour(Color.green);
  }
  public void OnBlue()
  {
    this.OnColour(Color.blue);
  }
  public void OnColour(Color colour)
  {
    // Convert 0 to 1 values into bytes so that we can be compatible with the 2D XAML
    // app.
    var message = new byte[]
    {
      (byte)(colour.r * 255.0f),
      (byte)(colour.g * 255.0f),
      (byte)(colour.b * 255.0f)
    };

#if UNITY_UWP && !UNITY_EDITOR
    this.OnColourAsync(message);
#endif
  }
  void Dispatch(Action action)
  {
    UnityEngine.WSA.Application.InvokeOnAppThread(() =>
    {
      action();
    },
    false);
  }
#if UNITY_UWP && !UNITY_EDITOR
  async Task OnColourAsync(byte[] bits)
  {
    await this.pipe.SendBytesAsync(bits);
  }

  async Task OnInitialiseAsync(bool advertise = true)
  {
    if (this.pipe == null)
    {
      this.pipe = new AutoConnectMessagePipe(advertise);
    }

    await this.pipe.WaitForConnectionAsync(TimeSpan.FromMilliseconds(-1));

    if (pipe.IsConnected)
    {
      this.TogglePanels(false);
      await this.pipe.ReadAndDispatchMessageLoopAsync(this.MessageHandler);
      this.TogglePanels(true);
    }
  }
  void TogglePanels(bool connectionPanel)
  {
    this.Dispatch(
      () =>
      {
        this.panelConnection.SetActive(connectionPanel);
        this.panelColours.SetActive(!connectionPanel);
      }
    );
  }
  void MessageHandler(MessageType messageType, object messageBody)
  {
    // We just handle byte arrays here.
    if (messageType == MessageType.Buffer)
    {
      var bits = (byte[])messageBody;

      if (bits != null)
      {
        this.Dispatch(() =>
          {
            this.cube.GetComponent<Renderer>().material.color =
              new Color(
                (float)(bits[0]) / 255.0f,
                (float)(bits[1]) / 255.0f,
                (float)(bits[2]) / 255.0f);
          }
        );
      }
    }
  }
  AutoConnectMessagePipe pipe;
#endif
}

and I’ve committed that project to the github repository as well.

Wrapping Up

In as far as it goes, this code all seems to work reasonably well. Here’s a quick test video of me using it to communicate from the 2D XAML app on my phone to the 3D Unity app on my HoloLens, but the 2D app also works on HoloLens as it does on PC and (presumably) on other devices that I haven’t yet tried.

I’ll come back with a follow-on post which is more about using this library to share holograms from one device to another, but this post is already way too long so I’ll stop here 🙂