Windows 10 Creators Update, UWP Apps–An Experiment with Streaming Installations

I’ve been slowly trying to catch up with what happened at //build via the videos on Channel9, focusing mainly on the topics around Windows 10 UWP and Mixed Reality, with a sprinkling of what’s going on in .NET, C# and some of the pieces around Cognitive Services, identity and so on.

Given that //build was a 3-day conference, it generated a video wall’s worth of content which takes a very long time to catch up with, so I expect I’ll still be doing some of this catching up over the coming weeks and months, but I’m slowly making progress.

One of the many sessions that caught my eye was this one on changes to the packaging of UWP apps;

image

which talks about some of the changes that have been made in the Creators Update around breaking up UWP packages into pieces such that they can be installed more intelligently, dynamically and flexibly than perhaps they can today.

It’s well worth watching the session but if I had to summarise it I’d say that it covers;

  • How packages have always been able to be broken into pieces containing different types of resources using the “Modern Resource Technology” (MRT) such that (e.g.) only the resources that are relevant to the user’s language or scale or DirectX level are downloaded for the app.
  • How packages in Creators Update can be broken apart into “Content Groups” and partitioned into those which are required for the application to start up and those which can be deferred and downloaded from the Store at a later point in order to improve the user’s experience. There are APIs to support the developer being aware of which parts of the package are present on the system, to monitor and control download priority, etc. (there’s a small sketch of that just after this list).
  • How optional packages can be authored for Creators Update such that one or more apps can optionally make use of a separate package from the Store which can install content (and (native) code) into their application.
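
As a rough illustration of that second bullet, the sketch below (my own code, not code from the session) checks whether a named content group has already been staged and, if it hasn’t, asks the system to stage it and move it to the head of the download queue. It assumes a content group named “Albums” and, as far as I can tell from the 15063 SDK, Package.StageContentGroupsAsync is the API for nudging that priority;

    async Task EnsureAlbumsGroupDownloadedAsync()
    {
      // My own sketch rather than code from the session - assumes a content
      // group named "Albums" in the content group map and needs System.Linq
      // and Windows.ApplicationModel.
      var groups = await Package.Current.GetContentGroupsAsync();

      var albums = groups.FirstOrDefault(g => g.Name == "Albums");

      if ((albums != null) &&
        (albums.State != PackageContentGroupState.Staged))
      {
        // Ask the system to stage this group now - the second parameter (as
        // I understand it) moves the group to the head of the download queue.
        await Package.Current.StageContentGroupsAsync(new[] { "Albums" }, true);
      }
    }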

As you might expect, there are lots of additional levels of detail here so, if you’re interested in these bits, then some links below will provide some of that detail;

and there’s more generally on the App Installer Blog, plus additional interesting pieces in that //build session around possible future developments and how Microsoft Office™ is making use of these pieces in order to be deliverable from the Windows Store.

The idea of ‘streaming installations’ seemed immediately applicable to me, but I need to spend some more time thinking about optional packages because I was struck by some of the similarities between them and app extensions (more here) and I haven’t quite figured out where the boundaries lie, beyond the ability of an optional package to deliver additional (native) code to an application, which extensions can’t do as far as I’m aware.

Having got my head around streaming installations, I wanted to experiment with them and that’s where the rest of this post is going.

I needed an app to play with and so I went and dug one out of the cupboard…

A Simple Pictures App

I wrote this tiny little “app” around the time of the UK “Future Decoded” show in late 2016 in order to demonstrate app extensions.

The essential idea was that I have this app which displays pictures from a group;

image

and there is one set of pictures built in (some film posters) but I have two more sets of pictures under the groupings of ‘Albums’ and ‘BoxSets’.

The original app used app extensions, so the ‘Albums’ and ‘BoxSets’ collections lived in another project providing an ‘extension’ to the content such that, when the extension was installed on the system, all three sets of content were loaded and the app looked as below;

image

This was pretty easy to put together using app extensions and it’s similar to what I wrote up in this blog post about app extensions where I used extensions and App Services together to build out a similarly extensible app.

So, having this code kicking around, it seemed like an obvious, simple project that I could use to try out streaming installations on Creators Update.

Defining Content Groups

Firstly, I brought all 3 of my content folders into the one project (i.e. Posters, Albums, BoxSets) as below;

image

and then I set about authoring a SourceAppxContentGroupMap.xml file as covered in this MSDN article;

Create and convert a source content group map

and I learned a couple of things there. The first was to make sure that you set the right build action for that XML file;

image

and the second was to make sure that you’re running the right version of makeappx if you expect it to have the new /convertCGM option. The right version on my system would come from;

image

at the time of writing, although I ultimately let Visual Studio build the content group map and only used makeappx as part of experimenting.

My content group map looked as below. I essentially just defined that everything in the application is required apart from the two folders named Albums and BoxSets, which are not needed to start the application and so can be downloaded post-installation by the system as it sees fit;

<?xml version="1.0" encoding="utf-8"?>
<ContentGroupMap xmlns="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" xmlns:s="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" >
  <Required>
    <ContentGroup Name="Required">
      <File Name="*"/>
      <File Name="WinMetadata\*"/>
      <File Name="Properties\*"/>
      <File Name="Assets\*"/>
      <File Name="Posters\**"/>
    </ContentGroup>
  </Required>
  <Automatic>
    <ContentGroup Name="BoxSets">
      <File Name="BoxSets\**"/>
    </ContentGroup>
    <ContentGroup Name="Albums">
      <File Name="Albums\**"/>
    </ContentGroup>
  </Automatic>
</ContentGroupMap>

This file is then an input to produce the actual AppxContentGroupMap.xml file and I just used the Visual Studio menu to generate it as per the docs;

image

and after a couple of initial gremlins caused by me, that seemed to work out fine.

Writing Code to Load Content Groups

If the application is going to be installed “in pieces” then my code is going to have to adapt such that it can dynamically load up folders of pictures as they appear post-install.

Because I’d previously written the code to support a similar scenario using app extensions and because the code is very simple, it wasn’t particularly difficult to do this. I have a function which attempts to figure out whether the content groups for Albums and BoxSets have been installed and, if so, adds them to what the application is displaying. This snippet of code covers it;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
    }
    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      if (args.IsComplete)
      {
        await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
          async () =>
          {
            // Warning - untested at time of writing, I need to check
            // whether FullName is the right property here because 
            // I really want the *content group name*.
            await this.AddPictureSourceAsync(args.Package.Id.FullName,
              args.Package.Id.FullName);
          }
        );
      }
    }
    PackageCatalog catalog;


and this is making use of APIs from the 14393 and 15063 SDKs on the Package and PackageCatalog classes in order to check which content groups are available. If I find that my Albums/BoxSets groups are available then I have code which goes and adds all the pictures from those folders to the collections which live behind the UI.
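
For completeness, AddPictureSourceAsync isn’t shown in the listing above but, in my code, it’s a small helper which loads the image files from a named folder under the package’s install location and adds them to the collection that the UI is bound to. The sketch below gives a rough idea of the sort of thing it does (simplified from the actual sample, and the AddPictureToGroup call is just a stand-in for however the pictures end up in that collection);

    async Task AddPictureSourceAsync(string displayName, string folderName)
    {
      // Rough sketch - reads the image files from a folder within the
      // package's install location and hands them over to the collection
      // which sits behind the UI (needs System.Linq and Windows.Storage).
      var installedLocation = Package.Current.InstalledLocation;

      var folder = await installedLocation.GetFolderAsync(folderName);

      var files = await folder.GetFilesAsync();

      // Ignore anything that isn't a picture (e.g. the 'marker' files which
      // show up for content groups that haven't been staged yet).
      foreach (var file in files.Where(
        f => (f.FileType == ".jpg") || (f.FileType == ".png")))
      {
        this.AddPictureToGroup(displayName, file);
      }
    }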

The code is also attempting to handle the PackageInstalling event to see if I can dynamically respond to the two non-required content groups being added while the application is running. Note the comment in there about me not actually having seen that code run just yet; I’ll come back to why that is in just one second, because it turns out to be the wrong code.

Testing…

How to try this out?

In the //build session, there are a few options listed around how you can test/debug a streaming install without actually putting your application into the Store. One method makes use of the PackageCatalog APIs to programmatically change the installation status of the content groups, another makes use of the Windows Device Portal (although I’m unsure as to whether this one is implemented yet) and there’s an option around using the regular PowerShell Add-AppxPackage command.

Testing via PowerShell

I thought I’d try the PowerShell option first and so I made a .APPX package for my application via the Store menu in Visual Studio;

image

and then made sure that I wasn’t making an APPX bundle;

image

and then I got hold of the temporary certificate that this generates and made it trusted on my system before going to install the .APPX file via PowerShell;

image

and so the key part here is the new -RequiredContentGroupOnly parameter to the Add-AppxPackage command. With that command executed, I can see that the app only has access to the Posters collection of images from its required content group and so that all seems good;

image

I also found it interesting to go and visit the actual folder on disk where the application is installed and to see what the Albums/BoxSets folders representing the ‘automatic’ content groups look like.

The first thing to say is that those folders do exist and here’s what the content looks like  at this point in the process;

image

so there are “marker files” present in the folders and (as advised in the //build session) code would have to be careful not to confuse the presence of those folders/files with the content group’s installation status.
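
In other words, a check for whether a group like ‘Albums’ is actually usable needs to go via the content group’s state rather than via the file system. Something along the lines of this little sketch (my own code, using the same APIs as the earlier listing plus System.Linq) would do it;

    async Task<bool> IsContentGroupStagedAsync(string groupName)
    {
      // The folder and its marker files can be present on disk even though
      // the group's content hasn't been downloaded yet, so ask the package
      // about the group's state rather than asking the file system.
      var groups = await Package.Current.GetContentGroupsAsync();

      var group = groups.FirstOrDefault(g => g.Name == groupName);

      return ((group != null) &&
        (group.State == PackageContentGroupState.Staged));
    }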

I’d then hoped to be able to use the Add-AppxPackage command again to add the other two content groups (Albums/BoxSets) while the application was running but, when I tried to execute that, I saw;

image

Now, this was “very interesting” in that I was reading the section of this page titled “Sideloaded Stream-able App” and it suggested that;

With the debugger attached, you can install the automatic content groups by:

Add-AppxPackage -Path C:\myapp.appx

Which is the exact same command but without the flag (what happens is that the platform will see that the app is already installed and will only stage the files that are missing).

So I attached my debugger to the running app and ran the command again and, sure enough, I could see that the debugger hit a first-chance exception in that piece of untested code that I’d listed earlier;

image

and so, sure enough, my code was being called here as the package started to install but that code wasn’t working because it was confusing the content group name with the application’s full package name.

That didn’t surprise me too much; it had been a bit of a ‘wild guess’ that I might use the PackageCatalog.PackageInstalling event in this way and I was clearly wrong, so I went and reworked the code to make use of the far more sensible-sounding PackageContentGroupStaging event as below;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
      this.catalog.PackageContentGroupStaging += OnContentGroupStaging;
    }

    async void OnContentGroupStaging(
      PackageCatalog sender, PackageContentGroupStagingEventArgs args)
    {
      if (args.IsComplete)
      {
          await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
            async () =>
            {
              await this.AddPictureSourceAsync(
                args.ContentGroupName,
                args.ContentGroupName);
            }
          );
      }
    }

    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      // TODO: Remove this handler, don't think it's useful but leaving
      // it for the moment for debugging.
      Debugger.Break();
    }

This looked like it was far more likely to work but what I found was;

  1. The Add-AppxPackage command would still fail when I tried to add the non-required content groups to the already running app.
  2. From the debugger, I could see that the PackageInstalling event was still firing but the PackageContentGroupStaging event wasn’t. I suspect that the Add-AppxPackage command is quitting out between those 2 stages and so the first event fires and the second doesn’t.

This means that I haven’t been able to use this method just yet to test what happens when the app is running and the additional content groups are installed.

The best that I could find to do here was to install the required content group using -RequiredContentGroupOnly and then, with the application running, install the other groups using the -ForceApplicationShutdown option. Sure enough, the app would go away and come back with all 3 of my content groups rather than just the required one;

image

and so that shows that things are working across app executions, but it doesn’t test how they work while the application is up and running, which might well be the case if the user gets the app from the Store, runs it and the additional packages then show up over the first few minutes of the user’s session with the app.

Testing via the Streaming Install Debugging App

At this point, I went back to this blog post and tried out the steps under the heading of “Using the Streaming Install Debugging App”. This involves going off to download this app from github which then uses the APIs to manipulate the installation status of the content groups within my app.

I uninstalled my app from the system and then reinstalled it by hitting F5 in Visual Studio and then I ran up the debugging app and, sure enough, it showed me the details of my app;

image

and so I can now use this UI to change the status of my two content groups BoxSets and Albums to be ‘not staged’;

image

and then run up my app alongside this one and it correctly just shows the ‘Film Posters’ content;

image

and if I dynamically now switch a content group’s state to Staged then my app updates;

image

and I can repeat that process with the Albums content group;

image

and so that all seems to be working nicely.

Wrapping Up

I really like these new sorts of capabilities coming to UWP packaging and the APIs here seem pretty easy to work with, although, clearly, you’d need to give quite a lot of early-stage thought to which pieces of your application’s content should be packaged into which content groups.

I put the code that I was playing with here onto github if you’re interested in picking up this (admittedly very simple) sample.

Baby Steps with the Mixed Reality Portal & Simulator on Windows 10 Creators Update

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Creators Update and the Windows Mixed Reality Portal/Simulator

I’ve been trying out the Creators Update (get it here) on an older Surface Pro 3 for quite a while, although I haven’t given it a tonne of focus just yet. With the release, I thought it was time to upgrade both my home PCs and my work PCs and I did that in a few spare hours the other day.

One of the things that the Creators Update brings with it is the Windows Mixed Reality Portal and that portal also comes with the Windows Mixed Reality Simulator.

I’ve got very familiar with running an app like the ‘Holograms’ app on my HoloLens and using it to position objects in the real world – here’s a screenshot of the Spaceman floating in mid-air inside a huge atrium at Manchester Metropolitan University the other month;

image

While I don’t have an immersive Windows Mixed Reality headset available to me today, the Creators Update makes it possible for me to make use of the Holograms app in an entirely virtual world shown to me via the Windows Mixed Reality Portal. So, if I run up the portal on my device and set it up in Developer Mode for Simulation as detailed in this document then I can use the simulator to see the Spaceman in a whole new ‘world’;

image

So now I have my HoloLens, which blends virtual objects into my real world, and I have the Mixed Reality Simulator, which puts virtual objects into a virtual world projected onto a flat 2D screen, and they’re both running what looks like a very similar Holograms app.

Can I Play RoboRaid?

My first thought was ‘Hey, maybe I can now go and play RoboRaid in this environment?’ but I don’t see it listed today in the Store on my device, whether I browse from inside or outside of the Windows Mixed Reality Simulator.

That makes sense to me because I could see the RoboRaid developers perhaps needing to make some changes to their app in order to run inside of this environment rather than on HoloLens.

There are, clearly, differences between HoloLens hardware and immersive headset hardware; the grids on this page list out different features including Gestures, Motion Controllers and Spatial Mapping, so it wouldn’t be surprising if an app like RoboRaid, which makes extensive use of spatial mapping and gestures, needed some tweaks to run on an immersive headset.
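
As an aside, the UWP does give an app a way of asking about spatial mapping support. I haven’t tried this on an immersive headset but, as far as I can tell from the 15063 SDK, a check along the lines of the sketch below would do it (guarding against the API not being present on earlier platform versions);

        bool IsSpatialMappingSupported()
        {
            // SpatialSurfaceObserver.IsSupported() arrived in the 15063 SDK so
            // guard the call on earlier platform versions (e.g. a 14393 HoloLens
            // where spatial mapping is present but the API to ask about it isn't).
            if (Windows.Foundation.Metadata.ApiInformation.IsMethodPresent(
                "Windows.Perception.Spatial.Surfaces.SpatialSurfaceObserver",
                "IsSupported"))
            {
                return (Windows.Perception.Spatial.Surfaces.SpatialSurfaceObserver.IsSupported());
            }
            return (AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Holographic");
        }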

From looking into the Store and the docs, my curiosity was sparked enough to want to have a think about UWP device families, contracts, APIs for detecting headsets and so on, and I wrote a few notes around those below based purely on my experiments in the debugger with the public bits. I have no ‘special’ knowledge here and, clearly, if I did then I wouldn’t have to conduct these types of experiments in a debugger to try and figure things out.

What Device (Family) am I Running On?

I wondered what device family an app runs on inside of the Windows Mixed Reality Simulator on my PC. I thought I knew, but I wanted to check.

If I write a basic ‘Hello World’ app that I run on the HoloLens emulator (Windows 14393) and the Mixed Reality Simulator (Windows 15063) choosing my platforms a little carefully;

image

then I can query the device family property (NB: this is rarely the right way to write device-specific code but it’s a cheap/cheerful thing to do here) and I see this from the Mixed Reality Simulator;

image

and this from the HoloLens emulator;

image

which lines up with what I’d expect – in the HoloLens case, the device is its own computer whereas in the other case it’s my PC that’s the computer. This was the output from this line of code;

            this.txtDeviceFamily.Text = AnalyticsInfo.VersionInfo.DeviceFamily;

Where Do My APIs Come From?

From the point of view of the UWP, this would suggest that when I’m running on the Mixed Reality Simulator I should have access to the Universal API Contract and the Desktop API Extensions.

What about on HoloLens? I would have access to the Universal API Contract alone – as far as I’m aware, there isn’t a set of ‘Mixed Reality’ extension APIs for the UWP and when I look into the folder on my machine I see;

image

although I’m not 100% certain that this is an exhaustive list. If I go and check the documentation then I see that APIs like these ones;

https://docs.microsoft.com/en-us/uwp/api/windows.graphics.holographic

reside in the Universal API contract and so have the potential to be called on any device family, albeit with the developer understanding that they need to check for support prior to calling those APIs.
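
That ‘check before you call’ step usually comes down to the ApiInformation class, either asking whether a particular contract version is present or asking about a specific type or member before touching it. As a small sketch of the kind of check I mean;

        bool HasHolographicApis()
        {
            // Contract version 4 of the universal contract corresponds to the
            // 15063 (Creators Update) SDK surface area.
            return (Windows.Foundation.Metadata.ApiInformation.IsApiContractPresent(
                        "Windows.Foundation.UniversalApiContract", 4) &&
                    Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
                        "Windows.Graphics.Holographic.HolographicDisplay"));
        }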

How then does a developer make a call to know whether they are running inside of a mixed reality headset environment?

Do I Have a Mixed Reality Headset?

I can check whether I have a mixed reality headset or not by making a call to the APIs IsSupported and IsAvailable on this HolographicSpace class.

The former tells me (from the docs) whether the OS supports headsets and the latter tells me whether one is attached to the PC and ready for rendering (post set-up by the user).

I was a bit puzzled because the docs seemed to suggest that the IsSupported API was present in 10586 and yet Lucian’s super-helpful plugin told me differently;

image

and the code crashed on the HoloLens emulator running 14393 so that seemed to suggest that these APIs weren’t part of 14393.

This wouldn’t be a problem for me except that my HoloLens and emulator are on 14393 and so I ended up with code that looked like this for the moment;

        void UpdateHolographicHeadset()
        {
            var text = "Holographic not supported";

            if (Windows.Foundation.Metadata.ApiInformation.IsPropertyPresent(
                "Windows.Graphics.Holographic.HolographicSpace", "IsSupported") &&
                HolographicSpace.IsSupported)
            {
                text =
                    HolographicSpace.IsAvailable ?
                    "Holographic space available" :
                    "Holographic space not available";

                HolographicSpace.IsAvailableChanged += OnHeadsetAvailabilityChanged;
            }
            else if (AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Holographic")
            {
                text = "Holographic space supported on HoloLens";
            }
            this.txtHeadset.Text = text;
        }
        async void OnHeadsetAvailabilityChanged(object sender, object e)
        {
            await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
                () =>
                {
                    this.UpdateHolographicHeadset();
                }
            );
        }

and that seems to display reasonable results on the HoloLens emulator (14393);

image

and gives the same answer on my PC whether I launch the app inside/outside of the Mixed Reality Simulator;

image

and I could get this flag to change by switching simulation on/off and then running the app on my desktop;

image

although I’m not sure that I saw the app update dynamically from the “not available” -> “available” state when I switched Simulation back on; I’d have to test that again.

So, assuming that I was running everything on 15063 and could simply call these APIs, how would I differentiate between immersive/HoloLens headsets?

Immersive Headset?

There’s another flag to be tested on the HolographicDisplay class called IsOpaque which gives information about the type of display present. I could make a call to this (once again, factoring in the missing API on 14393);

        void UpdateDisplayType()
        {
            var displayType = "no display";

            if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
                "Windows.Graphics.Holographic.HolographicDisplay"))
            {
                displayType =
                    (HolographicDisplay.GetDefault()?.IsOpaque == true) ?
                    "opaque lenses" : "transparent lenses";
            }
            else if (AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Holographic")
            {
                displayType = "transparent lenses";
            }
            this.txtDisplayType.Text = displayType;
        }

and on the Mixed Reality Simulator I see;

image

and I get the expected result on the HoloLens emulator;

image

but this could be a little confusing because I don’t actually have an immersive headset and I’m just running this app on a flat screen rather than with stereoscopic projection. The IsStereo flag can help me with that too;

        void UpdateStereoDisplay()
        {
            var stereoType = "unknown display";

            if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
                "Windows.Graphics.Holographic.HolographicDisplay"))
            {
                stereoType = HolographicDisplay.GetDefault()?.IsStereo == true ?
                    "stereoscopic display" : "2D display";
            }
            this.txtStereoDisplay.Text = stereoType;
        }

but I don’t yet understand the output here because I somewhat expected the app inside of the simulator to say “2D” and it doesn’t seem to do that;

image

and it doesn’t when I run it purely on the desktop either;

image

so perhaps I need to think on that one a little bit more.

A Unity Example?

I wanted to see how this worked with a simple Unity example, so I made a ‘blank’ project with the HoloToolkit containing just a red cube and a wall. The idea is that when the user clicks on the cube, the red cube goes green.

I have this scene;

image

and then a script on the cube tries to handle a click to change the cube to be green and it also tries to hide the wall on headsets with transparent lenses;

using HoloToolkit.Unity.InputModule;
using UnityEngine;
#if UNITY_UWP && !UNITY_EDITOR
// These WinRT namespaces only exist when building for UWP, so guard them with
// the same conditional compilation symbol that's used in Awake() below.
using Windows.Graphics.Holographic;
using Windows.System.Profile;
#endif

public class CubeScript : MonoBehaviour, IInputClickHandler
{
    private void Awake()
    {
        var wall = GameObject.Find("Wall");

        if (wall != null)
        {
#if UNITY_UWP && !UNITY_EDITOR
            var opaque = true;

            if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
                "Windows.Graphics.Holographic.HolographicDisplay"))
            {
                opaque = (HolographicDisplay.GetDefault()?.IsOpaque == true);
            }
            else if (AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Holographic")
            {
                opaque = false;
            }
            wall.SetActive(opaque);
#endif // UNITY_UWP
        }
    }
    public void OnInputClicked(InputClickedEventData eventData)
    {
        this.gameObject.GetComponent<Renderer>().material.color = Color.green;
    }
}

This seemed to work out fine on the HoloLens emulator (with the app manifest requesting a minimum platform of 14393, maximum of 15063 and targeting the Universal family);

image

with no wall which is what I was hoping for.

If I then deployed this to my local machine and ran it up inside of the Mixed Reality Portal using the simulator then I saw;

MR

and so I get the wall and the cube running there, although it is worth saying that I did experience some glitches (drivers perhaps?) around getting this to display; sometimes it would display fine but on a few occasions the Mixed Reality Portal seemed to reset itself before I got to the Unity splash screen for my app.

Naturally, I’m sure there are better ways of doing what I’m doing here but I felt like I learned a few things and so thought I’d share them as part of experimenting. It’s exciting to see these bits in the Creators Update and it’ll be even more exciting to get my hands on a headset and see what the experience is like there.

Don’t forget that you can sign up for news on those headsets here.