Experiments with Shared Holographic Experiences and Photon Unity Networking

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Backdrop – Shared Holographic Experiences (or “Previously….”)

Recently, I seem to have adopted this topic of shared holographic experiences. I've written quite a few posts that relate to it and I keep returning to it because I find it really interesting, although most of what I've posted has definitely been experimental rather than any kind of finished, polished solution.

One set of posts began quite a while ago with this post;

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library

where I experimented with writing my own comms library between two HoloLens devices on a local network with the initial network discovery being handled by Bluetooth and with no server or cloud involved.

That had limits though and I moved on to using the sharing service from the HoloToolkit-Unity culminating (so far) in this post;

Hitchhiking the HoloToolkit-Unity, Leg 13–Continuing with Shared Experiences

although I did recently go off on another journey to see if I could build a shared holographic experience on top of the AllJoyn protocol in this post;

Experiments with Shared Holographic Experiences and AllJoyn (Spoiler Alert- this one does not end well)

I should really have got this out of my system by now but I’m returning to it again in this post for another basic experiment.

That recent AllJoyn experiment had a few advantages, including;

  • Performing automatic device discovery (i.e. letting AllJoyn handle the discovery)
  • Not requiring a cloud connection
  • Easy programming model (using the UWP tooling)

but it also had disadvantages: I ended up having to introduce some kind of ‘server’ app when I didn’t really intend to, and performance was pretty bad when it came to passing around what are often large world anchor buffers.

That left me wanting to try out a few other options. I spent a bit of time looking at Unity networking (or UNET) but didn’t progress it too far because I couldn’t get the discovery mechanisms (based on UDP multicasting) to work nicely across a single HoloLens device and the HoloLens emulator, so I let that drop. Again, though, it looks to offer a server-less solution with a single device able to operate as both ‘client’ and ‘host’, and the programming model seemed pretty easy.

Photon Unity Networking

Putting that to one side for the moment, I turned my attention to “Photon Unity Networking” (or PUN) to see if I could make use of that to build out the basics of a shared holographic experience and this post is a write up of my first experiment there.

PUN involves a server which can either be run locally or in the cloud, and Photon provide a hosted version of it. I figured that had to be the easiest starting point and so I went with that although, as you’ll see later, it brought with it a limitation that I could have avoided if I’d decided to host the server myself.

Getting started with cloud-hosted PUN is easy. I went for the free version of this cloud hosted model which seems to offer me up to 20 concurrent users and it was very easy to;

  1. Sign up for the service
  2. Use the portal to create my first application and get an ID that can be fed into the SDK
  3. Download the SDK pieces from the Unity asset store and bring them into a Unity project

and so from there I thought it would be fun to see if I could get some basic experiment with shared holograms up and running on PUN and that’s what the rest of this post is about.

The Code

The code that I’m referring to here is all hosted on Github and it’s very basic in that all that it does (or tries to do) is to let the user use 3 voice commands;

  • “create”
  • “open debug log”
  • “close debug log”

and the keyword “create” creates a cube which should be visible across all the devices that are running the app, positioned at the same place in the physical world on each of them.

That’s it! I haven’t yet added the ability to move or manipulate holograms, or to show the users’ head positions as I’ve done in earlier posts. Perhaps I’ll return to that later.

But the code is hosted here;

Code on Github

and I’m going to refer to classes from it through the rest of the post.

It’s important to realise that the code is supplied without the Photon Application ID (you’d need to get your own) and without the storage access keys for my Azure storage account (you’d need to get your own).

The Blank Project

I think it’s fair to say that Photon has quite a lot of functionality that I’m not even going to attempt to make use of around lobbies and matchmaking – I really just wanted the simplest solution that I could make use of and so I started a new Unity project and added 4 sets of code to it straight off the bat as shown below;

image

Those pieces are;

  1. The HoloToolkit-Unity
  2. The Mixed Reality Design Labs
  3. The Photon Unity Networking Scripts
  4. A StorageServices library

I’ll return to the 4th one later in the post but I’m hoping that the other 3 are well understood and, if not, you can find reference to them on this blog site in many places;

Posts about Mixed Reality

I made sure that my Unity project was set up for Holographic development using the HoloToolkit menu options to set up the basic scene settings, project settings;

image

and specifically that my app had the capability to access both the microphone (for voice commands) and spatial perception (for world anchoring).

From there, I created a scene with very little in it other than a single empty Root object along with the HoloLens prefab from the Mixed Reality Design Labs (highlighted orange below) which provides the basics of getting that library into my project;

image

and I’m now “ready to go” in the sense of trying to make use of PUN to get a hologram shared across devices. Here’s the steps I undertook.

Configuring PUN

PUN makes it pretty easy to specify the details of your networking setup including your app key in that they have an option to use a configuration file which can be edited in the Unity editor and so I went via that route.

I didn’t change too much of the setup here other than to add my application id, specify TCP (more on that later) and a region of EU and then specify that I didn’t want to auto-join a lobby or enable stats as I’m hoping to avoid lobbies.

image

Making a Server Connection

I needed to make a connection to the server and PUN makes that pretty simple.

There’s a model in PUN of deriving your class from a PunBehaviour which then has a set of overrides that you can use to run code as/when certain networking events happen like a server connection or a player joining the game. I wrapped up the tiny bit of code needed to make a server connection based on a configuration file into a simple component that I called PhotonConnector which essentially takes the override-model of PUN and turns it into an event based model that suited me better. Here’s that class;

The PhotonConnector Class

and so the idea here is that I just use the OnConnectedToMaster override to wait for a connection and then I fire an event (FirstConnection) that some other piece of my code can pick up.
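To give a flavour of it, the shape of that component is something like the sketch below. This is based on the PUN (classic) API – PunBehaviour, PhotonNetwork.ConnectUsingSettings and the OnConnectedToMaster override are PUN pieces, while the FirstConnection event is the one mentioned above; the rest of the detail is illustrative rather than being the exact class from the repo.

using System;
using Photon;

public class PhotonConnectorSketch : PunBehaviour
{
  // Raised the first time we get connected to the Photon master server.
  public event EventHandler FirstConnection;

  void Start()
  {
    // Connects using the settings from the PhotonServerSettings configuration file.
    PhotonNetwork.ConnectUsingSettings("1.0");
  }

  public override void OnConnectedToMaster()
  {
    base.OnConnectedToMaster();

    if (this.FirstConnection != null)
    {
      this.FirstConnection(this, EventArgs.Empty);
    }
  }
}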

I dropped an instance of this component onto my Root object;

image

So, that’s hopefully my code connected to the PUN cloud server.

Making/Joining a Room

Like many multiplayer game libraries, PUN deals with the notion of a bounded set of users inside of a “room” (joined from a “lobby”). I wanted to keep this as simple as possible for my experiment here, so I tried to bypass lobbies as much as possible and to avoid building UI for the user to select a room.

Instead, I just wanted to hard-wire my app such that it would attempt to join (or create, if necessary) a room with a fixed name, and so I wrote a simple component to do exactly that;

The PhotoRoomJoiner Class

and so this component is prepared to look for the PhotonConnector, wait for it to connect to the network before then attempting to join/create a room on the server. Once done, like the PhotonConnector it fires an event to signify that it has completed.
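As a rough sketch (rather than the exact class from the repo), PUN’s JoinOrCreateRoom call does the ‘join it, or create it if it doesn’t exist yet’ work in one go, so the component can be as small as something like;

using System;
using Photon;

public class RoomJoinerSketch : PunBehaviour
{
  public string RoomName = "Default Room";

  public event EventHandler RoomJoined;

  // Called once the PhotonConnector has signalled that we're connected.
  public void JoinOrCreateRoom()
  {
    PhotonNetwork.JoinOrCreateRoom(
      this.RoomName,
      new RoomOptions(),       // the post also tweaks the room's cache clean-up option here
      TypedLobby.Default);
  }

  public override void OnJoinedRoom()
  {
    base.OnJoinedRoom();

    if (this.RoomJoined != null)
    {
      this.RoomJoined(this, EventArgs.Empty);
    }
  }
}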

I dropped an instance of this component onto my Root object leaving the room name setting as “Default Room”;

image

and by this point I was starting to realise that I was lacking any way of visualising Debug.Log calls on my device and that was starting to be a limiting factor…

Visualising Debug Output

I’ve written a few ugly solutions to displaying debug output on the HoloLens and I wanted to avoid writing yet another one and so I finally woke up and realised that I could make use of the DebugLog prefab from the Mixed Reality Design Labs;

image

and I left its configuration entirely alone but now I can see all my Debug.Log output by simply saying “open debug log” inside of my application, which is a “very useful thing indeed” given how little I paid for it!

image

One World Anchor Per App or Per Hologram?

In order to have holograms appear in a consistent position across devices, those devices are going to have to agree on a common coordinate system and that’s done by;

  • Creating an object at some position on one device
  • Applying a world anchor to that object to lock it in position in the real world
  • Obtaining (‘exporting’) the blob representing that world anchor
  • Sending the blob over the network to other devices
  • On those additional devices
    • Receiving the blob over the network
    • Creating the same type of object
    • Importing the world anchor blob onto the device
    • Applying (‘locking’) the newly created object with the imported world anchor blob so as to position it in the same position in the physical world as the original

It’s a multi-step process and, naturally, there’s many things that can go wrong along the way.

One of the first decisions to make is whether to apply a world anchor to every hologram shared or to perhaps apply one world anchor across the whole scene and parent all holograms from it. The former is likely to give greater accuracy but the latter is a lot less expensive in terms of how many bytes need to be shipped around the network.

For this experiment, I decided to go with a halfway house. The guidance suggests that;

“A good rule of thumb is to ensure that anything you render based on a distant spatial anchor’s coordinate system is within about 3 meters of its origin”

and so I decided to go with that and to essentially create and share a new world anchor any time a hologram is created more than 3m from an existing world anchor.

In order to do that, I need to track where world anchors have been placed and I do that locally on the device.

Rather than use a hologram itself as a world anchor, I create an empty object as the world anchor and then any hologram within 3m of that anchor would be parented from that anchor.

Tracking World Anchor Positions

In order to keep track of the world anchors that a device has created or which it has received from other devices I have each device maintain a simple list of world anchors with a GUID-based naming scheme to ensure that I can refer to these world-anchors across devices. It’s a fairly simple thing and it’s listed here;

The AnchorPositionList Class
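The essence of it is something along these lines – a sketch rather than the repo’s AnchorPositionList, but it shows the two operations needed: record an anchor’s position against a GUID-based id, and ask whether any known anchor is within some distance (3m here) of a new hologram’s position;

using System;
using System.Collections.Generic;
using UnityEngine;

public class AnchorPositionsSketch
{
  readonly Dictionary<string, Vector3> anchors = new Dictionary<string, Vector3>();

  // Records a newly created anchor, generating the GUID-based name used across devices.
  public string AddNewAnchor(Vector3 worldPosition)
  {
    var anchorId = Guid.NewGuid().ToString();
    this.anchors[anchorId] = worldPosition;
    return anchorId;
  }

  // Records an anchor that arrived from another device with its id already decided.
  public void AddKnownAnchor(string anchorId, Vector3 worldPosition)
  {
    this.anchors[anchorId] = worldPosition;
  }

  // Is there an existing anchor within maxDistance of this position? If so, return its id.
  public bool TryGetNearbyAnchor(Vector3 position, float maxDistance, out string anchorId)
  {
    anchorId = null;
    var bestDistance = float.MaxValue;

    foreach (var entry in this.anchors)
    {
      var distance = Vector3.Distance(entry.Value, position);

      if ((distance <= maxDistance) && (distance < bestDistance))
      {
        bestDistance = distance;
        anchorId = entry.Key;
      }
    }
    return (anchorId != null);
  }
}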

Importing/Exporting World Anchors

The business of importing or exporting world anchors takes quite a few steps and I’ve previously written code which wraps this up into a (relatively) simple single method call where I can hand a GameObject over to a method which will;

  • For export
    • Add a WorldAnchor component to the GameObject
    • Wait for that WorldAnchor component to flag that it isLocated in the world
    • Export the data for that WorldAnchor using the WorldAnchorTransferBatch
    • Return the byte[] array exported
  • For import
    • Take a byte[] array and import it using the WorldAnchorTransferBatch
    • Apply the LockObject call to the GameObject

That code is all wrapped up in a class I called SpatialAnchorHelpers;

The SpatialAnchorHelpers class

One thing I’d add about this class is that it is very much “UWP” specific in that I made no attempt to make this code particularly usable from the Unity Editor and to avoid getting tied up in lots of asynchronous callbacks I just wrote code with async/await which Unity can’t make sense of but, for me, makes for much more readable code.
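To make that flow concrete, an async export along the lines below is roughly what’s involved. This is a sketch rather than the repo’s SpatialAnchorHelpers code, assuming Unity 2017.2-style UnityEngine.XR.WSA namespaces (earlier versions use UnityEngine.VR.WSA instead), and the simple polling loop in the middle is the ‘wait for isLocated’ step that the next couple of paragraphs talk about.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Sharing;

public static class AnchorExportSketch
{
  public static async Task<byte[]> ExportAnchorAsync(GameObject gameObject, string anchorId)
  {
    // Add the world anchor and (simplistically) poll until it is located in the world.
    // Note: after an await the code may no longer be on Unity's main thread - the
    // Dispatcher component described later in the post deals with getting back there.
    var anchor = gameObject.AddComponent<WorldAnchor>();

    while (!anchor.isLocated)
    {
      await Task.Delay(250);
    }

    // Export via a transfer batch, gathering the bytes as they arrive.
    var completion = new TaskCompletionSource<byte[]>();
    var buffer = new List<byte>();
    var batch = new WorldAnchorTransferBatch();
    batch.AddWorldAnchor(anchorId, anchor);

    WorldAnchorTransferBatch.ExportAsync(
      batch,
      data => buffer.AddRange(data),
      reason =>
      {
        if (reason == SerializationCompletionReason.Succeeded)
        {
          completion.SetResult(buffer.ToArray());
        }
        else
        {
          completion.SetException(
            new InvalidOperationException("Anchor export failed: " + reason));
        }
      });

    return await completion.Task;
  }
}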

The SpatialAnchorHelpers code also needs to “wait” for the isLocated flag on a WorldAnchor component to signal ‘true’ and so I needed to make an awaitable version of that check, and I used this pretty ugly class that I’ve used before;

The PredicateLoopWatcher class

I’m not too proud of that and it perhaps needs a rethink but it’s “kind of working” for me for now although if you look at it you’ll realise that there’s a strong chance that it might loop forever and so some kind of timeout might be a good idea!
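For reference, the shape of that kind of helper, with the timeout bolted on, could be as simple as the sketch below (names here are my own rather than the repo’s PredicateLoopWatcher);

using System;
using System.Threading.Tasks;

public static class PredicateWaiterSketch
{
  // Polls the predicate until it returns true or the timeout expires.
  public static async Task<bool> WaitForAsync(
    Func<bool> predicate, TimeSpan timeout, int pollIntervalMilliseconds = 250)
  {
    var deadline = DateTime.UtcNow + timeout;

    while (!predicate())
    {
      if (DateTime.UtcNow > deadline)
      {
        // Give up rather than looping forever.
        return false;
      }
      await Task.Delay(pollIntervalMilliseconds);
    }
    return true;
  }
}

which would then be used as something like await PredicateWaiterSketch.WaitForAsync(() => anchor.isLocated, TimeSpan.FromSeconds(30));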

Using async/await without a suitable SynchronizationContext can mean that code can easily end up on the wrong thread for interacting with Unity’s UI objects and so I added a Dispatcher component which I try to use to help with marshalling code back onto Unity’s UI thread;

The Dispatcher Class

and so that’s part of the scripts I wrote here too and I just added an instance of it to my root script so that I’d be able to get hold of it;

image
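The usual pattern for that kind of component is to queue up work from any thread and drain the queue in Update(), which always runs on Unity’s main thread – a minimal sketch (not necessarily the repo’s Dispatcher class) being;

using System;
using System.Collections.Generic;
using UnityEngine;

public class DispatcherSketch : MonoBehaviour
{
  readonly Queue<Action> workItems = new Queue<Action>();

  // Can be called from any thread.
  public void Invoke(Action action)
  {
    lock (this.workItems)
    {
      this.workItems.Enqueue(action);
    }
  }

  // Update() runs on Unity's main thread, so work dequeued here can safely touch
  // Unity objects like transforms and world anchors.
  void Update()
  {
    lock (this.workItems)
    {
      while (this.workItems.Count > 0)
      {
        this.workItems.Dequeue()();
      }
    }
  }
}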

Passing World Anchor Blobs Around the Network

For even the simplest, most basic solution like this one there comes a time when one device needs to ‘notify’ another device that either;

  • a new world anchor has been created
  • a new hologram has been created relative to an existing world anchor

and so there’s a need for some kind of ‘network notification’ which carries some data with it. The major decision, though, is how much data, and initially what I was hoping to achieve here was for the notification to carry all of the data.

To put that into plainer English, I was hoping to use PUN’s RPC feature to enable me to send out an RPC from one device to another saying

“Hey, there’s a new world anchor called {GUID} and here’s the 1-10MB of data representing it”

Now, I must admit that I suspected that this would cause me problems (like it did when I tried it with AllJoyn) and it did.

Firstly, the default protocol for PUN is UDP and, naturally, it’s not a great idea to try and send megabytes of data over UDP this way, so I switched the protocol for my app to TCP via the configuration screen that I screenshotted earlier.

Making an RPC method in PUN is simple: I just need to make sure that there’s a PhotonView component on my GameObject, add a [PunRPC] attribute to the method and make sure that the parameters can be serialized by PUN or by my custom code if necessary.

Invoking the RPC method is also simple – you grab hold of the PhotonView component and use the RPC() method on it and there’s a target parameter on there which was really interesting to me.

In my scenario, I only really need two RPCs, something like;

  • NewWorldAnchorAdded( anchor identifier, anchor byte array )
  • NewHologramAdded( anchor identifier, hologram position relative to anchor )

Given that I was hoping to pass the entire world anchor blob over the RPC call, I didn’t want that mirrored back to the originating client by the server because that client already had that blob and so it would be wasteful.

Consequently, I used the Targets.OthersBuffered option to try and send the RPC to all the other devices in the room.
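As a sketch of what that call site looks like (assuming PUN classic, where the targeting enum is PhotonTargets, and where the anchorId/anchorBlobBytes values here are just illustrative locals);

void SendNewWorldAnchor(string anchorId, byte[] anchorBlobBytes)
{
  // Requires a PhotonView component on the same GameObject.
  var photonView = this.GetComponent<PhotonView>();

  // OthersBuffered: buffered on the server for late joiners, not echoed back to the sender.
  photonView.RPC(
    "NewWorldAnchorAdded",      // must match the name of a [PunRPC]-attributed method
    PhotonTargets.OthersBuffered,
    anchorId,
    anchorBlobBytes);
}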

The other nice aspect around this option is the Buffered part in the sense that the server will keep the RPC details around and deliver it (and others) to new clients as they join the room.

Perfect.

It didn’t work for me though because, although PUN itself doesn’t place size limits on parameters to an RPC call, the cloud-hosted version of PUN does, and the server bounced my RPCs straight back at me. After a little online discussion I was pointed to this article which flags that the server limit is 0.5MB for a parameter.

So, using RPCs for these large blobs wasn’t going to work, much like it didn’t really work very nicely for me when I looked at doing something similar over AllJoyn.

What next? Use a blob store…

Putting Blobs in…a Blob Store!

I decided that I’d stick with the RPC mechanism for signalling the details of new world anchors and new holograms but I wouldn’t try and pass all of the bytes of the blob representing the world anchor across that boundary.

Instead, given that I’d already assumed a cloud connection to the PUN server I’d use the Azure cloud to store the blobs for my world anchors.

The next question is then how to best make use of Azure blob storage from Unity without having to hand-crank a bunch of code and set up HTTP headers etc. myself.

Fortunately, my colleague Dave has done some work around calling into Azure app services and blob storage from Unity and he has a blog post around it here;

Unity 3D and Azure Blob Storage

which points to a github repo over here;

Unity3DAzure on Github

and so I lifted this code into my project and wrote my own little BlobStorageHelper class around it so as to make it relatively easy to use in my scenario;

The AzureBlobStorageHelper class

There’s not a lot to it on top of what Dave already wrote – I just wrap it up for my use and add a little bit of code to download a blob directly from blob storage.
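The ‘download a blob directly’ piece doesn’t need much – with the container set up for public reads (as described below), a plain HTTP GET against the blob’s URL is enough, so a sketch of it on the UWP/.NET 4.6 side might look something like this (the container name is the one used in this post, the rest is illustrative);

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class BlobDownloadSketch
{
  // Downloads the world anchor blob named anchorId from the public-read container.
  public static async Task<byte[]> DownloadAnchorBlobAsync(string accountName, string anchorId)
  {
    var uri = new Uri(string.Format(
      "https://{0}.blob.core.windows.net/sharedholograms/{1}",
      accountName,
      anchorId));

    using (var client = new HttpClient())
    {
      // No authentication needed for GETs against a public-read container.
      return await client.GetByteArrayAsync(uri);
    }
  }
}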

Naturally, to set this up I needed an Azure storage account (I already had one). I just made a container within it (named ‘sharedholograms’), made sure that it allowed public reads and authenticated writes, and copied out the access key so that the code would be able to make use of it.

I can then set up an instance of this component on my root game object;

image

so it’s available any time I want it from that script.

Back to RPCs

With the issue of the large byte array parameters out of the way, my RPCs could stay simple and their final signatures ended up being;

  [PunRPC]
  void WorldAnchorCreatedRemotely(string sessionId, string anchorId)
  {
  }
  
  [PunRPC]
  void CubeCreatedRemotely(string sessionId, string anchorId, Vector3 relativePosition)
  {
  }

because the name of the blob on the blob store can be derived from the anchorId and so it’s enough just to distribute that id.

However, what’s this sessionId parameter? This goes back to the earlier idea that I would dispatch my RPC calls using the OthersBuffered targeting option to notify all devices apart from the current one that something had changed.

However, what I seemed to find was that if DeviceA created one world anchor and three holograms and then quit/rejoined the server it didn’t seem to receive those 4 buffered RPCs from the server which would tell it to recreate those objects.

I’m unsure quite how PUN makes the distinction of “others” but I decided that perhaps the best idea was to switch OthersBuffered to AllBuffered and then use my own mechanism to ignore RPCs which originated on the current device (a sketch of that check follows below). Because I’m no longer sending large byte arrays over the network this didn’t feel like a particularly wasteful thing to do and so I stuck with it, but it could do with a little more investigation on my part.
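That mechanism amounts to little more than the sketch below inside the Root script – the device makes up a GUID-based session id when it starts (needing using System; for Guid), sends it with every RPC, and ignores anything that comes back carrying its own id;

// Generated once per run of the app and sent as the first parameter of every RPC.
static readonly string localSessionId = Guid.NewGuid().ToString();

[PunRPC]
void WorldAnchorCreatedRemotely(string sessionId, string anchorId)
{
  if (sessionId == localSessionId)
  {
    // With AllBuffered the server echoes our own RPCs back (and replays them when
    // we rejoin), so anything this device originated is simply ignored.
    return;
  }
  // ...otherwise, download the blob named anchorId from storage and import it.
}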

The other thing that I played with here was the way in which the room is originally created by my PhotoRoomJoiner component in that, initially, I wasn’t setting the RoomOptions.CleanUpCacheOnLeave which I think means that the buffered RPCs left by a player would disappear when they left the room.

However, I still seemed to find that even when I asked the room to keep around RPCs for a player that left the room, the OthersBuffered option didn’t seem to deliver those RPCs back to that player when they connected again, hence my sticking with the AllBuffered option for the moment. Again, it needs more investigation.

Those big blob buffers though still cause me another problem…

Ordering of RPCs

I saw this one coming. Now that the upload/download of the blob representing a world anchor is done asynchronously through the cloud, outside of the RPCs being delivered by Photon, it’s fairly easy to see a sequence of events where an RPC arrives asking for a hologram to be created relative to a world anchor that has not yet been downloaded to the device. It’s a race and it’s pretty much certain to happen, especially if a device connects to a room with buffered RPCs containing a sequence of anchors and holograms.

Consequently, I simply keep a little lookaside list of holograms that a client has been asked to create when the world anchor that they are parented off has not yet been created. The assumption is that the world anchor will show up at some point in the future and this list can be consulted to check for all the pending holograms that then need to be created.

The AnchorCubeList Class
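In sketch form (names illustrative rather than the repo’s AnchorCubeList), that lookaside list just needs to park hologram positions against an anchor id and hand them back when the anchor finally arrives;

using System.Collections.Generic;
using UnityEngine;

public class PendingHologramsSketch
{
  readonly Dictionary<string, List<Vector3>> pending = new Dictionary<string, List<Vector3>>();

  // Called when a 'create hologram' RPC arrives before its parent anchor has been imported.
  public void Add(string anchorId, Vector3 relativePosition)
  {
    List<Vector3> positions;

    if (!this.pending.TryGetValue(anchorId, out positions))
    {
      positions = new List<Vector3>();
      this.pending[anchorId] = positions;
    }
    positions.Add(relativePosition);
  }

  // Called once the world anchor with this id has been imported and locked - returns
  // the positions of all the holograms that were waiting on it.
  public List<Vector3> Drain(string anchorId)
  {
    List<Vector3> positions;

    if (this.pending.TryGetValue(anchorId, out positions))
    {
      this.pending.Remove(anchorId);
      return positions;
    }
    return new List<Vector3>();
  }
}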

Bringing it All Together

All of these components are ultimately brought together by a simple “co-ordinating” script on my (almost) empty GameObject named Root that has been in the scene all along;

image

The only component that I haven’t mentioned there is the use of a KeywordManager from the HoloToolkit-Unity which sends the voice keyword “create” through to a function on my Root script which kicks off the whole process of creating a world anchor if necessary before creating a hologram (cube) 3m along the user’s gaze vector.

That Root script is longer than I’d like it to be at the moment so I could tidy that up a little but here it is in its entirety;

The Root Class
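In outline, the “create” handler has to do something along the lines of the sketch below. To be clear, this pulls together the illustrative helpers from the sketches earlier in the post (the anchor position list, the exporter, the storage helper and the session id) rather than quoting the actual Root class, and it assumes the script derives from PunBehaviour so that the photonView property is available;

async void OnCreateKeyword()
{
  // Place the new cube 3m along the user's gaze vector.
  var position = Camera.main.transform.position + (Camera.main.transform.forward * 3.0f);

  string anchorId;

  if (!this.anchorPositions.TryGetNearbyAnchor(position, 3.0f, out anchorId))
  {
    // No anchor within 3m - create an empty, GUID-named anchor object, export its
    // world anchor, push the blob to Azure storage and tell the other devices.
    anchorId = this.anchorPositions.AddNewAnchor(position);
    var anchorObject = new GameObject(anchorId);
    anchorObject.transform.position = position;

    var anchorBlob = await AnchorExportSketch.ExportAnchorAsync(anchorObject, anchorId);
    await this.storageHelper.UploadBlobAsync(anchorId, anchorBlob);

    this.photonView.RPC("WorldAnchorCreatedRemotely",
      PhotonTargets.AllBuffered, localSessionId, anchorId);
  }

  // Create the cube, parent it off the anchor and notify the other devices of its
  // position relative to that anchor.
  var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
  cube.transform.parent = GameObject.Find(anchorId).transform;
  cube.transform.position = position;

  this.photonView.RPC("CubeCreatedRemotely",
    PhotonTargets.AllBuffered, localSessionId, anchorId, cube.transform.localPosition);
}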

Testing and Carrying On…

I’ve left it to the end of the blog post to admit that I haven’t tested this much at the time of writing – it’s a bit of an experiment and so don’t expect too much from it!

One of the reasons for that is that I’m currently working with one HoloLens and the emulator, and so importing/exporting of world anchors can be a bit of a challenge – it’s hard to know in the emulator whether things are working correctly or not, and it’s much easier to test with multiple devices.

I’ll try that out in the coming days/weeks and will update the post or add to another post. I’d also like to add a little more into the code to make it possible to manipulate the holograms, show the user’s position as an avatar and so on as I’ve done in other posts around this topic so I’ll create a branch and keep working on that.

Beyond that, it might be “nice” to take away the dependency on PUN here and just build out a solution using nothing but standard pieces from Azure like service bus + blob storage as I don’t think that’d be a long way from what I’ve got here – that might be another avenue for a future post…

Exploring the Mixed Reality Design Labs–Experiment #2

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this previous post;

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

I wanted to continue to explore some of the things present in the Mixed Reality Design Labs work and, since my last post, I’d revisited the github and found this doc page which I hadn’t read last time I visited the site. It’s a great read – without it I’d felt a little like I was wandering without a map – and I’m not quite sure how I missed it the first time around;

MRDL – Examples Write Up Including Interactable Objects, Object Collection, Progress, App Bar and Bounding Box

That’s definitely a good read and I’d also missed this document about including the MRDL as a submodule;

https://github.com/Microsoft/MRDesignLabs_Unity_Tools

and yet another thing that I’d missed was that the MRDL inserts a custom menu into Unity;

image

which can be used to insert the HoloLens prefab I mentioned in the previous post (from the Interface menu) and to create the other areas of functionality listed there on the menu, including quite a few buttons, receivers and cursors.

Exploring

The rest of this post is just the rough notes I wrote down while exploring one area of the MRDL. I chose to experiment with buttons, as UIs often seem to end up with one type of button or another, and I figured that I would poke around in the code and start with the Button type;

Button on github

and that told me that there’s an abstract base class here which has (at least);

  • ButtonState (pressed, targeted, disabled, etc)
  • Whether the button requires the gaze to be on it or not
  • Events for when the state changes, when it is pressed, held, released, cancelled

along with a few private/implementation pieces. It all feels fairly ‘expected’ but there’s a relationship here with an InteractionManager;

InteractionManager on github

which looks to be a singleton handling things like tapping, manipulation, navigation events and somehow routing them (via Unity’s SendMessage) on via an AFocuser object.

AFocuser on github

This looks to be a perhaps more developed form of what’s done in the HoloToolkit-Unity by types like the GazeManager, and so it’s “interesting” that this framework looks to be reworking these particular wheels rather than picking up those bits from the HoloToolkit.

There would be quite a lot to explore here and I didn’t dig into all of it – that’ll have to be for another day. For today, I went back to exploring buttons and the derived types look to be;

  • KeyButton
  • AnimButton
  • SpriteButton
  • MeshButton
  • CompoundButton
  • AnimControllerButton
  • BoundingBoxHandle
  • ObjectButton

and I went back to reading the document on these and also had a good poke around the Interactable Object sample;

image

and I think I started to get a little bit of a grip of what was going on but I daresay I’ve got a bit more to learn here!

I tentatively added an empty parent object and a cube to my scene;

image

and then added the Compound Button script to my GameObject and it moaned at me (in a good way);

image

So I took away the box collider that comes by default with my cube and it said;

image

and so I added a box collider to the empty parent game object and the button became ‘happy’;

image

I then got a bit adventurous, having noticed the notion of ‘receivers’ which look to be a form of event relay and I added a sphere to my scene and set up a “Color Receiver” on my empty game object;

image

and, sure enough, when I click on my cube my sphere toggles red/white;

image

but, equally, I think I could just handle this event by writing code myself – e.g.

  private void Start()
  {
    var button = this.GetComponent<CompoundButton>();
    button.OnButtonPressed += this.OnPressed;
  }
  // The MRDL button events pass along the GameObject of the button that raised them.
  private void OnPressed(GameObject button)
  {
    // react to the press here - e.g. toggle the sphere's colour
  }

and that seems to work just fine. I did then wonder whether I could create some hierarchy like this in my scene;

image

and then could I handle the button press by adding a script to the GrandParent object? I tried adding something like this;

using HUX.Interaction;

public class Startup : InteractibleObject
{
  private void Start()
  {
  }
  protected void FocusEnter()
  {
  }
  protected void FocusExit()
  {
  }
  protected void OnTapped(InteractionManager.InteractionEventArgs eventArgs)
  {
  }
}

but the debugger didn’t suggest that my OnTapped method was called. However, the FocusEnter and FocusExit calls do happen at this ‘grand parent’ level and this seems to be in line with the comments inside of the source code;

InteractibleObject on github

which says;

/// FocusEnter() & FocusExit() will bubble up through the hierarchy, starting from the Prime Focus collider.
/// All other messages will only be sent to the Prime Focus collider

and this notion of the ‘Prime Focus collider’ led me to go and take a look at the source for;

AFocuser on github

where the UpdateFocus method actually walks the hierarchy to build up the list of parent objects that will need to be notified of focus loss/gain while it updates its notion of the PrimeFocus and so (from a quick look) that all seems to tie up.

I think I could achieve what I wanted, though, by making my grand parent script an InteractionReceiver (as the sample does) and then picking up the button press that way – i.e.

public class Startup : InteractionReceiver
{
  private void Start()
  {
  }
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    base.OnTapped(obj, eventArgs);
  }
}

and if I marry this up with the right settings in the UI to tell that script which interactible objects I want it to receive from;

image

then that seems to work out fine.

Quite commonly in a Mixed Reality app, I’d like to use speech in addition to moving my gaze and air-tapping and so it looks like the MRDL makes that easy in that I can add;

image

although I found that when I did this, I hit a snag in that the ColorReceiver that I’d previously added seemed to work fine when invoked by an air-tap but didn’t work when invoked by the speech command ‘click’ and that seemed to come down to this runtime error;

Failed to call function OnTapped of class ColorReceiver
Calling function OnTapped with no parameters but the function requires 2.

so maybe that’s a bug or maybe I’m misunderstanding how it’s meant to work but if I take the ColorReceiver away and handle the button OnButtonPressed event myself then I still see something similar – i.e. my code runs when I tap on the button but when I say “click” it doesn’t run but, instead, I see the debug output saying;

Keyword handler called in GameObject for keyword click with confidence level High

and I saw the same thing if I went back to having my code be an InteractionReceiver in that the air-tap seems to result in one call whereas the voice command “click” seems to result in another as below;

public class Startup : InteractionReceiver
{
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // This is called when I air-tap
    base.OnTapped(obj, eventArgs);
  }
  void OnTapped()
  {
    // This is called when I say 'click'
  }
}

and, again, I’m unsure whether that’s my understanding or whether it’s not quite working right but I figured I’d move on as I’d noticed that the “Compound Button Speech” script took two keyword sources – one was the local override I’ve used above where I can simply set the text but the other looks for a Compound Button Text;

image

and so I added one of those in, chose the provided profile and fed it a 3DTextMesh and then I selected that I wanted to override the Offset property and just dragged my text mesh around a little in Unity to try and position it ‘sensibly’;

image

and that all seemed to work fine. It’d be great to have my button give audible cues when the user interacted with it and so I also added in a Compound Button Sounds script which then wants a ButtonSoundProfile and I played with creating my own versus using the one that ships in the library;

image

and that worked fine once I’d managed to figure out how to get the sounds to come out properly over the holographic remoting app from Unity.

At this point, I’d added quite a lot of scripts to my original cube and so I reset things and went and grabbed a 3D object from Remix3D, this firefighter;

image

and dropped it into my scene as a child of my GameObject;

image

and then added back the Compound Button script and a Box Collider and then went and added the Compound Button Mesh script and tried to set up some scale and colour changes based on the states within;

image

and that seemed to work out fine – i.e. when I pressed on the button, the fireman got scaled up and the mesh got rendered in red;

image

so, that’s all really useful.

I then threw away my scene again, went back to just having a cube and set up a couple of animations – one which rotated the cube by 45 degrees and another which put it back to 0 and I built an animator around those with the transitions triggered by a change in the Targeted boolean parameter;

image

and then dragged an Animator and a Compound Button Anim component onto my GameObject;

image

and that seemed to give me the basics of having my cube animate into rotation when I focus on it and animate back when I take the focus away from it – a very useful tool to have in the toolbox. I noticed that Object Button seems to do something similar except it looks to model the various states via a set of different prefabs – i.e.

image

The last one of these Compound Button X types that I wanted to get my head around for this post was the Compound Button Icon type. This feels a little bit like the Text variant in that I can create an empty GameObject and then make it into a Compound Button (+Icon) as below;

image

and this seems to be driven off a ButtonIconProfile which can either be font based or texture based so I set up one that was font based;

image

and then there’s a need here for something to render the icon and I found it “interesting” to add a Cube as a child of my button object and then toggle the dropdown here to select my Cube as the rendering object. The component made a few changes on my behalf!

Here’s the before picture of my cube;

image

and this is what happens to it when I choose it as the renderer for my icon;

image

so – the mesh filter has gone and the material/shader has been changed for me and I can then go back to the Compound Button Icon component and choose the icon;

image

Very cool.

Wrapping Up

Having done a bit of exploring, I can now start to get some idea of what the tooling is doing if I use an option like;

image

and create myself a “Rectangle Button” which gives me all the glory of;

image

and so I’ve got compound, mesh, sounds, icon, text and speech all in one go and ready to be used and it takes me only a second or two in order to get buttons created;

image

and there’s a lot of flexibility in there.

As I said at the start of the post, I’m just experimenting here and I may well be getting things wrong so feel free to let me know and I’ll carry on my exploring in later posts…

Windows 10 Creators Update, UWP Apps–An Experiment with Streaming Installations

I’ve been slowly trying to catch up with what happened at //build via the videos on Channel9 focusing mainly on the topics around Windows 10 UWP and Mixed Reality with a sprinkling of what’s going on in .NET, C# and some of the pieces around Cognitive Services, identity and so on.

Given that //build was a 3 day conference, it generates a video wall worth of content which then takes a very long time to try and catch up with and so I expect I’ll still be doing some of this catching up over the coming weeks and months but I’m slowly making progress.

One of the many sessions that caught my eye was this one on changes to the packaging of UWP apps;

image

which talks about some of the changes that have been made in the Creators Update around breaking up UWP packages into pieces such that they can be installed more intelligently, dynamically and flexibly than perhaps they can today.

It’s well worth watching the session but if I had to summarise it I’d say that it covers;

  • How packages have always been able to be broken into pieces containing different types of resources using the “Modern Resource Technology” (MRT) such that (e.g.) only the resources that are relevant to the user’s language or scale or DirectX level are downloaded for the app.
  • How packages in Creators Update can be broken apart into “Content Groups” and partitioned into those which are required for the application to start up and those which can be deferred and downloaded from the Store at a later point in order to improve the user’s experience. There are APIs to support the developer being aware of which parts of the package are present on the system, to monitor and control download priority, etc.
  • How optional packages can be authored for Creators Update such that one or more apps can optionally make use of a separate package from the Store which can install content (and (native) code) into their application.

As you might expect, there’s lots of additional levels of detail here so if you’re interested in these bits then some links below will provide some of that detail;

and there’s more generally on the App Installer Blog and additional interesting pieces in that //build session around possible future developments and how Microsoft Office ™ is making use of these pieces in order to be deliverable from the Windows Store.

The idea of ‘streaming installations’ seemed immediately applicable to me, but I need to spend some more time thinking about optional packages because I was struck by the similarities between them and app extensions (more here). I haven’t quite figured out the boundaries there beyond the ability of an optional package to deliver additional (native) code to an application, which extensions can’t do as far as I’m aware.

Having got my head around streaming installations, I wanted to experiment with them and that’s where the rest of this post is going.

I needed an app to play with and so I went and dug one out of the cupboard…

A Simple Pictures App

I wrote this tiny little “app” around the time of the UK “Future Decoded” show in late 2016 in order to demonstrate app extensions.

The essential idea was that I have this app which displays pictures from a group;

image

and there is one set of pictures built in – some film posters – plus two more sets of pictures under the groupings of ‘Albums’ and ‘BoxSets’.

The original app used app extensions and so the ‘Albums’ and ‘BoxSets’ collections lived in another project providing an ‘extension’ to the content such that when the extension was installed on the system all of the 3 sets of content are loaded and the app looks as below;

image

This was pretty easy to put together using app extensions and it’s similar to what I wrote up in this blog post about app extensions where I used extensions and App Services together to build out a similarly extensible app.

So, having this code kicking around it seemed like an obvious simple project that I could use to try out streaming installations on Creators Update.

Defining Content Groups

Firstly, I brought all 3 of my content folders into the one project (i.e. Posters, Albums, BoxSets) as below;

image

and then I set about authoring a SourceAppxContentGroupMap.xml file as covered in this MSDN article;

Create and convert a source content group map

and I learned a couple of things there which were to firstly make sure that you set the right build action for that XML file;

image

and secondly to make sure that you’re running the right version of makeappx if you expect it to have the new /convertCGM option. That right version on my system would come from;

image

at the time of writing although I ultimately let Visual Studio build the content group map and only used makeappx as part of experimenting.

My content group map looked as below – I essentially just define that everything for the application is required apart from the two folders named Albums and BoxSets which are not required to start the application and so can be downloaded post-installation by the system as it sees fit;

<?xml version="1.0" encoding="utf-8"?>
<ContentGroupMap xmlns="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" xmlns:s="http://schemas.microsoft.com/appx/2016/sourcecontentgroupmap" >
  <Required>
    <ContentGroup Name="Required">
      <File Name="*"/>
      <File Name="WinMetadata\*"/>
      <File Name="Properties\*"/>
      <File Name="Assets\*"/>
      <File Name="Posters\**"/>
    </ContentGroup>
  </Required>
  <Automatic>
    <ContentGroup Name="BoxSets">
      <File Name="BoxSets\**"/>
    </ContentGroup>
    <ContentGroup Name="Albums">
      <File Name="Albums\**"/>
    </ContentGroup>
  </Automatic>
</ContentGroupMap>

This file is then an input to produce the actual AppxContentGroupMap.xml file and I just used the Visual Studio menu to generate it as per the docs;

image

and after a couple of initial gremlins caused by me, that seemed to work out fine.

Writing Code to Load Content Groups

If the application is going to be installed “in pieces” then my code is going to have to adapt such that it can dynamically load up folders of pictures as they appear post-install.

Because I’d previously written the code to support a similar scenario using app extensions and because the code is very simple it wasn’t particularly difficult to do this. I have a function which attempts to figure out whether the content groups for the Albums and BoxSets have been installed and, if so, it adds them to what the application is displaying. This snippet of code covers it;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
    }
    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      if (args.IsComplete)
      {
        await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
          async () =>
          {
            // Warning - untested at time of writing, I need to check
            // whether FullName is the right property here because 
            // I really want the *content group name*.
            await this.AddPictureSourceAsync(args.Package.Id.FullName,
              args.Package.Id.FullName);
          }
        );
      }
    }
    PackageCatalog catalog;

and this is making use of APIs that come from either SDK 14393 or 15063 on the Package and PackageCatalog classes in order to check which content groups are available. If I find that my Albums/BoxSets groups are available then I have code which goes and adds all the pictures from those folders to the collections which live behind the UI.

The code is also attempting to handle the PackageInstalling event to see if I can dynamically respond to the two non-required content groups being added while the application is running. Note the comment in there about me not actually having seen that code run just yet – I’ll come back to why that is in just one second, as it turns out to be the wrong code.

Testing…

How to try this out?

In the //build session, there’s a few options listed around how you can test/debug a streaming install without actually putting your application into the Store. One method makes use of the PackageCatalog APIs to programmatically change the installation status of the content groups, another makes use of the Windows Device Portal (although I’m unsure as to whether this one is implemented yet) and there’s an option around using the regular PowerShell add-appxpackage command.

Testing via PowerShell

I thought I’d try the PowerShell option first and so I made a .APPX package for my application via the Store menu in Visual Studio;

image

and then made sure that I wasn’t making an APPX bundle;

image

and then I got hold of the temporary certificate that this generates and made it trusted on my system before going to install the .APPX file via PowerShell;

image

and so the key part here is the new –RequiredContentGroupOnly parameter to the Add-AppxPackage command. With that command executed, I can see that the app only has access to the Posters collection of images from its required content group and so that all seems good;

image

I also found it interesting to go and visit the actual folder on disk where the application is installed and to see what the Albums/BoxSets folders representing the ‘automatic’ content groups look like.

The first thing to say is that those folders do exist and here’s what the content looks like at this point in the process;

image

so there are “marker files” present in the folders and so (as advised in the //build session) code would have to be careful not to confuse the presence of the folders/files with the content group’s installation status.

I’d hoped to then be able to use the add-appxpackage command again to add the other two content groups (Albums/BoxSets) while the application was running but when I tried to execute that, I saw;

image

Now, this was “very interesting” in that I was reading the section of this page titled “Sideloaded Stream-able App” and it suggested that;

With the debugger attached, you can install the automatic content groups by:

Add-AppxPackage –Path C:\myapp.appx

Which is the exact same command but without the flag (what happens is that the platform will see that the app is already installed and will only stage the files that are missing).

So I attached my debugger to the running app and ran the command again and, sure enough, I could see that the debugger hit a first-chance exception in that piece of untested code that I’d listed earlier;

image

and so, sure enough, my code was being called here as the package started to install but that code wasn’t working because it was confusing the content group name with the application’s full package name.

That didn’t surprise me too much – it had been a bit of a ‘wild guess’ that I might use the PackageCatalog.PackageInstalling event in this way and I was clearly wrong, so I went and reworked that code to make use of the far more sensible sounding PackageContentGroupStaging event as below;

    async Task AddStreamedPictureSourcesAsync()
    {
      // Handle any streamed packages that are already installed.
      var groups = await Package.Current.GetContentGroupsAsync();

      // TBD - unsure exactly of the state to check for here in order
      // to be sure that the content group is present.
      foreach (var group in groups.Where(
        g => !g.IsRequired && g.State == PackageContentGroupState.Staged))
      {
        await this.AddPictureSourceAsync(group.Name, group.Name);
      }

      // Now set up handlers to wait for any others to arrive
      this.catalog = PackageCatalog.OpenForCurrentPackage();
      this.catalog.PackageInstalling += OnPackageInstalling;
      this.catalog.PackageContentGroupStaging += OnContentGroupStaging;
    }

    async void OnContentGroupStaging(
      PackageCatalog sender, PackageContentGroupStagingEventArgs args)
    {
      if (args.IsComplete)
      {
          await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
            async () =>
            {
              await this.AddPictureSourceAsync(
                args.ContentGroupName,
                args.ContentGroupName);
            }
          );
      }
    }

    async void OnPackageInstalling(
      PackageCatalog sender,
      PackageInstallingEventArgs args)
    {
      // TODO: Remove this handler, don't think it's useful but leaving
      // it for the moment for debugging.
      Debugger.Break();
    }

This looked like it was far more likely to work but what I found was;

  1. The Add-AppxPackage command would still fail when I tried to add the non-required content groups to the already running app.
  2. From the debugger, I could see that the PackageInstalling event was still firing but the PackageContentGroupStaging event wasn’t. I suspect that the Add-AppxPackage command is quitting out between those 2 stages and so the first event fires and the second doesn’t.

This means that I haven’t been able to use this method just yet to test what happens when the app is running and the additional content groups are installed.

The best that I could find to do here was to install the required content group using the –RequiredContentGroupOnly parameter and then, with the application running, install the other groups using the –ForceApplicationShutdown option. Sure enough, the app would go away and come back with all 3 of my content groups rather than just the required one;

image

and so that shows that things are working across app executions, but it doesn’t test out how they work while the application is up and running – which might well be the case if the user gets the app from the Store, runs it and then the additional content groups show up over the first few minutes of the user’s session with the app.

Testing via the Streaming Install Debugging App

At this point, I went back to this blog post and tried out the steps under the heading of “Using the Streaming Install Debugging App”. This involves going off to download this app from github which then uses the APIs to manipulate the installation status of the content groups within my app.

I uninstalled my app from the system and then reinstalled it by hitting F5 in Visual Studio and then I ran up the debugging app and, sure enough, it showed me the details of my app;

image

and so I can now use this UI to change the status of my two content groups BoxSets and Albums to be ‘not staged’;

image

and then run up my app alongside this one and it correctly just shows the ‘Film Posters’ content;

image

and if I dynamically now switch a content group’s state to Staged then my app updates;

image

and I can repeat that process with the Albums content group;

image

and so that all seems to be working nicely.

Wrapping Up

I really like these new sorts of capabilities coming to UWP packaging and the APIs here seem to make it pretty easy to work with although, clearly, you’d need to give quite a lot of early-stage thought to which pieces of your application’s content should be packaged into which content groups.

I put the code that I was playing with here onto github if you’re interested in picking up this (admittedly very simple) sample.