Experiments with Shared Holographic Experiences and Photon Unity Networking

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Backdrop – Shared Holographic Experiences (or “Previously….”)

Recently, I seem to have adopted this topic of shared holographic experiences; I’ve written quite a few posts that relate to it and I keep returning to it as I find it really interesting, although most of what I’ve posted has definitely been experimental rather than any kind of finished/polished solution.

One set of posts began quite a while ago with this post;

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library

where I experimented with writing my own comms library between two HoloLens devices on a local network with the initial network discovery being handled by Bluetooth and with no server or cloud involved.

That had limits though and I moved on to using the sharing service from the HoloToolkit-Unity culminating (so far) in this post;

Hitchhiking the HoloToolkit-Unity, Leg 13–Continuing with Shared Experiences

although I did recently go off on another journey to see if I could build a shared holographic experience on top of the AllJoyn protocol in this post;

Experiments with Shared Holographic Experiences and AllJoyn (Spoiler Alert- this one does not end well)

I should really have got this out of my system by now but I’m returning to it again in this post for another basic experiment.

That recent AllJoyn experiment had a few advantages including;

  • Performing automatic device discovery (i.e. letting AllJoyn handle the discovery)
  • Not requiring a cloud connection
  • Easy programming model (using the UWP tooling)

but the disadvantages came in that I ended up having to introduce some kind of ‘server’ app when I didn’t really intend to, plus there was pretty bad performance when it came to passing around what are often large world anchor buffers.

That left me wanting to try out a few other options. I spent a bit of time looking at Unity networking (or UNET) but didn’t progress it too far because I couldn’t get the discovery mechanisms (based on UDP multicasting) to work nicely for me across a single HoloLens device and the HoloLens emulator, and so I let that drop. Again, though, it looks to offer a server-less solution with a single device able to operate as both ‘client’ and ‘host’, and the programming model seemed pretty easy.

Photon Unity Networking

Putting that to one side for the moment, I turned my attention to “Photon Unity Networking” (or PUN) to see if I could make use of that to build out the basics of a shared holographic experience and this post is a write up of my first experiment there.

PUN seems to involve a server which can be run either locally or in the cloud, and Photon provide a hosted version of it. I figured that had to be the easiest starting point and so I went with that although, as you’ll see later, it brought with it a limitation that I could have avoided if I’d decided to host the server myself.

Getting started with cloud-hosted PUN is easy. I went for the free version of this cloud hosted model which seems to offer me up to 20 concurrent users and it was very easy to;

  1. Sign up for the service
  2. Use the portal to create my first application and get an ID that can be fed into the SDK
  3. Download the SDK pieces from the Unity asset store and bring them into a Unity project

and so from there I thought it would be fun to see if I could get some basic experiment with shared holograms up and running on PUN and that’s what the rest of this post is about.

The Code

The code that I’m referring to here is all hosted on Github and it’s very basic in that all it does (or tries to do) is let the user use 3 voice commands;

  • “create”
  • “open debug log”
  • “close debug log”

and the keyword “create” creates a cube which should be visible across all the devices that are running the app and in the same place in the same physical location.

That’s it 🙂 I haven’t yet added the ability to move or manipulate holograms, or to show users’ head positions as I’ve done in earlier posts. Perhaps I’ll return to that later.

But the code is hosted here;

Code on Github

and I’m going to refer to classes from it through the rest of the post.

It’s important to realise that the code is supplied without the Photon Application ID (you’d need to get your own) and without the storage access keys for my Azure storage account (you’d need to get your own).

The Blank Project

I think it’s fair to say that Photon has quite a lot of functionality that I’m not even going to attempt to make use of around lobbies and matchmaking – I really just wanted the simplest solution that I could make use of and so I started a new Unity project and added 4 sets of code to it straight off the bat as shown below;


Those pieces are;

  1. The HoloToolkit-Unity
  2. The Mixed Reality Design Labs
  3. The Photon Unity Networking Scripts
  4. A StorageServices library

I’ll return to the 4th one later in the post but I’m hoping that the other 3 are well understood and, if not, you can find references to them in many places on this blog site;

Posts about Mixed Reality

I made sure that my Unity project was set up for Holographic development using the HoloToolkit menu options to set up the basic scene settings, project settings;


and specifically that my app had the capability to access both the microphone (for voice commands) and spatial perception (for world anchoring).

From there, I created a scene with very little in it other than a single empty Root object along with the HoloLens prefab from the Mixed Reality Design Labs (highlighted orange below) which provides the basics of getting that library into my project;


and I’m now “ready to go” in the sense of trying to make use of PUN to get a hologram shared across devices. Here are the steps I undertook.

Configuring PUN

PUN makes it pretty easy to specify the details of your networking setup (including your application ID) in that there’s an option to use a configuration file which can be edited in the Unity editor, and so I went via that route.

I didn’t change too much of the setup here other than to add my application id, specify TCP (more on that later) and a region of EU and then specify that I didn’t want to auto-join a lobby or enable stats as I’m hoping to avoid lobbies.


Making a Server Connection

I needed to make a connection to the server and PUN makes that pretty simple.

There’s a model in PUN of deriving your class from PunBehaviour, which has a set of overrides that you can use to run code as/when certain networking events happen, like a server connection or a player joining the game. I wrapped up the tiny bit of code needed to make a server connection based on a configuration file into a simple component that I called PhotonConnector, which essentially takes the override-model of PUN and turns it into an event-based model that suited me better. Here’s that class;

The PhotonConnector Class

and so the idea here is that I just use the OnConnectedToMaster override to wait for a connection and then I fire an event (FirstConnection) that some other piece of my code can pick up.
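As a rough sketch of that shape (simplified, with illustrative member names rather than an exact copy of the class in the repo):

```csharp
using System;
using UnityEngine;

// Sketch of the PhotonConnector idea – turning PUN classic’s
// override-based callbacks into a plain C# event. Illustrative only.
public class PhotonConnector : Photon.PunBehaviour
{
    // Fired the first time a master server connection is made.
    public event EventHandler FirstConnection;

    void Start()
    {
        // Uses the PhotonServerSettings file configured in the editor.
        PhotonNetwork.ConnectUsingSettings("1.0");
    }

    public override void OnConnectedToMaster()
    {
        base.OnConnectedToMaster();

        if (this.FirstConnection != null)
        {
            this.FirstConnection(this, EventArgs.Empty);
        }
    }
}
```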

I dropped an instance of this component onto my Root object;


So, that’s hopefully my code connected to the PUN cloud server.

Making/Joining a Room

Like many multiplayer game libraries, PUN deals with the notion of a bounded set of users inside a “room” (joined from a “lobby”). I wanted to keep this as simple as possible for my experiment here, so I tried to bypass lobbies as much as possible and to avoid building UI for the user to select a room.

Instead, I just wanted to hard-wire my app such that it would attempt to join (or create, if necessary) a room with a given name, and so I wrote a simple component to do exactly that;

The PhotoRoomJoiner Class

and so this component looks for the PhotonConnector and waits for it to connect to the network before attempting to join/create a room on the server. Once done, like the PhotonConnector, it fires an event to signify that it has completed.
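In sketch form, the join-or-create piece can be as small as this (PUN classic’s JoinOrCreateRoom does the heavy lifting; the names here are illustrative rather than the exact class from the repo):

```csharp
using System;
using UnityEngine;

// Sketch of the PhotoRoomJoiner idea – join a named room, creating
// it if necessary, then raise an event.
public class PhotoRoomJoiner : Photon.PunBehaviour
{
    public string RoomName = "Default Room";

    public event EventHandler RoomJoined;

    // Called once the PhotonConnector reports a server connection.
    public void Join()
    {
        // Property name varies a little across PUN versions.
        var options = new RoomOptions()
        {
            CleanupCacheOnLeave = false
        };
        PhotonNetwork.JoinOrCreateRoom(this.RoomName, options, TypedLobby.Default);
    }

    public override void OnJoinedRoom()
    {
        base.OnJoinedRoom();

        if (this.RoomJoined != null)
        {
            this.RoomJoined(this, EventArgs.Empty);
        }
    }
}
```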

I dropped an instance of this component onto my Root object leaving the room name setting as “Default Room”;


and by this point I was starting to realise that I was lacking any way of visualising Debug.Log calls on my device and that was starting to be a limiting factor…

Visualising Debug Output

I’ve written a few ugly solutions to displaying debug output on the HoloLens and I wanted to avoid writing yet another one and so I finally woke up and realised that I could make use of the DebugLog prefab from the Mixed Reality Design Labs;


and I left its configuration entirely alone but now I can see all my Debug.Log output by simply saying “open debug log” inside my application, which is a “very useful thing indeed” given how little I paid for it! 🙂


One World Anchor Per App or Per Hologram?

In order to have holograms appear in a consistent position across devices, those devices are going to have to agree on a common coordinate system and that’s done by;

  • Creating an object at some position on one device
  • Applying a world anchor to that object to lock it in position in the real world
  • Obtaining (‘exporting’) the blob representing that world anchor
  • Sending the blob over the network to other devices
  • On those additional devices
    • Receiving the blob over the network
    • Creating the same type of object
    • Importing the world anchor blob onto the device
    • Applying (‘locking’) the newly created object with the imported world anchor blob so as to position it in the same position in the physical world as the original

It’s a multi-step process and, naturally, there are many things that can go wrong along the way.

One of the first decisions to make is whether to apply a world anchor to every hologram shared or perhaps to apply one world anchor across the whole scene and parent all holograms from it. The former is likely to give greater accuracy but the latter is a lot less expensive in terms of how many bytes need to be shipped around the network.

For this experiment, I decided to go with a halfway house. The guidance suggests that;

“A good rule of thumb is to ensure that anything you render based on a distant spatial anchor’s coordinate system is within about 3 meters of its origin”

and so I decided to go with that and to essentially create and share a new world anchor any time a hologram is created more than 3m from an existing world anchor.

In order to do that, I need to track where world anchors have been placed and I do that locally on the device.

Rather than use a hologram itself as a world anchor, I create an empty object as the world anchor and then any hologram within 3m of that anchor would be parented from that anchor.
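The decision itself is just a distance check over the known anchor positions – something like this illustrative sketch (the real tracking lives in the AnchorPositionList class):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the “do we need a new world anchor?” check.
public static class AnchorChooser
{
    const float MaxDistanceFromAnchor = 3.0f;

    // Returns the id of an existing anchor within 3m of the proposed
    // hologram position, or null if a new anchor should be created.
    public static string FindAnchorWithinRange(
        Dictionary<string, Vector3> anchorPositions, Vector3 hologramPosition)
    {
        foreach (var entry in anchorPositions)
        {
            if (Vector3.Distance(entry.Value, hologramPosition) <= MaxDistanceFromAnchor)
            {
                return entry.Key;
            }
        }
        return null;
    }
}
```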

Tracking World Anchor Positions

In order to keep track of the world anchors that a device has created or received from other devices, I have each device maintain a simple list of world anchors with a GUID-based naming scheme to ensure that I can refer to these world anchors across devices. It’s a fairly simple thing and it’s listed here;

The AnchorPositionList Class

Importing/Exporting World Anchors

The business of importing or exporting world anchors takes quite a few steps and I’ve previously written code which wraps this up into a (relatively) simple single method call where I can hand a GameObject over to a method which will;

  • For export
    • Add a WorldAnchor component to the GameObject
    • Wait for that WorldAnchor component to flag that it isLocated in the world
    • Export the data for that WorldAnchor using the WorldAnchorTransferBatch
    • Return the byte[] array exported
  • For import
    • Take a byte[] array and import it using the WorldAnchorTransferBatch
    • Apply the LockObject call to the GameObject
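Those steps can be sketched roughly as below – heavily simplified, UWP-player only, with no error handling or isLocated waiting, and note that the WSA namespaces moved from UnityEngine.VR.* to UnityEngine.XR.* in later Unity versions:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;

// Sketch of wrapping WorldAnchorTransferBatch’s callback model in
// async/await via TaskCompletionSource.
public static class AnchorTransfer
{
    public static Task<byte[]> ExportAsync(string anchorId, WorldAnchor anchor)
    {
        var completion = new TaskCompletionSource<byte[]>();
        var buffer = new List<byte>();
        var batch = new WorldAnchorTransferBatch();

        batch.AddWorldAnchor(anchorId, anchor);

        WorldAnchorTransferBatch.ExportAsync(
            batch,
            data => buffer.AddRange(data),
            reason => completion.SetResult(
                reason == SerializationCompletionReason.Succeeded ? buffer.ToArray() : null));

        return completion.Task;
    }

    public static Task<bool> ImportAsync(string anchorId, GameObject target, byte[] bits)
    {
        var completion = new TaskCompletionSource<bool>();

        WorldAnchorTransferBatch.ImportAsync(
            bits,
            (reason, batch) =>
            {
                var succeeded = reason == SerializationCompletionReason.Succeeded;

                if (succeeded)
                {
                    batch.LockObject(anchorId, target);
                }
                completion.SetResult(succeeded);
            });

        return completion.Task;
    }
}
```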

That code is all wrapped up in a class I called SpatialAnchorHelpers

The SpatialAnchorHelpers class

One thing I’d add about this class is that it is very much “UWP” specific; I made no attempt to make this code usable from the Unity editor and, to avoid getting tied up in lots of asynchronous callbacks, I just wrote code with async/await which Unity can’t make sense of but which, for me, makes for much more readable code.

This code also needs to “wait” for the isLocated flag on a WorldAnchor component to signal ‘true’ and so I needed an awaitable version of that, for which I used this pretty ugly class that I’ve used before;

The PredicateLoopWatcher class

I’m not too proud of that and it perhaps needs a rethink but it’s “kind of working” for me for now although if you look at it you’ll realise that there’s a strong chance that it might loop forever and so some kind of timeout might be a good idea!

Using async/await without a suitable SynchronizationContext can mean that code easily ends up on the wrong thread for interacting with Unity’s UI objects, so I added a Dispatcher component which I use to help with marshalling code back onto Unity’s UI thread;

The Dispatcher Class
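Conceptually, that component is just a thread-safe queue of actions drained on Unity’s update loop – something like this sketch (illustrative, not the exact class):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch of a dispatcher that marshals work from arbitrary threads
// back onto Unity’s UI thread.
public class Dispatcher : MonoBehaviour
{
    readonly Queue<Action> workItems = new Queue<Action>();

    // Safe to call from any thread.
    public void Invoke(Action action)
    {
        lock (this.workItems)
        {
            this.workItems.Enqueue(action);
        }
    }

    // Update runs on Unity’s UI thread.
    void Update()
    {
        lock (this.workItems)
        {
            while (this.workItems.Count > 0)
            {
                this.workItems.Dequeue()();
            }
        }
    }
}
```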

and so that’s part of the scripts I wrote here too and I just added an instance of it to my root script so that I’d be able to get hold of it;


Passing World Anchor Blobs Around the Network

For even the simplest, most basic solution like this one there comes a time when one device needs to ‘notify’ another device that either;

  • a new world anchor has been created
  • a new hologram has been created relative to an existing world anchor

and so there’s a need for some kind of ‘network notification’ which carries some data with it. The major decision, though, is how much data and, initially, what I was hoping to achieve here was for the notification to carry all of the data.

To put that into plainer English, I was hoping to use PUN’s RPC feature to enable me to send out an RPC from one device to another saying

“Hey, there’s a new world anchor called {GUID} and here’s the 1-10MB of data representing it”

Now, I must admit that I suspected that this would cause me problems (like it did when I tried it with AllJoyn) and it did 🙂

Firstly, the default protocol for PUN is UDP and, naturally, it’s not a great idea to try and send megabytes of data over UDP this way, so I switched the protocol for my app to TCP via the configuration screen that I screenshotted earlier.

Making an RPC method in PUN is simple: I just need to make sure that there’s a PhotonView component on my GameObject and then I can add a [PunRPC] attribute and make sure that the parameters can be serialized by PUN (or by my custom code if necessary).

Invoking the RPC method is also simple – you grab hold of the PhotonView component and use the RPC() method on it and there’s a target parameter on there which was really interesting to me.
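In PUN classic, that declare-and-invoke pattern looks something like this sketch (the method and parameter names are mine for illustration, not from any library):

```csharp
using UnityEngine;

// Sketch of declaring and invoking a PUN RPC. Needs a PhotonView
// component on the same GameObject.
public class RpcExample : Photon.PunBehaviour
{
    [PunRPC]
    void CubeCreatedRemotely(string anchorId, Vector3 relativePosition)
    {
        // Runs on whichever devices the RPC is targeted at.
    }

    public void SendCubeCreated(string anchorId, Vector3 relativePosition)
    {
        this.photonView.RPC(
            "CubeCreatedRemotely",
            PhotonTargets.OthersBuffered,    // buffered for late joiners
            anchorId,
            relativePosition);
    }
}
```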

In my scenario, I only really need two RPCs, something like;

  • NewWorldAnchorAdded( anchor identifier, anchor byte array )
  • NewHologramAdded( anchor identifier, hologram position relative to anchor )

Given that I was hoping to pass the entire world anchor blob over the RPC call, I didn’t want that mirrored back to the originating client by the server because that client already had that blob and so it would be wasteful.

Consequently, I used the PhotonTargets.OthersBuffered option to try and send the RPC to all the other devices in the room.

The other nice aspect around this option is the Buffered part in the sense that the server will keep the RPC details around and deliver it (and others) to new clients as they join the room.


It didn’t work for me though because, although PUN itself doesn’t place size limits on parameters to an RPC call, the cloud-hosted version of PUN does. The server bounced my RPCs straight back at me and, after a little online discussion, I was pointed to this article which flags that the server limit is 0.5MB for a parameter.

So, using RPCs for these large blobs wasn’t going to work much like it didn’t really work very nicely for me when I looked at doing something similar over AllJoyn.

What next? Use a blob store…

Putting Blobs in…a Blob Store!

I decided that I’d stick with the RPC mechanism for signalling the details of new world anchors and new holograms but I wouldn’t try and pass all of the bytes of the blob representing the world anchor across that boundary.

Instead, given that I’d already assumed a cloud connection to the PUN server I’d use the Azure cloud to store the blobs for my world anchors.

The next question is then how to best make use of Azure blob storage from Unity without having to hand-crank a bunch of code and set up HTTP headers etc. myself.

Fortunately, my colleague Dave has done some work around calling into Azure app services and blob storage from Unity and he has a blog post around it here;

Unity 3D and Azure Blob Storage

which points to a github repo over here;

Unity3DAzure on Github

and so I lifted this code into my project and wrote my own little BlobStorageHelper class around it so as to make it relatively easy to use in my scenario;

The AzureBlobStorageHelper class

There’s not a lot to it on top of what Dave already wrote – I just wrap it up for my use and add a little bit of code to download a blob directly from blob storage.
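Because the container allows public reads, the download half doesn’t even need authentication – it can be a plain HTTP GET along these lines (the account and container names here are placeholders, the authenticated upload stays with the StorageServices library, and UnityWebRequest’s API differs a little across Unity versions):

```csharp
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of downloading a world anchor blob from a public-read
// Azure storage container.
public class BlobDownloader : MonoBehaviour
{
    public string AccountName = "myaccount";          // placeholder
    public string ContainerName = "sharedholograms";

    public IEnumerator DownloadAnchorBlob(string anchorId, Action<byte[]> callback)
    {
        var uri = string.Format(
            "https://{0}.blob.core.windows.net/{1}/{2}",
            this.AccountName, this.ContainerName, anchorId);

        using (var request = UnityWebRequest.Get(uri))
        {
            yield return request.SendWebRequest();

            callback(request.isNetworkError ? null : request.downloadHandler.data);
        }
    }
}
```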

Naturally, to set this up I needed an Azure storage account (I already had one) and I just made a container within it (named ‘sharedholograms’) and made sure that it allowed public reads and authenticated writes and I copied out the access key such that the code would be able to make use of it.

I can then set up an instance of this component on my root game object;


so it’s available any time I want it from that script.

Back to RPCs

With my issue around what to do with large byte array parameters out of the way, I could return to my RPCs, whose final signatures ended up being as simple as;

  void WorldAnchorCreatedRemotely(string sessionId, string anchorId)
  void CubeCreatedRemotely(string sessionId, string anchorId, Vector3 relativePosition)


because the name of the blob on the blob store can be derived from the anchorId and so it’s enough just to distribute that id.

However, what’s this sessionId parameter? This goes back to the earlier idea that I would dispatch my RPC calls using the PhotonTargets.OthersBuffered flag to notify all devices apart from the current one that something had changed.

However, what I seemed to find was that if DeviceA created one world anchor and three holograms and then quit/rejoined the server, it didn’t seem to receive those 4 buffered RPCs from the server which would tell it to recreate those objects.

I’m unsure exactly how PUN makes the distinction of “others” but I decided that perhaps the best idea was to switch OthersBuffered to AllBuffered and then use my own mechanism to ignore RPCs which originated on the current device. Because I’m no longer sending large byte arrays over the network this didn’t feel like a particularly wasteful thing to do and so I stuck with it, but it could do with a little more investigation on my part.
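On the receiving side, that filtering is as simple as comparing the incoming sessionId against one generated locally at startup – roughly this sketch (names illustrative):

```csharp
using System;
using UnityEngine;

// Sketch of ignoring AllBuffered RPCs that originated on this device.
public class AnchorRpcHandler : Photon.PunBehaviour
{
    // Generated once per run of the app on this device.
    string localSessionId = Guid.NewGuid().ToString();

    [PunRPC]
    void WorldAnchorCreatedRemotely(string sessionId, string anchorId)
    {
        if (sessionId == this.localSessionId)
        {
            // This RPC originated here – the anchor already exists locally.
            return;
        }
        // ...otherwise, download the blob named anchorId and import it...
    }
}
```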

The other thing that I played with here was the way in which the room is originally created by my PhotoRoomJoiner component in that, initially, I wasn’t setting RoomOptions.CleanupCacheOnLeave, which I think means that the buffered RPCs left by a player would disappear when they left the room.

However, I still seemed to find that, even when I asked the room to keep RPCs around for a player that had left, the OthersBuffered option didn’t seem to deliver those RPCs back to that player when they connected again, hence my sticking with the AllBuffered option for the moment. Again, it needs more investigation.

Those big blob buffers though still cause me another problem…

Ordering of RPCs

I saw this one coming 🙂 Now that the upload/download of the blob representing a world anchor happens asynchronously through the cloud, outside the bounds of the RPCs being delivered by Photon, it’s fairly easy to see a sequence of events where an RPC to create a hologram relative to a world anchor arrives before that anchor has been downloaded to the device. It’s a race that’s pretty much certain to happen, especially if a device connects to a room with buffered RPCs containing a sequence of anchors and holograms.

Consequently, I simply keep a little lookaside list of the holograms that a client has been asked to create whose parent world anchor has not yet arrived. The assumption is that the world anchor will show up at some point in the future, at which point this list can be consulted for the pending holograms that then need to be created.

The AnchorCubeList Class
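In spirit, that class is a dictionary of “positions waiting on an anchor”, keyed by anchor id – along the lines of this simplified, illustrative sketch:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the lookaside list for holograms whose parent anchor
// hasn’t been imported yet.
public class PendingHolograms
{
    readonly Dictionary<string, List<Vector3>> pending =
        new Dictionary<string, List<Vector3>>();

    public void Add(string anchorId, Vector3 relativePosition)
    {
        List<Vector3> list;

        if (!this.pending.TryGetValue(anchorId, out list))
        {
            list = new List<Vector3>();
            this.pending[anchorId] = list;
        }
        list.Add(relativePosition);
    }

    // Called once the anchor arrives – returns the positions still to
    // be created and forgets them.
    public List<Vector3> Drain(string anchorId)
    {
        List<Vector3> list;

        if (this.pending.TryGetValue(anchorId, out list))
        {
            this.pending.Remove(anchorId);
            return list;
        }
        return new List<Vector3>();
    }
}
```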

Bringing it All Together

All of these components are ultimately brought together by a simple “co-ordinating” script on my (almost) empty GameObject named Root that has been in the scene all along;


The only component that I haven’t mentioned there is the KeywordManager from the HoloToolkit-Unity, which sends the voice keyword “create” through to a function on my Root script; that function kicks off the whole process of creating a world anchor (if necessary) before creating a hologram (cube) 3m along the user’s gaze vector.
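The placement part of that handler is simple – a point 3m along the gaze from the camera – as in this illustrative sketch (the class and method names are mine, not from the toolkit):

```csharp
using UnityEngine;

// Sketch of the “create” keyword handler’s placement logic.
public class CreateHandler : MonoBehaviour
{
    const float GazeDistance = 3.0f;

    // Wired up (conceptually) to the KeywordManager’s “create” keyword.
    public void OnCreate()
    {
        var head = Camera.main.transform;
        var position = head.position + (head.forward * GazeDistance);

        var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = position;
        cube.transform.localScale = Vector3.one * 0.2f;

        // ...then find or create the nearest world anchor, parent the
        // cube from it and send the RPCs...
    }
}
```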

That Root script is longer than I’d like it to be at the moment and I could tidy it up a little, but here it is in its entirety;

The Root Class

Testing and Carrying On…

I’ve left it to the end of the blog post to admit that I haven’t tested this much at the time of writing – it’s a bit of an experiment and so don’t expect too much from it 🙂

One of the reasons for that is that I’m currently working with one HoloLens and the emulator, and the importing/exporting of world anchors can be a bit of a challenge in that setup as it’s hard to know whether things are working correctly in the emulator; it’s much easier to test with multiple devices for that reason.

I’ll try that out in the coming days/weeks and will update the post or add to another post. I’d also like to add a little more into the code to make it possible to manipulate the holograms, show the user’s position as an avatar and so on as I’ve done in other posts around this topic so I’ll create a branch and keep working on that.

Beyond that, it might be “nice” to take away the dependency on PUN here and just build out a solution using nothing but standard pieces from Azure like service bus + blob storage as I don’t think that’d be a long way from what I’ve got here – that might be another avenue for a future post…

Exploring the Mixed Reality Design Labs–Experiment #2


Following up from this previous post;

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

I wanted to continue to explore some of the things present in the Mixed Reality Design Labs work. Since my last post, I’d revisited the github repo and found this doc page which I hadn’t read on my previous visit – it’s a great read as, without it, I’d felt a little like I was wandering without a map. I’m not quite sure how I missed it the first time around;

MRDL – Examples Write Up Including Interactable Objects, Object Collection, Progress, App Bar and Bounding Box

That’s definitely a good read and I’d also missed this document about including the MRDL as a submodule;


and yet another thing that I’d missed was that the MRDL inserts a custom menu into Unity;


which can be used to insert the HoloLens prefab I mentioned in the previous post (from the Interface menu) and to create the other areas of functionality listed there on the menu, including quite a few buttons, receivers and cursors.


The rest of this post is just the rough notes I wrote down while exploring one area of the MRDL. I chose to experiment with buttons as UIs often seem to end up with one type of button or another, and I figured that I would poke around in the code and start with the Button type;

Button on github

and that told me that there’s an abstract base class here which has (at least);

  • ButtonState (pressed, targeted, disabled, etc)
  • Whether the button requires the gaze to be on it or not
  • Events for when the state changes, when it is pressed, held, released, cancelled

along with a few private/implementation pieces. It all feels fairly ‘expected’ but there’s a relationship here with an InteractionManager;

InteractionManager on github

which looks to be a singleton handling things like tapping, manipulation, navigation events and somehow routing them (via Unity’s SendMessage) on via an AFocuser object.

AFocuser on github

This looks to be a perhaps more developed form of what’s present in the HoloToolkit-Unity done by types there like the GazeManager and so on and so it’s “interesting” that this framework looks to be reworking these particular wheels rather than picking up those bits from the HoloToolkit.

There would be quite a lot to explore here and I didn’t dig into all of it, that’ll have to be for another day. For today, I went back to exploring buttons and the types derived look to be;

  • KeyButton
  • AnimButton
  • SpriteButton
  • MeshButton
  • CompoundButton
  • AnimControllerButton
  • BoundingBoxHandle
  • ObjectButton

and I went back to reading the document on these and also had a good poke around the Interactable Object sample;


and I think I started to get a little bit of a grip of what was going on but I daresay I’ve got a bit more to learn here!

I tentatively added an empty parent object and a cube to my scene;


and then added the Compound Button script to my GameObject and it moaned at me (in a good way);


So I took away the box collider that comes by default with my cube and it said;


and so I added a box collider to the empty parent game object and the button became ‘happy’ 🙂


I then got a bit adventurous, having noticed the notion of ‘receivers’ which look to be a form of event relay and I added a sphere to my scene and set up a “Color Receiver” on my empty game object;


and, sure enough, when I click on my cube my sphere toggles red/white;


but, equally, I think I could just handle this event by writing code – e.g.

  private void Start()
  {
    var button = this.GetComponent<CompoundButton>();
    button.OnButtonPressed += this.OnPressed;
  }

and that seems to work just fine. I did then wonder whether I could create some hierarchy like this in my scene;


and then could I handle the button press by adding a script to the GrandParent object? I tried adding something like this;

using HUX.Interaction;

public class Startup : InteractibleObject
{
  private void Start() { }
  protected void FocusEnter() { }
  protected void FocusExit() { }
  protected void OnTapped(InteractionManager.InteractionEventArgs eventArgs) { }
}

but the debugger didn’t suggest that my OnTapped method was called. However, the FocusEnter and FocusExit calls do happen at this ‘grand parent’ level and this seems to be in line with the comments inside of the source code;

InteractibleObject on github

which says;

/// FocusEnter() & FocusExit() will bubble up through the hierarchy, starting from the Prime Focus collider.
/// All other messages will only be sent to the Prime Focus collider

and this notion of the ‘Prime Focus collider’ led me to go and take a look at the source for;

AFocuser on github

where the UpdateFocus method actually walks the hierarchy to build up the list of parent objects that will need to be notified of focus loss/gain while it updates its notion of the PrimeFocus and so (from a quick look) that all seems to tie up.

I think I could achieve what I wanted though by making my grand parent script an InteractionReceiver (as the sample does) and then I can pick up the button press that way – i.e.

public class Startup : InteractionReceiver
{
  private void Start() { }
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    base.OnTapped(obj, eventArgs);
  }
}

and if I marry this up with the right settings in the UI to tell that script which interactible objects I want it to receive from;


then that seems to work out fine.

Quite commonly in a Mixed Reality app, I’d like to use speech in addition to moving my gaze and air-tapping and so it looks like the MRDL makes that easy in that I can add;


although I found that when I did this I hit a snag in that the ColorReceiver that I’d previously added worked fine when invoked by an air-tap but didn’t work when invoked by the speech command ‘click’, and that seemed to come down to this runtime error;

Failed to call function OnTapped of class ColorReceiver
Calling function OnTapped with no parameters but the function requires 2.

so maybe that’s a bug or maybe I’m misunderstanding how it’s meant to work but, if I take the ColorReceiver away and handle the button’s OnButtonPressed event myself, I still see something similar – i.e. my code runs when I tap on the button but not when I say “click”; instead, I see the debug output saying;

Keyword handler called in GameObject for keyword click with confidence level High

and I saw the same thing if I went back to having my code be an InteractionReceiver, in that the air-tap seems to result in one call whereas the voice command “click” results in another, as below;

public class Startup : InteractionReceiver
{
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // This is called when I air-tap
    base.OnTapped(obj, eventArgs);
  }
  void OnTapped()
  {
    // This is called when I say ‘click’
  }
}

and, again, I’m unsure whether that’s my understanding or whether it’s not quite working right but I figured I’d move on, as I’d noticed that the “Compound Button Speech” script takes two keyword sources – one is the local override I’ve used above where I can simply set the text but the other looks for a Compound Button Text;


and so I added one of those in, chose the provided profile and fed it a 3DTextMesh and then I selected that I wanted to override the Offset property and just dragged my text mesh around a little in Unity to try and position it ‘sensibly’;


and that all seemed to work fine. It’d be great to have my button give audible cues when the user interacted with it and so I also added in a Compound Button Sounds script which then wants a ButtonSoundProfile and I played with creating my own versus using the one that ships in the library;


and that worked fine once I’d managed to figure out how to get the sounds to come out properly over the holographic remoting app from Unity.

At this point, I’d added quite a lot of scripts to my original cube and so I reset things and went and grabbed a 3D object from Remix3D, this firefighter;


and dropped it into my scene as a child of my GameObject;


and then added back the Compound Button script and a Box Collider and then went and added the Compound Button Mesh script and tried to set up some scale and colour changes based on the states within;


and that seemed to work out fine – i.e. when I pressed on the button, the fireman got scaled up and the mesh got rendered in red;


so, that’s all really useful.

I then threw away my scene again, went back to just having a cube and set up a couple of animations – one which rotated the cube by 45 degrees and another which put it back to 0 and I built an animator around those with the transitions triggered by a change in the Targeted boolean parameter;


and then dragged an Animator and a Compound Button Anim component onto my GameObject;


and that seemed to give me the basics of having my cube animate into rotation when I focus on it and animate back when I take the focus away – a very useful tool to have in the toolbox. I noticed that Object Button seems to do something similar, except that it looks to model the various states via a set of different prefabs – i.e.


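My mental model of what the Compound Button Anim wiring is doing with that Targeted parameter is sketched below – the component just needs to flip an Animator boolean as focus comes and goes, and the animator controller’s transitions do the rest. Everything named here is my own illustration rather than the MRDL’s actual code;

```csharp
using UnityEngine;

// Sketch only: flip the 'Targeted' boolean on the Animator as focus changes,
// so the animator controller's transitions rotate the cube to 45° and back.
public class FocusAnimationSketch : MonoBehaviour
{
    // Name of the boolean parameter defined on the Animator Controller.
    [SerializeField] string targetedParameter = "Targeted";

    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // In a real project these would be invoked by the gaze/focus system.
    public void OnFocusEnter()
    {
        animator.SetBool(targetedParameter, true);  // animate into rotation
    }

    public void OnFocusExit()
    {
        animator.SetBool(targetedParameter, false); // animate back to 0
    }
}
```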
The last of these Compound Button X types that I wanted to get my head around for this post was the Compound Button Icon type. This feels a little like the Text variant in that I can create an empty GameObject and then make it into a Compound Button (+ Icon) as below;


and this seems to be driven by a ButtonIconProfile, which can be either font based or texture based, so I set up one that was font based;


and then there’s a need for something to render the icon. I found it “interesting” to add a Cube as a child of my button object and then toggle the dropdown here to select that Cube as the rendering object – the component made a few changes on my behalf!

Here’s the before picture of my cube;


and this is what happens to it when I choose it as the renderer for my icon;


so – the Mesh Filter has gone and the material/shader has been changed for me, and I can then go back to the Compound Button Icon component and choose the icon;


Very cool.
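From the before/after pictures, my guess at the sort of thing the Compound Button Icon component did to my cube when I chose it as the renderer is sketched below – the method is hypothetical and only reflects my reading of the two changes I observed, not the component’s actual code;

```csharp
using UnityEngine;

// Sketch only: reproduce the two changes I saw when choosing the cube as the
// icon renderer – the MeshFilter is removed and the material is swapped.
public static class IconRendererSketch
{
    public static void PrepareAsIconRenderer(GameObject target, Material iconMaterial)
    {
        // The MeshFilter disappeared from my cube...
        var filter = target.GetComponent<MeshFilter>();
        if (filter != null)
        {
            Object.DestroyImmediate(filter);
        }

        // ...and the renderer's material/shader was changed for me.
        var renderer = target.GetComponent<MeshRenderer>();
        if (renderer != null)
        {
            renderer.sharedMaterial = iconMaterial;
        }
    }
}
```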

Wrapping Up

Having done a bit of exploring, I can now start to get some idea of what the tooling is doing when I use an option like;


and create myself a “Rectangle Button” which gives me all the glory of;


and so I’ve got compound, mesh, sounds, icon, text and speech all in one go, ready to be used, and it takes me only a second or two to get buttons created;


and there’s a lot of flexibility in there.

As I said at the start of the post, I’m just experimenting here and may well be getting things wrong, so feel free to let me know and I’ll carry on my exploring in later posts…

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

On a recent project, I was trying to avoid having to write a simplified version of the control that the Holograms app offers on selected holograms for moving, rotating and scaling them. This control here;


It’s essentially doing a job analogous to the resize handles that you get on a window in the 2D world, and I’d imagine that it took quite a bit of work to write, debug and get right.

I didn’t have time to reproduce it in full fidelity, so I really wanted to just re-use it, because I knew that writing even a limited version would take me a few hours and, even then, it wouldn’t be as good as the original.

I didn’t find such a control and so, between us, Pete and I wrote one which ended up looking a little like this;


and it was pretty functional, but it wasn’t as good as the one in the Holograms app and it took a bunch of time to write and debug.

However, just after it was completed, someone inside the company pointed me to a place where I could have taken a better control straight off the shelf, and that’s what I wanted to write about in this post.

The Mixed Reality Design Labs

This came completely out of the blue to me in that I hadn’t seen the Mixed Reality Design Labs before – here’s a quick screenshot from their home page on GitHub;


I’m going to shorten “Mixed Reality Design Labs” to MRDL (rhymes with TURTLE) in this post because, otherwise, it gets too long.

Straight away, I could see that the home page lists a number of very interesting-sounding controls, including these two – the Holobar and the Bounding Box;


and my first thought was whether those controls just draw the visuals or whether they also implement the behaviour of scale/rotate/translate but, either way, they’d be saving me a tonne of work.

I noticed also that this repo is linked from the mixed reality development documentation over here;


but it doesn’t look like there’s a simple 1:1 mapping between the MRDL and this documentation page because (e.g.) the guidance here around ‘Text in Unity’ actually points into pieces from the HoloToolkit-Unity rather than controls within the MRDL.

By contrast, I think the other four sections listed here do point into the MRDL, so it’s nearly a 1:1.

Including the MRDL

I wanted to try some of these things out, so I spun up a blank Unity project, added the HoloToolkit-Unity and set it up for mixed reality development (i.e. set the project, scene and capability settings using the toolkit) before going to look at how I could start to make use of these pieces.

As there didn’t seem to be any Unity packages or similar in the repo, I figured it was just a matter of copying the MRDesignLabs_Unity folder into my project, and I went with that. Once Unity had seen those files, I got a nice, helpful dialog around the HoloLens MDL2 Symbols font;


and so I went with downloading and installing that, but then I realised that I was perhaps meant to import that .ttf file as an asset into Unity here and then (using the helpful button on the dialog) assign it as the font for buttons, as per below;


That was easy enough, but I then fell down a bit of a black hole as there seems to be quite a lot in this project;


and I find this quite regularly with Unity – when someone delivers you a bunch of bits, how do you figure out which pieces they intended you to use directly and which are just supporting infrastructure for what they’ve built? There seems to be a lack of [private/public/internal]-style visibility that would guide the user as to what’s going on.

Rather than get bewildered, though, I figured I’d go back to trying to solve my original problem – selecting, rotating, scaling and translating a 3D object…

Manipulations, Bounding Boxes, App Bars

I must admit, I got stuck pretty quickly.

Like the HoloToolkit, there are a lot of pieces here and it’s not easy to figure out how they all fit together, so I also opened up the Examples project and had a look at the scene called ManipulationGizmo_Examples, which gave me more than enough clues to feel confident playing around in my own blank scene.

I started off by dragging out the HoloLens prefab which brings with it a tonne of pieces;


I did have a little dig around into where these pieces come from and what they might do, but I’m not going to attempt to write that up here. What I think I noticed is that these pieces look to be largely independent, rather than taking dependencies on the HoloToolkit-Unity, but I may be wrong there as I haven’t spent a lot of time on it just yet.

Where I did focus in was on the Manipulation Manager, which looked like the piece that I was going to need in order to get my scale/rotate/translate functionality working. It’s a pretty simple script which instantiates the ‘Bounding Box Prefab’ and the ‘App Bar Prefab’ and keeps them around as singletons for later use, which seems to imply that only one object at a time can have the bounding box wrapped around it (i.e. a single-selection model) – and that seems reasonable to me. I also noticed that the script takes a dependency on the Singleton&lt;T&gt; class from the HoloToolkit-Unity, which starts to go against my earlier thoughts around dependencies;


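My reading of that script boils down to something like the sketch below – instantiate the two prefabs once and keep them around for whichever object gets selected. The names here are my own paraphrase, not the MRDL’s actual code;

```csharp
using UnityEngine;

// Sketch only: a manager that instantiates its two prefabs once and exposes
// them as singletons, implying a single-selection model – one bounding box
// and one app bar shared across the whole scene.
public class ManipulationManagerSketch : MonoBehaviour
{
    [SerializeField] GameObject boundingBoxPrefab;
    [SerializeField] GameObject appBarPrefab;

    public static ManipulationManagerSketch Instance { get; private set; }

    public GameObject BoundingBox { get; private set; }
    public GameObject AppBar { get; private set; }

    void Awake()
    {
        Instance = this;

        // One instance of each, re-used for whichever object is selected.
        BoundingBox = Instantiate(boundingBoxPrefab);
        AppBar = Instantiate(appBarPrefab);
    }
}
```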
To try this out, I added a quick cube into my scene;


and (from the example scene) I’d figured that I needed to add a Bounding Box Target component to my cube in order to apply the bounding box behaviour;


I like all the options here but I left them at their default values for now. I ran the code on my HoloLens using Unity’s Holographic Remoting feature and, sure enough, I can tap on the cube to select it, get the bounding box and then use manipulations to translate, rotate and scale it, matching the functionality that I asked for in the editor.
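Incidentally, the same setup could presumably be done from code rather than the inspector. In this sketch, BoundingBoxTarget is the component I added in the editor above, while the HUX.Interaction namespace and the idea of ensuring a collider alongside it are assumptions on my part;

```csharp
using UnityEngine;

// Sketch only: attach the bounding box behaviour from code instead of the
// inspector. The namespace below is my assumption about where the MRDL
// keeps BoundingBoxTarget.
public class MakeManipulable : MonoBehaviour
{
    void Start()
    {
        // The component needs something to hit-test against, so make sure
        // there's a collider (I added a Box Collider in the editor above).
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
        }

        gameObject.AddComponent<HUX.Interaction.BoundingBoxTarget>();
    }
}
```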

This screenshot comes from Unity displaying what’s happening remotely on the HoloLens, and you can see the AppBar with its Done/Remove buttons and the grab handles which facilitate scale/rotate/translate. What’s not visible is that the control has animations and highlights which give it a quality, production feel rather than a “Mike just hacked this together in a hurry” feel;


I’m going to try to find some time to explore more of the MRDL and I’ll write some supplementary posts as/when those work out but, if you’re building for Windows Mixed Reality, I think you should have an eye on this project as it has the potential to save you a tonne of work.