Rough Notes on Porting “glTF Viewer” from Mixed Reality Toolkit (MRTK) V1 to MRTK V2 (RC2.1)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens or Azure Mixed Reality other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Around 6 months ago, I wrote a simple application for HoloLens 1 and published it to the Windows Store.

It’s called “glTF Viewer” and it provides a way to view models stored in glTF format on the HoloLens with basic move, rotate, scale manipulations. It also provides a way via which one user can put such a model onto their HoloLens, open it up and then share it automatically to other users on the same local network such that they will also be able to see the same model and the manipulations performed on it. This includes downloading the files for the model from the originating device and caching them onto the requesting device.

You can find the application in the store here;

glTF Viewer in the Microsoft Store

and you can find the original blogpost that I wrote about the process of writing this application here;

A Simple glTF Viewer for HoloLens

and you can find the source code for the application over here;

glTF Viewer on GitHub

I’d like to keep this application up to date and so with the arrival of MRTK V2 (release candidates) I thought that it would be a good idea to port the application over to MRTK V2 such that the application was “more modern” and better suited to work on HoloLens 2 when the device becomes available.

In doing that work, I thought it might be helpful to document the steps that I have taken to port this application and that’s what this blog post is all about – it’s a set of ‘rough notes’ made as I go through the process of moving the code from V1 to V2.

Before beginning, though, I want to be honest about the way in which I have gone about this port. What I actually did was;

  1. Begin the port thinking that I would write it up as I went along.
  2. Get bogged down in some technical details.
  3. Complete the port.
  4. Realise that I had not written anything much down.

So it was a bit of a failure in terms of writing anything down.

Consequently, I thought I would revisit the process and repeat the port from scratch but, this time, actually write it down as I went along.

That’s what the rest of this post is for – the step-by-step process of going from MRTK V1 to MRTK V2 on this one application having done the process once already.

Before I get started though, I’d like to point out some links.

Some Links…

There are a number of links that relate to activities and reading that you can do if you’re thinking of getting started with a mixed reality application for HoloLens 2 and/or thinking of porting an existing application across from HoloLens 1. The main sites that I find myself using are;

Armed with those docs, it’s time to get started porting my glTF Viewer to MRTK V2.

Making a New Branch, Getting Versions Right

I cloned my existing repo using a recursive clone and made sure that it would still build.

There are quite a few steps necessary to build this project right now, described in the repo’s readme.

Specifically, the repo contains a sub-module which uses UnityGLTF from the Khronos Group. There’s nothing too unusual about that except that the original MRTK also included some pieces around GLTF which clashed with UnityGLTF and so I had to write some scripts so as to set a few things up and remove one or two toolkit files in order to get things to build.

I described this process in the original blog post under the section entitled ‘A Small Challenge with UnityGLTF’.

One of the expected benefits of porting to MRTK V2 with its built-in support for GLTF is to be able to get rid of the sub-module and the scripts needed to hack the build process and end up with a much cleaner project all round.

I made a new branch for my work named V2WorkBlogPost as I already had the V2Work branch where I first tried to make a port and from which I intend to merge back into master at some later point.

With that branch in play, I made sure that I had the right prerequisites for what I was about to do, taking them from the ‘Getting Started’ page here;

  • Visual Studio 2017.
    • I have this although I’m actually working in 2019 at this point.
  • Unity 2018.4.x.
    • I have 2018.4.3f1 – I have a particular interest in this version because it is supposed to fix a (UWP platform) issue which I raised here where the UWP implementations of System.IO.File APIs got reworked in Windows SDK 16299 which broke existing code which used those file APIs. You can see more on that in the original blog post under the title “Challenge 3 – File APIs Change with .NET Standard 2.0 on UWP”. It’s nice that Unity has taken the effort to try and fix this so I’ll be super keen to try it out.
  • Latest MRTK release.
    • I took the V2.0.0 RC2.1 release and I only took the Foundation package rather than the examples as I do not want the examples in my project here. Naturally, I have the examples in another place so that I can try things out.
  • Windows SDK 18362+.
    • I have 18362 as the latest installed SDK on this machine.

It is worth noting at this point a couple of additional things about my glTF Viewer application as it is prior to this port;

  • It has already been built in a Unity 2018.* version. It was last built with 2018.3.2f1.
  • It is already building on the IL2CPP back-end.

Why is my application already building for IL2CPP?

Generally, I would much prefer to work on the .NET back-end but it has to be acknowledged that IL2CPP is inevitable given that Unity 2019 versions no longer have .NET back-end support. There is, though, a bigger reason for my use of IL2CPP. My application uses classes from .NET Standard 2.0 (specifically HttpListener) and, due to the deprecation of the .NET back-end, Unity did not add support for .NET Standard 2.0 to the .NET back-end. So, if I want to use HttpListener then I have to use IL2CPP. I wrote about this in gory detail at the time that I wrote the application so please refer back to the original blog post (in the section entitled ‘Challenge Number 1 – Picking up .NET Standard 2.0’) if you want the blow-by-blow.
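For context, the reason HttpListener matters is that the app uses it to serve model files over HTTP to other devices in a shared session. A minimal sketch of that idea, with a hypothetical port and no error handling (this is not the app’s actual code), would be;

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;

class ModelFileServer
{
    public async Task RunAsync(string baseDirectory, int port = 8088)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add($"http://+:{port}/");
        listener.Start();

        while (true)
        {
            var context = await listener.GetContextAsync();

            // Map the request path to a file under the base directory.
            var relativePath = context.Request.Url.AbsolutePath.TrimStart('/');
            var fullPath = Path.Combine(baseDirectory, relativePath);

            if (File.Exists(fullPath))
            {
                var bytes = File.ReadAllBytes(fullPath);
                context.Response.ContentLength64 = bytes.Length;
                await context.Response.OutputStream.WriteAsync(bytes, 0, bytes.Length);
            }
            else
            {
                context.Response.StatusCode = 404;
            }
            context.Response.Close();
        }
    }
}
```

Note that this is exactly the kind of code which needs .NET Standard 2.0 support from the scripting back-end.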

So, armed with the right software and an application that already builds in Unity 2018 on the IL2CPP back-end, I’m ready to make some changes.

Opening the Project in Unity

I opened up my project in the 2018.4.3f1 version of Unity and allowed it to upgrade it from 2018.3.2f1.

I didn’t expect to see problems in this upgrade but it did seem to get stuck on this particular error;


which says;

“Project has invalid dependencies:
    com.unity.xr.windowsmr.metro: Package [com.unity.xr.windowsmr.metro@1.0.10] cannot be found”

so my best thought was to use the Package Manager which offered to upgrade this to Version 1.0.12


and that seemed to do the trick. I had a look at my build settings as well and switched the platform over to UWP;


A quick note on the debugging settings here. For IL2CPP, you can either choose to debug the C# code or the generated C++ code and Unity has all the details over here.

UWP: Debugging on IL2CPP Scripting Backend

Take extra care to ensure that you have the right capabilities set in your project for this to work as mentioned in the first paragraph of that page.

Because of this, I generally build Release code from Visual Studio and attempt to use the Unity C# debugging first. If that doesn’t help me out, I tend to debug the generated C++ code using the native debugger in Visual Studio and, sometimes, I rebuild from Visual Studio in Debug configuration to help with that debugging on native code.

I’d note also that I do toggle “Scripts Only Build” when I think it is appropriate in order to try and speed up build times but it’s “risky” as it’s easy to leave it on when you should have turned it off, so beware on that one.

With that done, Unity was opening my project in version 2018.4.3f1 and it would build a Visual Studio project for me and so I committed those changes and moved on.

The commit is here.

A Word on Scenes

An important thing to note about the glTF Viewer application is that it’s really quite simple. There’s a bit of code in there for messaging and so on but there’s not much to it and, as such, it’s built as a single scene in Unity as you can see below;


If you have a multi-scene application then you’re going to need to take some steps to work with the MRTK V2 across those multiple scenes to ensure that;

  1. The MRTK doesn’t get unloaded when scenes change
  2. More than one MRTK doesn’t get loaded when scenes change

I’ve seen a few apps where this can be a struggle and there’s an issue raised on the MRTK V2 around this over here with a long discussion attached. I think that discussion leads to the approach of having a “base” scene with the MRTK embedded into it and then loading/unloading scenes with the “additive” flag set, but you might want to check out the whole discussion if this is an area of interest for you as it doesn’t impact my app here.
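The “base scene plus additive content scenes” approach can be sketched roughly as below – the scene names here are hypothetical and this is not code from my app;

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneSwitcher : MonoBehaviour
{
    // The MRTK lives in the (always loaded) base scene; content scenes
    // are loaded/unloaded additively around it so that the toolkit is
    // never unloaded and never duplicated.
    public void SwitchContentScene(string oldSceneName, string newSceneName)
    {
        if (!string.IsNullOrEmpty(oldSceneName))
        {
            SceneManager.UnloadSceneAsync(oldSceneName);
        }
        SceneManager.LoadSceneAsync(newSceneName, LoadSceneMode.Additive);
    }
}
```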

Adding the New Toolkit

This is much easier than the previous 2 steps in that I just imported the Unity package that represents MRTK V2 RC2.1.

I hit one error;

“Assembly has reference to non-existent assembly ‘Unity.TextMeshPro’ (Assets/MixedRealityToolkit.SDK/MixedRealityToolkit.SDK.asmdef)”

but that was easily fixed by going back into the Package Manager and installing the Text Mesh Pro package into my project. I then, once again, ensured that the project would build in Unity. It did build but it spat out this list of “errors” that I have seen many times working on these pieces so I thought I would include a screenshot here;


These errors all relate to the “Reference Rewriter” and all seem to involve System.Numerics. I have seen these flagged as errors by Unity in many projects recently and yet the build is still flagged as Succeeded and seems to deploy and work fine on a device.

Consequently, I ignore them, although the last error listed there about a failure to copy from the Temp folder to the Library folder is an actual problem that I have with Unity at the moment and I have to fix that one by restarting the editor and the hub until it goes away.

When it did go away, I then hit this error;

“Scripted importers UnityGLTF.GLTFImporter and Microsoft.MixedReality.Toolkit.Utilities.Gltf.Serialization.Editor.GlbAssetImporter are targeting the glb extension, rejecting both.”

I can fully understand why Unity is complaining here because I do have two versions of UnityGLTF in the project right now, so I’m not surprised that Unity is a bit puzzled. I’m hoping to address this shortly and Unity seems to be tolerating the situation for now. So, with those caveats, I do now have a project that contains both the old MRTK V1 and the new MRTK V2 as below;


The big question for me at this point is whether to take a dependency on the MRTK V2 as a Git sub-module or whether to just include the code from the MRTK V2 in my Unity project.

I much prefer to take a dependency on it as a sub-module but I figure that while it is not yet finished I will have the code in my project and then I can do the sub-module step at a later point. Consequently, I had quite a lot of folders to add to my Git repo and it leaves my repo in a slightly odd state because the MRTK V1 is in there as a sub-module and the MRTK V2 is in there as code but I’m about to remove MRTK V1 anyway so it won’t be in this hybrid state for too much longer.

The commit is here.

Removing the MRTK V1 – Surgical Removal or the Doomsday Option?

I now have a project with both the MRTK V1 and the MRTK V2 within it but how do I go about removing the V1 and replacing it with the V2?

So far when I’ve worked on applications that are doing this it feels to me like there are 2 possibilities;

  1. The “Doomsday” option – i.e. delete the MRTK V1 and see what breaks.
  2. The “Surgical” option – i.e. make an inventory of what’s being used from the MRTK V1 and consider what replacement is needed.

For the blog post, I’m going to go with option 2 but I’ve seen developers try both approaches and I’m not convinced that one is any better than the other.

In my particular application, I did a survey of my scene to try and figure out what is being used from the toolkit.

Firstly, I had some objects in my scene which I think I used in their default configuration;

  • Cursor object
  • InputManager object
  • MixedRealityCameraParent object

I’m expecting all of these to be replaced by the MRTK V2 camera system and input system without too much effort on my part.

I also noticed that I had a ProgressIndicator. At the time of writing, I’m asking for this to be brought across into the MRTK V2 but it’s not there as far as I know and so my expectation here is to simply keep these pieces from the MRTK V1 in my application for now and continue to use the progress indicator as it is.

Having taken a look at my scene, I wanted to see where I was using the MRTK V1 from my own code. My first thought was to attempt to use the “Code Map” feature of Visual Studio but I don’t think there’s enough “differentiation” between my code and the code in the toolkit to be able to make sense of what’s going on.

Abandoning that idea, I looked at the entire set of my scripts that existed in the scripting project;


There are around 30 or so scripts there; it’s not huge, and so I opened them all up in the editor, searched for HoloToolkit in all of them and came up with a list of 8 files;


I then opened those files and did a strategic search to try and find types from the HoloToolkit and I found;

  • A use of the interface IFocusable in FocusWatcher.cs, a class which was trying to keep track of which (if any) object has focus.
  • A use of the ObjectCursor in a class CursorManager.cs which tried to make the cursor active/inactive at suitable times, usually while something was asynchronously loading.
  • The ModelUpdatesManager class which adds the type TwoHandManipulatable to a GameObject such that it can be moved, rotated, scaled and this class needs a BoundingBox prefab in order to operate.
  • A use of the ProgressIndicator type which I use in order to show/hide progress when a long running operation is going on.

Additionally, I know that I am also using UnityGLTF from the Khronos repo in order to load GLTF models from files whether they be JSON/binary and whether they be an object packaged into a single file or into multiple files which all need loading.

The application also makes use of voice commands but I know that in the MRTK V1 I had to avoid the speech support as it caused me some issues. See back to the original blog post under the section entitled “Challenge 7” for the blow-by-blow on the problems I had using speech.

While it’s probably not a perfect list, this then gives me some things to think about – note that I am mostly building this list by looking back at the porting guide and finding equivalents for the functionality that I have used;

  1. Input – Replace the Cursor, InputManager, MixedRealityCameraParent in the scene with the new MRTK systems.
  2. Speech – Look into whether speech support in MRTK V2 works better in my scenario than it did in MRTK V1.
  3. GLTF – Replace the Unity GLTF use from the Khronos repo with the new pieces built into MRTK V2.
  4. Focus – Replace the use of IFocusable with the use of IMixedRealityFocusHandler.
  5. Cursor – Come up with a new means for showing/hiding the cursor across the various pointers that are used by the MRTK V2.
  6. Manipulations – Replace the TwoHandManipulatable script with use of the new ManipulationHandler, NearInteractionGrabbable and BoundingBox scripts with suitable options set on them.
  7. Rework – Look into which pieces of the application could benefit from being reworked, re-architected based on the new service-based approach in MRTK V2.

That’s a little backlog to work on and I’ll work through them in the following sub-sections.

Input

Firstly, I removed the InputManager, Cursor and MixedRealityCameraParent from my scene and then used the Mixed Reality Toolkit –> Add to Scene and Configure menu to add the MRTK V2 into the scene. At this point, the “Mixed Reality Toolkit” menu is a little confusing as both the MRTK V1 and V2 are contributing to it but, for now, I can live with that.

I chose the DefaultHoloLens2ConfigurationProfile for my toolkit profile as below;


A word about “profiles”. I think it’s great that a lot of behaviour is moving into “profiles” or what an old-fashioned person like me might call “configuration by means of a serialized object” Smile

The implication of this though is that if you were to lose these profiles then your application would break. I’ve seen these profiles be lost more than once by someone who allowed them to be stored in the MRTK folders themselves (by default the MixedRealityToolkit.Generated folder) and then deleted one version of the MRTK in order to add another, losing the MixedRealityToolkit.Generated folder in the process.

Additionally, imagine that in one of today’s Default profiles a setting is “off”. What’s to say that a future profile won’t replace it with a value of “on” and change your application behaviour?

Maybe I’m just paranoid, but my way of managing these profiles is to create a “Profiles” folder of my own and then duplicate every single profile that is in use into that folder and give it a name that lines up with my app. That way, I know exactly where my profiles are coming from and I don’t run the risk of deleting them by mistake or having them overwritten by a newer toolkit.

While doing this, I noticed that the DefaultMixedRealityToolkitConfigurationProfile allows for “copy and customize”;


whereas the DefaultHoloLens2ConfigurationProfile doesn’t seem to;


but I might be missing how this is supposed to work. Regardless, I started with the DefaultMixedRealityToolkitConfigurationProfile and I cloned it to make a copy in Profiles\GLTFViewerToolkitConfigurationProfile.

I then went through that profile and;

  • Changed the Target Scale to be World.
  • Changed the Camera profile to be the DefaultHoloLens2CameraProfile before cloning that to make Profiles\GLTFViewerCameraProfile
  • Changed the Input profile to be the DefaultHoloLens2InputSystemProfile before cloning that to make Profiles\GLTFViewerInputSystemProfile
    • In doing this, I cloned all of the 8 sub-sections for Input Actions, Input Action Rules, Pointer, Gestures, Speech Commands, Controller Mapping, Controller Visualization, Hand Tracking
  • I switched off the Boundary system, leaving it configured with its default profile
  • I switched off the Teleport system, leaving it configured with its default profile
  • I switched off the Spatial Awareness system, leaving it with its default profile and removing the spatial observer (just in case!)
  • I cloned the DefaultMixedRealityDiagnosticsProfile to make my own and left it as it was.
  • I cloned the Extensions profile to make my own and left it as it was.
  • I left the editor section as it was.

With that in place, I then have all these profiles in my own folder and they feel like they are under my control.


At this point, I thought I’d risk pressing “Play” in the editor and I was surprised that I didn’t hear the welcome message that I had built into the app but, instead, spotted a “not implemented exception”.

Speech and Audio, Editor and UWP

I dug into this exception and realised that I had written a class AudioManager which decides whether to play voice clips or not, and that class had been built to work only on UWP devices, not in the editor – i.e. it was making use of ApplicationData.Current.LocalSettings. I quickly tried to rewire that to use PlayerPrefs instead so that it could work both in the editor and on a device.
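As a sketch of that rewiring (the class and key names here are hypothetical, not my actual AudioManager code), a setting stored via PlayerPrefs works in both the editor and on device;

```csharp
using UnityEngine;

// Sketch only: a boolean "audio enabled" setting stored via PlayerPrefs
// rather than the UWP-only ApplicationData.Current.LocalSettings.
public static class AudioPreferences
{
    const string AudioEnabledKey = "audioEnabled";

    public static bool AudioEnabled
    {
        get => PlayerPrefs.GetInt(AudioEnabledKey, 1) != 0;
        set
        {
            PlayerPrefs.SetInt(AudioEnabledKey, value ? 1 : 0);
            PlayerPrefs.Save();
        }
    }
}
```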

With that done, I got my audible welcome message on pressing play, I could see the framerate counter from the MRTK V2 and I seemed to be able to move around in the editor.

I couldn’t open any files though because I’d also written some more code which was editor specific.

My application uses voice commands but I had a major challenge with voice commands on the MRTK V1 in that they stopped working whenever the application lost/regained focus.

Worst of all, this included when the application lost focus to the file dialog, so a user of the application was able to use the voice command “Open” to raise the file dialog, thereby breaking the voice commands before their model file had even been chosen.

I wrote about this in the original blog post under the section “Challenge 7”. The upshot is that I removed anything related to MRTK V1 speech or Unity speech from my application and I fell back to purely using SpeechRecognizer from the UWP for my application and that worked out fine but, of course, not in the Unity editor.

I only have 3 speech commands – open, reset, remove – and so what I would ideally like to do is to work in the way of MRTK V2 by defining new input actions for these commands, along with a command to toggle the profiler display, as below in my input actions profile;


and then I could define some speech commands in my speech settings profile;


and then in my class which handles speech commands, I could add a property to map the MixedRealityInputAction (open etc.) to a handler using my own internal class ActionHandler because I don’t think Unity can serialize dictionaries for me;
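As a sketch of what I mean – all of the names here are my own, hypothetical ones – the serializable mapping might look something like;

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;
using UnityEngine.Events;

// Unity can't serialize a Dictionary<MixedRealityInputAction, UnityEvent>
// for the inspector, so use a serializable pairing class and an array.
[System.Serializable]
public class ActionHandler
{
    public MixedRealityInputAction Action;
    public UnityEvent Handler;
}

public class SpeechCommandRouter : MonoBehaviour
{
    [SerializeField]
    ActionHandler[] actionHandlers;

    // Look up and fire the handler (if any) for a given action.
    public void Invoke(MixedRealityInputAction action)
    {
        foreach (var entry in this.actionHandlers)
        {
            if (entry.Action == action)
            {
                entry.Handler?.Invoke();
                break;
            }
        }
    }
}
```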


and then configure them to their respective values in the editor…


and then I should be able to implement IMixedRealityInputActionHandler to invoke the actions here (rather than directly tie myself to those actions coming from only voice commands);
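A minimal sketch of that handler implementation (the DispatchAction method here is a hypothetical stand-in for whatever maps an action to its handler) might be;

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class InputActionListener : MonoBehaviour, IMixedRealityInputActionHandler
{
    public void OnActionStarted(BaseInputEventData eventData)
    {
        // React when the action starts, regardless of whether it came from
        // a voice command, a controller button or anything else.
        this.DispatchAction(eventData.MixedRealityInputAction);
    }
    public void OnActionEnded(BaseInputEventData eventData)
    {
    }
    void DispatchAction(MixedRealityInputAction action)
    {
        // Hypothetical: look up and invoke whatever handler has been
        // configured for this action.
    }
}
```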


In doing so, I think I also need to register my GameObject as a “global” handler for these commands and so I need to add a call to do;
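I won’t swear to the exact RC2.1 API surface, but as a sketch the global registration can be done against the input system from a component’s enable/disable methods, something like;

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;

public class GlobalInputRegistrar : MonoBehaviour
{
    void OnEnable()
    {
        // Register this GameObject as a "global" listener so that its
        // handlers receive input events without needing focus.
        MixedRealityToolkit.InputSystem?.Register(this.gameObject);
    }
    void OnDisable()
    {
        MixedRealityToolkit.InputSystem?.Unregister(this.gameObject);
    }
}
```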


and that seemed to work really, really nicely.

That said, I am still pretty concerned that this isn’t going to work on the device itself reliably across invocations of the file dialog as I see the new WindowsSpeechInputProvider implementation using the KeywordRecognizer and I’m not sure that this type behaves well on the device when the application loses/gains focus.

Consequently, I figured that I would use all of this MRTK V2 infrastructure to deliver speech commands to me in the editor but, on the device, I would like to switch it off and rely on the mechanism that I’d previously built which I know works.

I edited my Input system profile in order to try and remove the WindowsSpeechInputProvider outside of the editor and I disabled the WindowsDictationInputProvider altogether;


and I then changed my startup code such that it did different things depending on whether it was in the editor or not;


and my own speech handling code is super, super simple and inefficient but I know that it works on a V1 device so I am trying to keep it largely intact. Here it is below – it essentially keeps creating a SpeechRecognizer (UWP, not Unity) and using it for a single recognition before throwing it away and starting again;

    /// <summary>
    /// Why am I using my own speech handling rather than relying on SpeechInputSource and
    /// SpeechInputHandler? I started using those and they worked fine.
    /// However, I found that my speech commands would stop working across invocations of
    /// the file open dialog. They would work *before* and *stop* after.
    /// I spent a lot of time on this and I found that things would *work* under the debugger
    /// but not without it.
    /// That led me to think that this related to suspend/resume and perhaps HoloLens suspends
    /// the app when you move to the file dialog because I notice that dialog running as its
    /// own app on HoloLens.
    /// I tried hard to do work with suspend/resume but I kept hitting problems and so I wrote
    /// my own code where I try quite hard to avoid a single instance of SpeechRecognizer being
    /// used more than once - i.e. I create it, recognise with it & throw it away each time
    /// as this seems to *actually work* better than any other approach I tried.
    /// I also find that SpeechRecognizer.RecognizeAsync can get into a situation where it
    /// returns "Success" and "Rejected" at the same time & once that happens you don't get
    /// any more recognition unless you throw it away and so that's behind my approach.
    /// </summary>
    async void StartSpeechCommandHandlingAsync()
    {
        while (true)
        {
            var command = await this.SelectSpeechCommandAsync();

            if (command.Action != MixedRealityInputAction.None)
            {
                // Fire whichever handler is mapped to this command's action
                // (the mapping code isn't part of this snippet).
                this.actionHandlers?.FirstOrDefault(
                    h => h.Action == command.Action)?.Handler?.Invoke();
            }
            else
            {
                // Just being paranoid in case we start spinning around here.
                // My expectation is that this code should never/rarely
                // execute.
                await Task.Delay(250);
            }
        }
    }
    async Task<SpeechCommands> SelectSpeechCommandAsync()
    {
        var registeredCommands = MixedRealityToolkit.InputSystem.InputSystemProfile.SpeechCommandsProfile.SpeechCommands;

        SpeechCommands command = default(SpeechCommands);

        using (var recognizer = new SpeechRecognizer())
        {
            recognizer.Constraints.Add(
                new SpeechRecognitionListConstraint(registeredCommands.Select(c => c.Keyword)));

            await recognizer.CompileConstraintsAsync();

            var result = await recognizer.RecognizeAsync();

            if ((result.Status == SpeechRecognitionResultStatus.Success) &&
                ((result.Confidence == SpeechRecognitionConfidence.Medium) ||
                 (result.Confidence == SpeechRecognitionConfidence.High)))
            {
                command = registeredCommands.FirstOrDefault(c => string.Compare(c.Keyword, result.Text, true) == 0);
            }
        }
        return (command);
    }

I suspect that I’ll be revisiting this code once I try and deploy to a device but, for now, it works in the editor and moves me onto my next little challenge.

I also switched off the frame rate profiler by default in the profile;


and implemented my handler to toggle it on/off;


Opening File Dialogs

My application has, initially, a single voice command, “Open”, which raises a file dialog in order to open a glTF model.

However, I’d only written the file open code in order to support opening the file dialog on a UWP device. I hadn’t done the work to make it open in the editor and I realised that this needed addressing, so I quickly amended the method to add an additional piece of code for the non-UWP platform case;

    async Task<string> PickFileFrom3DObjectsFolderAsync()
    {
        var filePath = string.Empty;

#if ENABLE_WINMD_SUPPORT
        // UWP: use the real file dialog, constrained to the 3D Objects folder.
        var known3DObjectsFolder = KnownFolders.Objects3D.Path.ToLower().TrimEnd('\\');

        do
        {
            filePath = await FileDialogHelper.PickGLTFFileAsync();

            // Insist that the chosen file comes from the 3D Objects folder.
            if (!string.IsNullOrEmpty(filePath) &&
                !filePath.ToLower().StartsWith(known3DObjectsFolder))
            {
                filePath = string.Empty;
            }
        } while (filePath == string.Empty);
#else
        // Editor: use the editor's file panel instead.
        filePath = EditorUtility.OpenFilePanelWithFilters(
            "Select GLTF File",
            string.Empty,
            new string[] { "GLTF Files", "gltf,glb", "All Files", "*" });
#endif
        return (filePath);
    }

but I found that even if I could raise the file dialog, I was still getting exceptions opening files…

Loading GLTF Models

The problem that I was hitting was that the GLTFParser was struggling to read the files that I was feeding it and so I decided to take the leap to stop using that code and start using the GLTF code bundled into the MRTK V2.

In the existing code, I make use of a class GLTFSceneImporter to load the one or more files that might make up a GLTF model. In my original blog post I had a few struggles using this in a deterministic way as it’s very coroutine based and I found it hard to be in control of a couple of things;

  • Knowing when it had finished
  • Knowing when it had thrown exceptions

I mentioned these challenges in the original post under the title of “A Small Challenge with Async/Await and CoRoutines” and also “Another Small Challenge with CoRoutines and Unity’s Threading Model”.

At the time, I largely worked around them by writing a base class named ExtendedMonoBehaviour which did some work for me in this regard. It’s in the repo so I won’t call it out in any detail here.

The GLTFSceneImporter delegated the responsibility for actually opening files to an implementation of an interface named ILoader which looks as below;

namespace UnityGLTF.Loader
{
	public interface ILoader
	{
		IEnumerator LoadStream(string relativeFilePath);

		void LoadStreamSync(string jsonFilePath);

		Stream LoadedStream { get; }

		bool HasSyncLoadMethod { get; }
	}
}

This was very useful for me as the user might choose to open a multi-file GLTF file with various separate material files etc. and this is the way in which my code gets to “know” which files have actually been opened. I need this list of files to be able to offer the model over HTTP to other devices that might request it in a shared experience.

In order to use this, I had a class RecordingFileLoader which implemented this ILoader interface and kept track of every file that it successfully opened on behalf of the loader and I passed this around into a couple of places that needed to know about the file list.

Looking at the new MRTK V2 support for GLTF, things seem much improved in that there is a new class GltfUtility which seems to offer an ImportGltfObjectFromPathAsync method. The built-in support for async makes my base class ExtendedMonoBehaviour redundant, but it does leave me with the challenge of trying to figure out how to know which files the code has actually loaded a model from.
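As a sketch of how that loading call shapes up (error handling omitted, and the wrapper class here is my own illustration rather than toolkit code);

```csharp
using Microsoft.MixedReality.Toolkit.Utilities.Gltf.Serialization;
using System.Threading.Tasks;
using UnityEngine;

public static class GltfLoadingExample
{
    // Load a .gltf/.glb file from disk and return the GameObject that
    // the toolkit creates for it.
    public static async Task<GameObject> LoadModelAsync(string filePath)
    {
        var gltfObject = await GltfUtility.ImportGltfObjectFromPathAsync(filePath);

        return (gltfObject?.GameObjectReference);
    }
}
```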

That method returns a GltfObject and I wrote some code which attempts to work out which files were loaded by interrogating the buffers property after it has been populated. I already had this class ImportedModelInfo which wrapped around my RecordingFileLoader and so I modified it to take on this extra functionality;

using Microsoft.MixedReality.Toolkit.Utilities.Gltf.Schema;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using UnityEngine;

public class ImportedModelInfo
{
    public ImportedModelInfo(
        string fullFilePath,
        GltfObject gltfObject)
    {
        // Where were these files loaded from?
        this.BaseDirectoryPath = Path.GetDirectoryName(fullFilePath);

        // What's the name of the file itself?
        this.relativeLoadedFilePaths = new List<string>();
        this.relativeLoadedFilePaths.Add(Path.GetFileName(fullFilePath));

        // Note: At the time of writing, I'm unsure about what the URI property
        // might contain here for buffers and images given that the GLTF spec
        // says that it can be file URIs or data URIs and so what does the GLTF
        // reading code return to me in these cases?

        // I'm expecting Uris like 
        //  foo.bin
        //  subfolder/foo.bin
        //  subfolder/bar/foo.bin

        // and will probably fail if I encounter something other than that.
        var definedUris =
            gltfObject.buffers
                .Where(b => !string.IsNullOrEmpty(b.uri))
                .Select(b => b.uri)
            .Concat(
                gltfObject.images
                    .Where(i => !string.IsNullOrEmpty(i.uri))
                    .Select(i => i.uri));

        this.relativeLoadedFilePaths.AddRange(definedUris);

        this.GameObject = gltfObject.GameObjectReference;
    }
    public string BaseDirectoryPath { get; private set; }
    public IReadOnlyList<string> RelativeLoadedFilePaths => this.relativeLoadedFilePaths.AsReadOnly();
    public GameObject GameObject { get; set; }

    List<string> relativeLoadedFilePaths;
}

with the reworking of one or two other pieces of code that then allowed me to delete my classes RecordingFileLoader and ExtendedMonoBehaviour, which felt good.

I had to do another slight modification to code which had never been run in the editor before because it was expecting to export world anchors but, other than that, it was ok and I could now load at least one GLTF model in the editor as below;


What I couldn’t do was any kind of manipulation on the object, so that was perhaps where I needed to look next, although I suspect that manipulation depends on focus and also relies on there being a collider, which might not be present…

The commit for these pieces is here.


The earlier code would attach this behaviour;

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class FocusWatcher : MonoBehaviour, IFocusable
{
    public void OnFocusEnter()
    {
        focusedObject = this.gameObject;
    }
    public void OnFocusExit()
    {
        focusedObject = null;
    }
    public static bool HasFocusedObject => (focusedObject != null);
    public static GameObject FocusedObject => focusedObject;

    static GameObject focusedObject;
}

to the models that had been loaded such that when voice commands like “reset” or “remove” were used, the code could check the HasFocusedObject property, get the FocusedObject value itself and then would typically look for some other component on that GameObject and make a method call on it to reset its position or remove it from the scene.

It’s questionable as to whether this behaviour should be attached to the objects themselves or whether it should just be a global handler for the whole scene but the effect is the same either way.
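By way of illustration, a hypothetical consumer of that class might look like the sketch below; note that the Resettable component and the OnResetCommand wiring are made up for this example, they aren't the app's actual class names.

```csharp
using UnityEngine;

// Hypothetical: a component that knows how to put a model back to its start position.
public class Resettable : MonoBehaviour
{
    public void ResetPosition() => this.transform.localPosition = Vector3.zero;
}

public class VoiceCommandHandler : MonoBehaviour
{
    // Imagined to be invoked when the "reset" speech keyword is recognised.
    public void OnResetCommand()
    {
        if (FocusWatcher.HasFocusedObject)
        {
            // Look for some other component on the focused GameObject & call
            // a method on it, as described above.
            FocusWatcher.FocusedObject.GetComponent<Resettable>()?.ResetPosition();
        }
    }
}
```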

I need an equivalent in the new MRTK V2 and the natural thing to do would seem to be to reach into the MixedRealityToolkit.InputSystem.FocusProvider and make a call to GetFocusedObject() but that method expects that the caller knows which pointer is in use and I’m not sure that I do.

Instead, I chose to just update the existing class so as to implement IMixedRealityFocusHandler and keep doing what it had been doing before;

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class FocusWatcher : MonoBehaviour, IMixedRealityFocusHandler
{
    public void OnFocusEnter(FocusEventData eventData)
    {
        focusedObject = this.gameObject;
    }
    public void OnFocusExit(FocusEventData eventData)
    {
        focusedObject = null;
    }
    public static bool HasFocusedObject => (focusedObject != null);
    public static GameObject FocusedObject => focusedObject;

    static GameObject focusedObject;
}

but I noticed that I still wasn’t able to interact with the duck – there’s still work to be done 🙂

The commit for this stage is here.


My class which manipulates the cursor for me was still stubbed out and so I attempted to update that from what it had been;

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class CursorManager : MonoBehaviour
{
    private ObjectCursor cursor;

    public void Show() { /* ... */ }
    public void Hide() { /* ... */ }
}

to this version;

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class CursorManager : MonoBehaviour
{
    public CursorManager()
    {
        this.hiddenPointers = new List<IMixedRealityPointer>();
    }
    public void Hide()
    {
        // TODO: I need to understand how you are supposed to do this on V2, I just want
        // to switch all cursors off when the user cannot do anything useful with them.
        foreach (var inputSource in MixedRealityToolkit.InputSystem.DetectedInputSources)
        {
            foreach (var pointer in inputSource.Pointers)
            {
                if ((pointer.IsActive) && (pointer.BaseCursor != null))
                {
                    pointer.BaseCursor.SetVisibility(false);
                    this.hiddenPointers.Add(pointer);
                }
            }
        }
        MixedRealityToolkit.InputSystem.GazeProvider.Enabled = false;
    }
    public void Show()
    {
        foreach (var pointer in this.hiddenPointers)
        {
            pointer.BaseCursor.SetVisibility(true);
        }
        this.hiddenPointers.Clear();

        MixedRealityToolkit.InputSystem.GazeProvider.Enabled = true;
    }
    List<IMixedRealityPointer> hiddenPointers;
}

I’m not sure whether this is “right” or not – once again I find myself puzzling a little over all these pointers and cursors and trying to figure which ones I’m meant to interact with but the code feels reasonably “safe” in that it attempts to put back what it did in the first place so, hopefully, I’m not breaking the toolkit with this.

That commit is here.


Up until now, I’ve left the code which attempts to handle manipulations as it was. That is, there is code in the application;


which attempts to add TwoHandManipulatable to a model which has been loaded from the disk (rather than one which has been received over the network where I don’t allow local manipulations). That TwoHandManipulatable wants a BoundingBoxPrefab and so you can see that my code here has passed such a thing through to it.

It’s probably not too surprising that this isn’t working as it’s mixing MRTK V1 classes with MRTK V2 in the scene so I wouldn’t really expect it to do anything.

Additionally, I’m not sure from looking at the objects in the editor that there is any type of collider being added by the glTF loading code so I probably need to deal with that too.

I suspect then that I’m going to need to add a few pieces here;

  • A BoxCollider to allow for interactions on the model.
  • ManipulationHandler to allow the model to be moved, rotated, etc.
  • NearInteractionGrabbable so that the manipulations cater for both near and far interactions on a HoloLens 2.
  • BoundingBox to provide some visualisation of the interactions with the model.

Additionally, I think that I’m going to want to be able to have quite a bit of control over the settings of some of the materials etc. on the BoundingBox and some of the axes of control on the other pieces and so it feels like it might be a lot easier to set this all up as a prefab that I can build in the editor and then just pass through to this code.

Previously, when loading a model my code took an approach of something like this;

  • load the GLTF model, giving a new GameObject with a collider already on it
  • create a new object to act as the model’s parent, parenting this object itself off some root parent within the scene
  • position the parent object 3m down the user’s gaze vector, facing the user
  • attach a world anchor to the parent object both for stability but also so it can be exported to other devices
  • add manipulation behaviours to the GLTF model itself so that it can be moved, rotated, scaled underneath its parent which is anchored

I decided to change this slightly for the new toolkit to;

  • load the GLTF model, giving a new GameObject ( M )
  • create a new object ( A ) to act as the anchored parent
  • create a new object to act as the model’s parent ( P ) from a prefab where BoxCollider, ManipulationHandler, NearInteractionGrabbable, BoundingBox are already present and configured on that prefab
  • parent M under P, P under A, A under R
  • add a world anchor to A

and that lets me slip this prefab into the hierarchy like adding an item into a linked-list so as to let the prefab bring a bunch of behaviour with it.
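That re-parenting step can be sketched as below; this is illustrative code under my own names (AddModelToScene, the parameter names), not the app's exact implementation.

```csharp
using UnityEngine;

public static class ModelHierarchyHelper
{
    // Sketch: model (M) goes under the manipulation prefab (P), P goes under
    // the anchored parent (A), A goes under the scene root (R).
    public static GameObject AddModelToScene(
        GameObject model, GameObject manipulationPrefab, Transform sceneRoot)
    {
        // A: a plain object which will take the world anchor.
        var anchorParent = new GameObject("AnchorParent");
        anchorParent.transform.SetParent(sceneRoot, false);

        // P: instantiated from a prefab which already has BoxCollider,
        // ManipulationHandler, NearInteractionGrabbable & BoundingBox configured.
        var modelParent = Object.Instantiate(manipulationPrefab, anchorParent.transform);

        // M: the loaded glTF model slots in underneath.
        model.transform.SetParent(modelParent.transform, false);

        // The anchor goes onto A so that P & M can still be manipulated beneath it.
        anchorParent.AddComponent<UnityEngine.XR.WSA.WorldAnchor>();

        return anchorParent;
    }
}
```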

That prefab is as below;


and I tweaked a few materials and settings both on the BoundingBox largely based on examples that I looked at in the example scenes from the toolkit;




Changing the hierarchy of the components that are set up when a model is loaded into the scene had some impact on my scripts which create/access world anchors and on my scripts which tried to watch for object transformations to send/receive over the network and so I had to make a few related changes here to patch that up and pass a few objects to the right place but I’ll keep that detail out of the post.

It also broke my simplistic FocusWatcher class because that class expected that the GameObject which had focus would be the model itself, with direct access to the various behaviours that I had added to it, whereas, now, that object is buried in a bit of hierarchy. So, I got rid of the FocusWatcher altogether at this point and tried to write this method which would hopefully return to me all focused objects which had a particular component within their hierarchy;

    IEnumerable<T> GetFocusedObjectWithChildComponent<T>() where T : MonoBehaviour
    {
        // TODO: I need to figure whether this is the right way to do things. Is it right
        // to get all the active pointers, ask them what is focused & then use that as
        // the list of focused objects?
        var pointers = MixedRealityToolkit.InputSystem.FocusProvider.GetPointers<IMixedRealityPointer>()
            .Where(p => p.IsActive);

        foreach (var pointer in pointers)
        {
            FocusDetails focusDetails;

            if (MixedRealityToolkit.InputSystem.FocusProvider.TryGetFocusDetails(
                pointer, out focusDetails))
            {
                var component = focusDetails.Object?.GetComponentInChildren<T>();

                if (component != null)
                {
                    yield return component;
                }
            }
        }
    }
whether this is a good thing to do or not, I’m not yet sure but for my app it’s only called on a couple of voice commands so it shouldn’t be executing very frequently.

I tried this out in the editor and I seemed to be at a place where I could open glTF models and use near and far interactions to transform them as below;


the commit for this stage is here.

Removing the MRTK V1

At this point, I felt like I was done with the MRTK V1 apart from the ProgressRingIndicator which I am still using so I need to preserve it in my project for now.

I made a new folder named TookitV1 and I moved across the Progress related pieces which appeared to be;

  • Animations – the contents of the Progress folder
  • Fonts – I copied all of these
  • Materials – I copied only ButtonIconMaterial here
  • Prefabs – the contents of the Progress folder
  • Scripts – the contents of the Progress folder

I did a quick commit and then deleted the HoloToolkit folder and I also deleted the UnityGLTF folder as I should, at this point, not be using anything from those 2 places.

At this point, the ProgressIndicator blew up compiling and told me that it was missing the HoloToolkit.Unity namespace (easily fixed) and that it wanted to derive from Singleton<T> but I found that easy enough to fix by just changing the base class to MonoBehaviour and adding a static Instance property which was set to the first instance which spun up in the application.
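That swap is essentially the pattern below; a sketch rather than the app's exact code, with the member bodies elided.

```csharp
using UnityEngine;

// Previously: public class ProgressIndicator : Singleton<ProgressIndicator>
// Now: derive from MonoBehaviour & record the first instance that spins up.
public class ProgressIndicator : MonoBehaviour
{
    public static ProgressIndicator Instance { get; private set; }

    void Awake()
    {
        // Keep the first instance created in the application.
        if (Instance == null)
        {
            Instance = this;
        }
    }
    // ...the rest of the ProgressIndicator implementation as before...
}
```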

I still had problems though in that I had a couple of missing scripts in the prefab for the ProgressIndicator and I tried to replicate what had been there previously with the SolverHandler and Orbital as below


and I had to patch a couple of materials but, other than that, the MRTK V1 was gone and the app seemed to continue to function in the editor.

The commit is here.

Removing MRTK V1 and UnityGLTF as Submodules

I had previously included the MRTK V1 and UnityGLTF github repos as submodules of my repo and I no longer need them so removing them would make the repo a lot cleaner.

Additionally, I had a setup.bat script which attempted to move a lot of files around, do some preliminary building of Unity GLTF etc. and I no longer need that either.

I should be in a state on this branch where the project can “simply” be pulled from github and built.

With that in mind, I attempted to remove both of those submodules following the procedure described here as I’ve done this once or twice but I can never remember how you’re meant to do it.

I also removed the setup.bat and altered the

Now, usually, when I do so many things at once some thing goes wrong so the next step was to…

Make a Clean Folder, Clone the Repo, Fix Problems

I cloned the repo again recursively into a new, clean folder with git clone --recursive and then switched to the V2WorkBlogPost branch. I noticed that git struggled to remove the MixedRealityToolkit-Unity and the UnityGLTF folders which had been created/populated as part of bringing down the recursive repo so I got rid of them manually (I’ll admit that the finer details of submodules are a bit of a mystery to me).

I reopened that project in Unity and, remarkably, all seemed to be fine – the project ran fine in the editor once I’d switched platforms & I didn’t seem to have missed files from my commits.

The commit is here.

Deploying to a Device

At this point, it felt like it was time to build for a device and see how the application was running as I find that there are often pieces of functionality that work ok in the editor but fail on a device.

I only have a HoloLens 1 device with me at the time of writing and so I built for HoloLens 1; I can’t try things out on a HoloLens 2 right now.

In trying to build for the device I hit an immediate failure;

“IOException: Sharing violation on path C:\Data\temp\blogpost\GLTF-Model-Viewer\GLTFModelViewer\Temp\StagingArea\Data\Managed\tempStrip\UnityEngine.AudioModule.dll”

but I see this quite frequently with Unity at the moment and so I did a quick restart (and shut down Visual Studio) but then I got hit with;

“Copying assembly from ‘Temp/Unity.TextMeshPro.dll’ to ‘Library/ScriptAssemblies/Unity.TextMeshPro.dll’ failed”

which is another transient error I see quite a lot so I did some more restarts (of both Unity and the Unity Hub) and managed to produce a successful VS build which seemed to deploy ok and run fine;


In deploying to the device, I also did some basic tests of the multi-user network sharing functionality which also seemed to be working fine.

Other Rework – Mixed Reality Extension Services

There are a few places in this code base where I make use of “services” which are really “global” across the project. As examples;

  • I have a class StorageFolderWebServer which, in a limited way, takes a UWP StorageFolder and makes some of its content available over HTTP via HttpListener
  • I have a NetworkMessageProvider which facilitates the shared experience by multicasting and receiving New Model, Transformed Model, Deleted Model messages around the local network.
    • This sits on top of a MessageService which simply knows how to Send/Receive messages having initially joined a multicast group.
  • I have a MessageDialogHelper which shows message boxes without blowing up the Unity/UWP threads.
  • I have a FileDialogHelper which shows a file dialog without blowing up the Unity/UWP threads.

Most of these could probably just be static classes but I feel that they are really providing services which may/may not have some configurable element to them and which other pieces of code just need to look up somewhere in a registry and make use of, thereby allowing them to be replaced at some point in the future.

As the MRTK V2 provides a form of service registry via the means of “extensions” to the toolkit, I thought it would make sense to try that out and see if I could refactor some code to work that way.

By way of example, I started with my MessageService class and extracted an interface from it deriving it from IMixedRealityExtensionService;

using Microsoft.MixedReality.Toolkit;
using System;

namespace MulticastMessaging
{
    public interface IMessageService : IMixedRealityExtensionService
    {
        MessageRegistrar MessageRegistrar { get; set; }
        void Close();
        void Open();
        void Send<T>(T message, Action<bool> callback = null) where T : Message;
    }
}

and then I defined a profile class for my service with the sorts of properties that I might want to set on it;

using Microsoft.MixedReality.Toolkit;
using UnityEngine;

namespace MulticastMessaging
{
    [CreateAssetMenu(
        menuName = "Mixed Reality Toolkit/Message Service Profile",
        fileName = "MessageServiceProfile")]
    public class MessageServiceProfile : BaseMixedRealityProfile
    {
        [Tooltip("The address to use for multicast messaging")]
        public string multicastAddress = "";

        [Tooltip("The port to use for multicast messaging")]
        public int multicastPort = 49152;
    }
}

and then implemented that on my MessageService class deriving that from BaseExtensionService and marking it with a MixedRealityExtensionService attribute as you see below;

namespace MulticastMessaging
{
    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    [MixedRealityExtensionService(SupportedPlatforms.WindowsUniversal | SupportedPlatforms.WindowsEditor)]
    public class MessageService : BaseExtensionService, IMessageService
    {
        // Note: 239.0.0.0 is the start of the UDP multicast address range reserved
        // for private use.
        // Note: 49152 is the result I get out of executing;
        //      netsh int ipv4 show dynamicport udp
        // on Windows 10.
        public MessageService(
            IMixedRealityServiceRegistrar registrar,
            string name,
            uint priority,
            BaseMixedRealityProfile profile) : base(registrar, name, priority, profile)
        {
        }

        MessageServiceProfile Profile => base.ConfigurationProfile as MessageServiceProfile;

        // ...
    }
}

Clearly, that’s not the whole code but note the use of the MixedRealityExtensionService attribute and also the reach into the base class to get the ConfigurationProfile and cast it to the concrete type of my actual profile.

With that in place, I can now use the editor to create one of those profiles;


and then I can add my new service to extensions of the toolkit;


and then change my code to grab hold of the instance via


whenever I want to get hold of the instance of that service.
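From memory of the RC2-era API, that lookup is just a call into the toolkit's service registry, something along these lines (a sketch; GetService<T> is the registrar method I believe applies here):

```csharp
// Sketch: fetching the registered extension service wherever it's needed.
var messageService = MixedRealityToolkit.Instance.GetService<IMessageService>();

messageService?.Open();
```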

In this branch, I only added two services this way – my networking provider and my messaging service but in my V2Work branch I’ve made more of these services and plan to rework a few more pieces in this way;


The commit at this point is here.

Wrapping Up

I wanted to go around the loop again on this exercise partly to make my own notes around things that I have perhaps forgotten and partly in case there were some pieces that others might pick up on and share.

I’m not planning to take this V2WorkBlogPost branch any further or release anything from it because I’ve already done the port in my V2Work branch and I want to move that forward and, ultimately, merge back into master from there but I did learn a few things by repeating the exercise, namely;

  1. I can do a better job of making speech work in the editor and at runtime.
  2. I should make more extension services for some of the other pieces of my app.
  3. I did a better job of leaving the MRTK V1 in the code base until I really no longer needed it whereas first time around I removed it too early and got in a bit of a mess 🙂
  4. I realised that more of the app functionality needs to work in the editor and I can improve that but there’s still a way to go as I haven’t made attempts to have all of it work in the editor.

I hope that there was something useful in here for readers (if any get this far to the end of the post) and good luck in porting your own apps across to MRTK V2 🙂

A Simple glTF Viewer for HoloLens

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

A quick note – I’m still not sure whether I should bring this blog back to life having paused it but I had a number of things running around in my head that are easier if written down and so that’s what I’ve done 🙂

Viewing 3D Files on a HoloLens

A few weeks ago, a colleague came to me with 2 3D models packaged in files and said “I just want to show these 2 models to a customer on a HoloLens”.

I said to him;

“No problem, open up the files in 3D Viewer on the PC, have a look at them and then transfer them over to HoloLens and view them in 3D Viewer there”

Having passed on this great advice, I thought I’d better try it out myself and, like much of my best advice, it didn’t actually work 😉

Here’s why it doesn’t work. I won’t use the actual models in this blog post so let’s assume that it was this model from Remix3D;


Now, I can open that model in Paint3D or 3D Viewer, both of which are free and built-in on Windows 10 and I can get a view something like this one;


which tells me that this model is 68,000 polygons so it’s not a tiny model but it’s not a particularly big one either and I’d expect that it would display fine on a mobile device, which might not be the case if it was 10x or 100x as big.

Now, knowing that there’s an application on my PC called “3D Viewer” and knowing that there’s one on my HoloLens called “3D Viewer” might lead me to believe that they are the same application with the same set of capabilities and so I might just expect to be able to move to the HoloLens, run the 3D Viewer application there and open the same model.

But I can’t.

3D Viewer on PC

If you run up the 3D Viewer on a PC then you get an app which runs in a Window and which displays a 3D model with a whole range of options including being able to control how the model is rendered, interacting with animations, changing the lighting and so on;


The application lets you easily load up model files from the file system or from the Remix3D site;


You can also use this application to “insert” the model into the real-world via a “Mixed Reality” mode as below;


I’d say that (for me) this is very much at the “Augmented Reality” end of the spectrum in that while the model here might look like it’s sitting on my monitor, I can actually place it in mid-air so I’m not sure that it’s really identifying planes for the model to sit on. I can pick up my laptop and wander around the model and that works to some extent although I find it fairly easy to confuse the app.

One other thing that I’d say in passing is that I have no knowledge around how this application offers this experience or how a developer would build a similar experience – I’m unaware of any platform APIs that help you build this type of thing for a PC using a regular webcam in this way.

3D Viewer on HoloLens

3D Viewer on HoloLens also runs in a window as you can see here;


and you can also open up files from the file system or from the Remix3D site or from a pre-selected list of “Holograms” which line up with the content that used to be available in the original “Holograms” app going all the way back to when the device was first made available.

The (understandable) difference here is that when you open a model, it is not displayed as a 3D object inside of the application’s Window as that would be a bit lame on a HoloLens device.

Instead, the model is added to the HoloLens shell as shown below;


This makes sense and it’s very cool but it’s not really an immersive viewing application – it’s a 2D application which is invoking the shell to display a 3D object.

As an aside, it’s easy to ask the Shell to display a 3D object using a URI scheme and I wrote about that here a while ago and I suspect (i.e. I don’t know) that this is what the 3D Viewer application is doing here;

Placing 3D Models in the Mixed Reality Home

The other aspect of this is that 3D models displayed by the Shell have requirements;

Create 3D models for use in the home

and so you can’t just display an arbitrary model here and I tend to find that most models that I try and use in this way don’t work.

For example, if we go back to the model of a Surface Book 2 that I displayed in 3D Viewer on my PC then I can easily copy that model across to my HoloLens using the built-in “Media Transfer Protocol” support which lets me just see the device’s storage in Explorer once I’ve connected it via USB and then open it up in 3D Viewer where I see;


and so I find that, regardless of their polygon count, most general models don’t open within the 3D Viewer on HoloLens – they tend to display this message instead, and that’s understandable given that the application is trying to;

  • do the right thing by not having the user open up huge models that won’t then render well
  • integrate the models into the Shell experience which has requirements that presumably can’t just be ignored.

So, if you want a simple viewer which just displays an arbitrary model in an immersive setting then 3D Viewer isn’t quite so general purpose.

This left me stuck with my colleague who wanted something simple to display his models and so I made the classic mistake.

I said “I’ll write one for you” 😉

This Does Not Solve the Large/Complex Models Problem

I should stress that me setting off to write a simple, custom viewer is never going to solve the problem of displaying large, complex 3D models on a mobile device like a HoloLens and, typically, you need to think about some strategy for dealing with that type of complexity on a mobile device. There are guidelines around this type of thing here;

Performance recommendations for HoloLens apps

and there are tools/services out there to help with this type of thing including tools like;

My colleague originally provided me with a 40K polygon model and a 500K polygon model.

I left the 40K model alone and used 3DS Max to optimise the 500K poly model down to around 100K which rendered fine for me on HoloLens through the application that I ended up building.

It took a bit of experimentation in the different tools to find the right way to go about it as some tools failed to load the models, others produced results that didn’t look great, etc. but it didn’t take too long to decimate the larger one.

Building glTF Viewer Version 1.0

So, to help out with the promise I’d made to my colleague, I built a simple app. It’s in the Windows Store over here and the source for it is on Github over here.

It’s currently heading towards Version 2.0 when I merge the branches back together and get the Store submission done.

For version 1.0, what I wanted was something that would allow a user to;

  • open a 3D file in .GLB/.GLTF format from their HoloLens storage.
  • display the 3D model from it.
  • manipulate it by scaling, rotating and translating.
  • have as little UI as possible and drive any needed interactions through speech.

and that was pretty much all that I wanted – I wanted to keep it very simple and as part of that I decided to deliberately avoid;

  • anything to do with other 3D model file formats but was, instead, quite happy to assume that people would find conversion tools (e.g. Paint3D, 3D Builder, etc) that could generate single file (.GLB) or multi-file (.GLTF) model files for them to import.
  • any attempt to open up files from cloud locations via OneDrive etc.

With that in mind, I set about trying to build out a new Unity-based application and I made a couple of quick choices;

  • that I would use the Mixed Reality Toolkit for Unity and I chose to use the current version of the Toolkit rather than the vNext toolkit as that’s still “work in progress” although I plan to port at a later point.
    • this meant that I could follow guidance and use the LTS release of Unity – i.e. a 2017.4.* version which is meant to work nicely with the toolkit.
  • that I would use UnityGLTF as a way of reading GLTF files inside of Unity.
  • that I would use sub-modules in git as a way of bringing those two repos into my project as described by my friend Martin over here.

I also made a choice that I would use standard file dialogs for opening up files within my application. This might seem like an obvious choice but those dialogs only really work nicely once your HoloLens is running on the “Redstone 5” version of Windows 10 as documented here;

Mixed Reality Release Notes – Current Release Notes

and so I was limiting myself to only running on devices that are up-to-date but I don’t think that’s a big deal for HoloLens users.

In terms of how the application is put together, it’s a fairly simple Unity application using only a couple of features from the Mixed Reality Toolkit beyond the base support for cameras, input etc.

Generally, beyond a few small snags with Unity when it came to generating the right set of assets for the Windows Store I got that application built pretty quickly & submitted it to the Store.

However, I did hit a few small challenges…

A Small Challenge with UnityGLTF

I did hit a bit of a snag because the Mixed Reality Toolkit makes some use of pieces from a specific version of UnityGLTF to provide functionality which loads the Windows Mixed Reality controller models when running on an immersive headset.

UnityGLTF (scripts and binaries) in the Mixed Reality Toolkit

I wanted to be able to bring all of UnityGLTF (a later version) into my project alongside the Mixed Reality Toolkit and so that caused problems because both scripts & binaries would be duplicated and Unity wasn’t very happy about that 🙂

I wrote a little ‘setup’ script to remove the GLTF folder from the Mixed Reality Toolkit which was ok except it left me with a single script named MotionControllerVisualizer.cs which wouldn’t build because it had a dependency on UnityGLTF methods that were no longer part of the Unity GLTF code-base (i.e. I happened to have the piece of code which seemed to have an out-of-date dependency).

That was a little tricky for me to fix so I got rid of that script too and fixed up the scripts that took a dependency on it by adding my own, mock implementation of that class into my project knowing that nothing in my project was ever going to display a motion controller anyway.

It’s all a bit “hacky” but it got me to the point where I could combine the MRTK and UnityGLTF in one place and build out what I wanted.

A Small Challenge with Async/Await and CoRoutines

One other small challenge that I hit while putting together my version 1.0 application is the mixing of the C# async/await model with Unity’s CoRoutines.

I’ve hit this before and I fully understand where Unity has come from in terms of using CoRoutines but it still bites me in places and, specifically, it bit me a little here in that I had code which was using routines within the UnityGLTF which are CoRoutine based and I needed to get more information around;

  • when that code completed
  • what exceptions (if any) got thrown by that code

There’s a lot of posts out there on the web around this area including these examples;

and in my specific case I had to write some extra code to try and glue together running a CoRoutine, catching exceptions from it and tying it into async/await but it wasn’t too challenging, it just felt like “extra work” that I’m sure in later years won’t have to be done as these two models get better aligned. Ironically, this situation was possibly more clear-cut when async/await weren’t really available to use inside of Unity’s scripts.
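That glue code is essentially the standard TaskCompletionSource pattern; a sketch of the idea (my own illustrative code, not the app's exact implementation):

```csharp
using System;
using System.Collections;
using System.Threading.Tasks;
using UnityEngine;

public static class CoroutineExtensions
{
    // Runs a coroutine on the host behaviour & exposes its completion (or its
    // exception) as an awaitable Task.
    public static Task AsTask(this MonoBehaviour host, IEnumerator coroutine)
    {
        var tcs = new TaskCompletionSource<bool>();
        host.StartCoroutine(Wrap(coroutine, tcs));
        return tcs.Task;
    }
    static IEnumerator Wrap(IEnumerator inner, TaskCompletionSource<bool> tcs)
    {
        while (true)
        {
            object current;
            try
            {
                // Step the inner coroutine ourselves so that we can catch
                // anything it throws; Unity won't surface that for us.
                if (!inner.MoveNext())
                {
                    break;
                }
                current = inner.Current;
            }
            catch (Exception ex)
            {
                tcs.SetException(ex);
                yield break;
            }
            yield return current;
        }
        tcs.SetResult(true);
    }
}
```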

Another Small Challenge with CoRoutines & Unity’s Threading Model

Another small challenge here is that the UnityGLTF code which loads a model needs, naturally, to create GameObjects and other UI constructs inside of Unity which aren’t thread-safe and have affinity to the UI thread. So, there’s no real opportunity to run this potentially expensive CoRoutine on some background thread but, rather, it hogs the UI thread a bit while it’s loading and creating GameObjects.

I don’t think that’s radically different from other UI frameworks but I did contemplate trying to abstract out the creation of the UI objects so as to defer it until some later point when it could all be done in one go but I haven’t attempted to do that and so, currently, while the GLTF loading is happening my UI is displaying a progress wheel which can miss a few updates 🙁

Building glTF Viewer Version 2.0

Having produced my little Version 1.0 app and submitted it to the Store, the one thing that I really wanted to add was the support for a “shared holographic experience” such that multiple users could see the same model in the same physical place. It’s a common thing to want to do with HoloLens and it seems to be found more in large, complex, enterprise apps than in just simple, free tools from the Store and so I thought I would try and rectify that a little.

In doing so, I wanted to try and keep any network “infrastructure” as minimal as possible and so I went with the following assumptions.

  • that the devices that wanted to share a hologram were in the same space on the same network and that network would allow multicast packets.
  • sharing is assumed in the sense that the experience would automatically share holograms rather than the user having to take some extra steps.
  • that not all the devices would necessarily have the files for the models that are loaded on the other devices.
  • that there would be no server or cloud connectivity required.

The way in which I implemented this centres around a HoloLens running the glTF Viewer app acting as a basic web server which serves content out of its 3D Objects folder such that other devices can request that content and copy it into their own 3D Objects folder.

The app then operates as below to enable sharing;

  • When a model is opened on a device
    • The model is given a unique ID.
    • A list of all the files involved in the model is collected (as GLTF models can be packaged as many files) as the model is opened.
    • A file is written to the 3D Objects folder storing a relative URI for each of these files to be obtained remotely by another device.
    • A spatial anchor for the model is exported into another file stored in the 3D Objects folder.
    • A UDP message is multi-casted to announce that a new model (with an ID) is now available from a device (with an IP address).
    • The model is made so that it can be manipulated (scale, rotate, translate) and those manipulations (relative to the parent) are multi-cast over the network with the model identifier attached to them.
  • When a UDP message announcing a new model is received on a device
    • The device asks the user whether they want to access that model.
    • The device does web requests to the originating device asking for the URIs for all the files involved in that model.
    • The device downloads (if necessary) each model file to the same location in its 3D Objects folder.
    • The device downloads the spatial anchor file.
    • The device displays the model from its own local storage & attaches the spatial anchor to place it in the same position in the real world.
    • The model is made so that it cannot be manipulated but, instead, picks up any UDP multicasts with update transformations and applies them to the model (relative to its parent which is anchored).

and that’s pretty much it.

This is all predicated on the idea that I can have a HoloLens application which is acting as a web server and I had in mind that this should be fairly easy because UWP applications (from 16299+) now support .NET Standard 2.0 and HttpListener is part of .NET Standard 2.0 and so I could see no real challenge with using that type inside of my application as I’d written about here;

UWP and .NET Standard 2.0–Remembering the ‘Forgotten’ APIs 🙂

but there were a few challenges that I met with along the way.
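To make that concrete, the rough shape I had in mind is something like the sketch below – the class name, port handling and folder handling are all illustrative rather than the app’s actual code (and note that serving out of a brokered folder like 3D Objects has its own complications, as described later in this post);

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

// Sketch: an HttpListener inside the app serving files out of a folder so
// that other devices on the network can download model files from it.
public class ModelFileServer
{
    readonly string rootFolder;
    readonly HttpListener listener = new HttpListener();

    public ModelFileServer(string rootFolder, int port)
    {
        this.rootFolder = rootFolder;
        this.listener.Prefixes.Add($"http://+:{port}/");
    }

    public async Task RunAsync()
    {
        this.listener.Start();
        while (true)
        {
            HttpListenerContext context = await this.listener.GetContextAsync();

            // Map the request path onto a file beneath the served folder.
            string localPath = Path.Combine(
                this.rootFolder, context.Request.Url.AbsolutePath.TrimStart('/'));

            if (File.Exists(localPath))
            {
                byte[] content = File.ReadAllBytes(localPath);
                context.Response.ContentLength64 = content.Length;
                await context.Response.OutputStream.WriteAsync(content, 0, content.Length);
            }
            else
            {
                context.Response.StatusCode = 404;
            }
            context.Response.Close();
        }
    }
}
```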

Challenge 1 – Picking up .NET Standard 2.0

I should say that I’m long past the point of being worried about being seen to not understand something and am more at the point of realising that I don’t really understand anything 🙂

I absolutely did not understand the ramifications of wanting to modify my existing Unity project to start making use of HttpListener 🙂

Fairly early on, I came to a conclusion that I wasn’t going to be able to use HttpListener inside of a Unity 2017.4.* project.

Generally, the way in which I’ve been developing in Unity for HoloLens runs something like this;

  • I am building for the UWP so that’s my platform.
  • I use the .NET scripting backend.
  • I write code in the editor and I hide quite a lot of code from the editor behind ENABLE_WINMD_SUPPORT conditional compilation because the editor runs on Mono and it doesn’t understand the UWP API surface.
  • I press the build button in Unity to generate a C#/.NET project in Visual Studio.
  • I build that project and can then use it to deploy, debug my C#/UWP application and generate store packages and so on.

It’s fairly simple and, while it takes longer than just working in Visual Studio, you get used to it over time.

One thing that I haven’t really paid attention to as part of that process is that even if I select the very latest Windows SDK in Unity as below;


then the Visual Studio project that Unity generates doesn’t pick up the latest .NET packages but, instead, seems to downgrade my .NET version as below;


I’d struggled with this before (in this post under the “Package Downgrade Issue”) without really understanding it but I think I came to a better understanding of this as part of trying to get HttpListener into my project here.

In bringing in HttpListener, I hit build problems and I instantly assumed that I needed to upgrade Unity because Unity 2017.* does not offer .NET Standard 2.0 as an API Compatibility Level as below;


and I’d seen that Unity 2018.* had started to support .NET Standard 2.0;

Updated scripting runtime in Unity 2018.1: What does the future hold?

and so needing to move to a Unity 2018.* version in order to pick it up didn’t surprise me. I got hold of version 2018.2.16f1, opened my project up in there and switched to .NET Standard 2.0 and that seemed like a fine thing to do;


but it left me with esoteric build failures as I hadn’t realised that Unity’s deprecation of the .NET Scripting Backend as per this post;

Deprecation of support for the .Net Scripting backend used by the Universal Windows Platform

had a specific impact: new things which came along, like SDK 16299 with its support for .NET Standard 2.0, didn’t get implemented in the .NET Scripting Backend for Unity.

They are only present in the IL2CPP backend and I presume that’s why my generated .NET projects have been downgrading the .NET package used.

So, if you want .NET Standard 2.0 then you need SDK 16299+, that dictates Unity 2018.*, and that in turn dictates moving to the IL2CPP backend rather than the .NET backend.

I verified this over here by asking Unity about it;

2018.2.16f1, UWP, .NET Scripting Backend, .NET Standard 2.0 Build Errors

and that confirms that the .NET Standard 2.0 APIs are usable from the editor and from the IL2CPP back-end but they aren’t going to work if you’re using .NET Scripting Backend.

I did try. I hid my .NET code in libraries and referenced them but, much like the helpful person told me on the forum – “that didn’t work”.

Challenge 2 – Building and Debugging with IL2CPP on UWP/HoloLens

Switching to the IL2CPP back-end really changed my workflow around Unity. Specifically, it emphasised that I need to spend as much time as possible in the editor because I find that the two phases of;

  • building inside of the Unity editor
  • building the C++ project generated by the Unity editor

are a much lengthier process than doing the same thing on the .NET backend and Unity has an article about trying to improve this;

Optimizing IL2CPP build times

but I didn’t really find that I could get my build times down by much and maybe a one-line change could take me into a 20-minute+ build cycle.

The other switch in my workflow was around debugging. There are a couple of options here. It’s possible to debug the generated C++ code and Unity has an article on it here;

Universal Windows Platform: Debugging on IL2CPP Scripting Backend

but I’d have to say that it’s pretty unproductive trying to find the right piece of code and then step your way through generated C++ which looks like;

but you can do it and I’ve had some success with it. One aspect of it is “easy” in that you just open the project, point it at a HoloLens/emulator for deployment, press F5 and it works.

The other approach is to debug the .NET code because Unity does have support for this as per this thread;

About IL2CPP Managed Debugger

and the details are given again in this article;

Universal Windows Platform: Debugging on IL2CPP Scripting Backend

although I would pay very close attention to the settings that control this as below;


and I’d also pay very close attention to the capabilities that your application must have in order to operate as a debuggee. I had to ask on the Unity Forums how to get this working;

Unity 2018.2.16f1, UWP, IL2CPP, HoloLens RS5 and Managed Debugging Problems

but I did get it to work pretty reliably on HoloLens in the end but I’d flag a few things that I found;

  • sometimes the debugger wouldn’t attach to my app and I’d have to restart the app – it would be listed as a target in Unity’s “Attach To” dialog in Visual Studio but attaching just did nothing.
  • the debugger can be very slow – sometimes I’d wait a long time for breakpoints to become active.
  • the debugger quite often seems to step into places where it can’t figure out the stack frame – pressing F10 seemed to fix that.
  • the debugger’s step-over/step-into sometimes didn’t seem to work.
  • the debugger’s handling of async/await code could be a bit odd – the instruction pointer would jump around in Visual Studio as though it had got lost but the code seemed to be working.
  • hovering over variables and putting them into the watch windows was quite hit-and-miss.
  • evaluating arbitrary .NET code in the debugger doesn’t seem to work (I’m not really surprised).
  • breaking on exceptions isn’t a feature as far as I can tell – I think the debugger tells you so as you attach – but I’m quite a fan of stopping on first-chance exceptions as a way of seeing what code is doing.

I think that Unity is working on all of this and I’ve found them to be great in responding on their forums and on Twitter, it’s very impressive.

In my workflow, I tended to use both the native debugger & the managed debugger to try and diagnose problems.

One other thing that I did find – I had some differences in behaviour between my app when I built it with “script debugging” and when I didn’t. It didn’t affect me too much but it did lower my overall level of confidence in the process.

Putting that to one side, I’d found that I could move my existing V1.0 project into Unity 2018.* and change the backend from .NET to IL2CPP and I could then make use of types like HttpListener and build and debug.

However, I found that the code stopped working 🙂

Challenge 3 – File APIs Change with .NET Standard 2.0 on UWP

I hadn’t quite seen this one coming. There’s a piece of code within UnityGLTF which loads files;


In my app, I open a file dialog, have the user select a file (which might result in loading 1 or many files depending on whether this is a single-file or multi-file model) and it runs through a variant of this FileLoader code.

That code uses File.Exists() and File.OpenRead() and, suddenly, I found that the code was no longer working for files which did exist and which my UWP app did have access to.

It’s important to note that the file in question would be a brokered file for the UWP app (i.e. one which it accesses via a broker to ensure it has the right permissions) rather than, say, a file within the app’s own package or its own dedicated storage. In particular, my file would reside within the 3D Objects folder.

How could that break? It comes back to .NET Standard 2.0 because these types of File.* functions work differently for UWP brokered files depending on whether you are on SDK 16299+ with .NET Standard 2.0 or on an earlier SDK before .NET Standard 2.0 came along.

The thorny details of that are covered in this forum post;

File IO operations not working when broadFileSystemAccess capability declared

which gives some of the detail but, essentially, for my use case File.Exists and File.OpenRead were now causing me problems and so I had to replace some of that code which brings me back to…

Challenge 4 – Back to CoRoutines, Enumerators and Async

As I flagged earlier, mixing and matching an async model based around CoRoutines in Unity (which is mostly AFAIK about asynchronous rather than concurrent code) with one based around Tasks can be a bit of a challenge.

With the breaking change to File.OpenRead(), I had to revisit the FileLoader code and modify it such that it still presented an IEnumerator-based pattern to the rest of the UnityGLTF code while, internally, it needed to move from using the synchronous File.OpenRead() to the asynchronous StorageFile.OpenReadAsync().
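As an illustration of the kind of replacement involved (the names here are mine for illustration, not the repo’s code), the brokered-file route via StorageFile looks something like;

```csharp
using System.IO;
using System.Threading.Tasks;
using Windows.Storage;

public static class BrokeredFileIo
{
    // Replacement for File.OpenRead() for brokered locations like 3D Objects -
    // going via StorageFile lets the UWP broker handle the access check.
    public static async Task<Stream> OpenReadAsync(string path)
    {
        StorageFile file = await StorageFile.GetFileFromPathAsync(path);
        var randomAccessStream = await file.OpenReadAsync();
        return randomAccessStream.AsStreamForRead();
    }

    // Replacement for File.Exists() - ask the containing folder for the item
    // rather than probing the path directly.
    public static async Task<bool> ExistsAsync(string folderPath, string fileName)
    {
        StorageFolder folder = await StorageFolder.GetFolderFromPathAsync(folderPath);
        return (await folder.TryGetItemAsync(fileName)) != null;
    }
}
```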

It’s not code that I’m particularly proud of and I wouldn’t like to highlight it here but it felt like one of those situations where I got boxed into a corner and had to make the best of what I had to work with 🙂
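The pattern that I ended up with is roughly the one sketched below – start the Task, then yield once per frame until it completes. The class and member names are illustrative rather than the real FileLoader code;

```csharp
using System;
using System.Collections;
using System.IO;
using System.Threading.Tasks;

// Illustrative sketch: present an IEnumerator (coroutine) surface to callers
// while the real work happens in an async method supplied by the caller.
public class CoroutineFileLoader
{
    readonly Func<string, Task<Stream>> openAsync;

    public CoroutineFileLoader(Func<string, Task<Stream>> openAsync)
    {
        this.openAsync = openAsync;
    }

    public Stream LoadedStream { get; private set; }

    public IEnumerator LoadStream(string path)
    {
        Task<Stream> task = this.openAsync(path);

        // Yield control back to Unity once per frame until the Task is done.
        while (!task.IsCompleted)
        {
            yield return null;
        }
        if (task.IsFaulted)
        {
            throw task.Exception.InnerException;
        }
        this.LoadedStream = task.Result;
    }
}
```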

Challenge 5 – ProgressRings in the Mixed Reality Toolkit

I’m embarrassed to admit that I spent a lot longer trying to get a ProgressRing from the Mixed Reality Toolkit to work than I should have.

I’ve used it before, there’s an example over here;

Progress Example

but could I get it to show up? No.

In the end, I decided that there was something broken in the prefab that makes up the progress ring and I switched from using the Solver Radial View to using the Solver Orbital script to manage how the progress ring moves around in front of the user & that seemed to largely get rid of my problems.

Partially, this was a challenge because I hit it at the time when I was struggling to get used to my new mode of debugging and I just couldn’t get this ring to show up.

In the end, I solved it by just making a test scene and watching how that behaved in the editor at runtime before applying that back to my real scene which is quite often how I seem to solve these types of problems in Unity.
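In code terms, the swap amounts to something like the sketch below, assuming MRTK V2’s SolverHandler, RadialView and Orbital component names (my actual change was made on the prefab in the inspector rather than in code, so this is purely illustrative);

```csharp
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class ProgressRingSetup
{
    // Swap the RadialView solver for an Orbital solver on the progress ring.
    public static void UseOrbital(GameObject progressRing)
    {
        var radialView = progressRing.GetComponent<RadialView>();
        if (radialView != null)
        {
            Object.Destroy(radialView);
        }

        // SolverHandler tracks the head/camera by default; Orbital then keeps
        // the object at a fixed offset from that tracked transform.
        if (progressRing.GetComponent<SolverHandler>() == null)
        {
            progressRing.AddComponent<SolverHandler>();
        }
        var orbital = progressRing.AddComponent<Orbital>();
        orbital.LocalOffset = new Vector3(0.0f, 0.0f, 1.5f);
    }
}
```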

Challenge 6 – UDP Multicasting on Emulators and HoloLens

I chose to use UDP multicasting as a way for one device to notify others on the same network that it had a new model for them to potentially share.

This seemed like a reasonable choice but it can make things challenging to debug as I have a single HoloLens and have never been sure whether a HoloLens emulator can/can’t participate in UDP multicasting or whether there are any settings that can be applied to the virtual machine to make that work.

I know that when I wrote this post I’d failed to get multicasting working on the emulator and this time around I tried a few combinations before giving up and writing a test-harness for my PC to act as a ‘mock’ HoloLens, able to generate/record/playback the messages exchanged with the real HoloLens.

I’ve noticed over time a number of forum posts asking whether a HoloLens can receive UDP traffic at all such as;

and there are more.

I can certainly verify that a UWP app on HoloLens can send/receive UDP multicast traffic. That said, I have seen situations where my current device (running RS5) gets into a state where UDP traffic fails to be delivered into my application until I reboot the device. I’ve seen it only very occasionally, but more than once, so it might be worth bearing in mind for anyone trying to debug similar code on similar versions.
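For reference, the basic send/receive shape for that multicast announcement is something like the sketch below – the group address, port and message format here are placeholders, not the values the app actually uses;

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch of the multicast announcement pattern.
public static class MulticastSketch
{
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.1");
    const int Port = 49152;

    // Announce a new model to everyone listening on the multicast group.
    public static async Task AnnounceAsync(string modelId)
    {
        using (var client = new UdpClient())
        {
            byte[] message = Encoding.UTF8.GetBytes($"NEW-MODEL:{modelId}");
            await client.SendAsync(message, message.Length, new IPEndPoint(Group, Port));
        }
    }

    // Join the group and wait for one announcement.
    public static async Task<string> ReceiveOneAsync()
    {
        using (var client = new UdpClient(Port))
        {
            client.JoinMulticastGroup(Group);
            UdpReceiveResult result = await client.ReceiveAsync();
            return Encoding.UTF8.GetString(result.Buffer);
        }
    }
}
```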

Closing Off

I learned quite a lot in putting this little test application together – enough to think it was worth opening up my blog and writing down some of the links so that I (or others) can find them again in the future.

If you’ve landed here via search or have read the whole thing ( ! ) then I hope you found something useful.

I’m not sure yet whether this one-off-post is the start of me increasing the frequency of posting here so don’t be too surprised if this blog goes quiet again for a while but do feel very free to reach out if I can help around these types of topics and, of course, feel equally free to point out where I’ve made mistakes & I’ll attempt to fix them 🙂

Update – One Last Thing (Challenge 7), FileOpenPicker, Suspend/Resume and SpeechRecognizer

Finding a Suspend/Resume Problem with Speech

I’d closed off this blog post and published it to my blog and I’d shipped version 2.0 of my app to the Store when I came across an extra “challenge” in that I noticed that my voice commands seemed to be working only part of the time and, given that the app is driven by voice commands, that seemed like a bit of a problem.

It took me a little while to figure out what was going on because I took the app from the Store and installed it and opened up a model using the “open” command and all was fine but then I noticed that I couldn’t then use the “open” command for a second time or the “reset” command for a first time.

Naturally, I dusted the code back off and rebuilt it in debug mode and tried it out and it worked fine.

So, I rebuilt in release mode and got mixed results in that sometimes things worked and other times they didn’t, and it took me a while to realise that it was the debugger that was making the difference. With the debugger attached, everything worked as I expected but, when running outside of the debugger, I would find that the voice commands only worked until the FileOpenPicker had been on the screen for the first time. Once that dialog had been shown, the voice commands no longer worked and that was true whether a file had been selected or whether the dialog had simply been cancelled.

So, what’s going on? Why would putting a file dialog onto the screen cause the application’s voice commands to break and only when the application was not running under a debugger?

The assumption that I made was that the application was suffering from a suspend/resume problem – that the opening of the file dialog was causing my application to suspend (somehow breaking its voice commands) such that, when my application resumed after the dialog had been dismissed, the voice commands were broken.

Why would my app suspend/resume just to display a file picker? I’d noticed previously that there is a file dialog process running on HoloLens so perhaps it’s fair to assume/guess that opening a file involves switching to another app altogether and, naturally, that might mean that my application suspends during that process.

I remember that file pickers worked this way on the phone implementations too and (if I remember correctly) that separate-process model on phones was the reason why the UWP ended up with AndContinue() style APIs in the early days when the phone and PC platforms were being unified.

Taking that assumption further – it’s well known that when you are debugging a UWP app in Visual Studio the “Process Lifecycle Management” (PLM) events are disabled by the debugger. That’s covered in the docs here and so I could understand why my app might be working in the debugger and not working outside of the debugger.

That said, I did find that my app still worked when I manually used the debugger’s capability to suspend/resume (via the toolbar), which was a bit of a surprise as I expected it to break, but I was fairly convinced by now that my problem was due to suspend/resume.

So, it seems like I have a suspend/resume problem. What to do about it?

Resolving the Suspend/Resume Problem with Speech

My original code was using speech services provided by the Mixed Reality Toolkit’s SpeechInputSource.cs and SpeechInputHandler.cs utilities and I tried quite a few experiments around enabling/disabling these around suspend/resume events from the system but I didn’t find a recipe that made them work.

I took away my use of that part of the MRTK and started directly using SpeechRecognizer myself so that I had more control of the code & I kept that code as minimal as possible.

I still hit problems. My code was organised around spinning up a single SpeechRecognizer instance, keeping hold of it and repeatedly asking it via the RecognizeAsync() method to recognise voice commands.

I would find that this code would work fine until the process had suspended/resumed and then it would break. Specifically, the RecognizeAsync() code would return Status values of Success and Confidence values of Rejected.

So, it seemed that having a SpeechRecognizer kicking around across suspend/resume cycles wasn’t the best strategy and I moved to an implementation which takes the following approach;

  • instantiate SpeechRecognizer
  • add to its Constraints collection an instance of SpeechRecognitionListConstraint
  • compile the constraints via CompileConstraintsAsync
  • call RecognizeAsync making a note of the Text result if the API returns Success and confidence is Medium/High
  • Dispose of the SpeechRecognizer and repeat regardless of whether RecognizeAsync returns a relevant value or not

and the key point seemed to be to avoid keeping a SpeechRecognizer instance around in memory and repeatedly calling RecognizeAsync on it expecting that it would continue to work across suspend/resume cycles.
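The steps above can be sketched roughly as follows – illustrative only, with error handling and cancellation omitted;

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

// Sketch of the 'fresh recognizer per recognition' loop described above.
public class SpeechLoopSketch
{
    public async Task RunAsync(IEnumerable<string> commands, Action<string> onCommand)
    {
        while (true)
        {
            using (var recognizer = new SpeechRecognizer())
            {
                // Constrain recognition to the app's known list of commands.
                recognizer.Constraints.Add(new SpeechRecognitionListConstraint(commands));
                await recognizer.CompileConstraintsAsync();

                SpeechRecognitionResult result = await recognizer.RecognizeAsync();

                if (result.Status == SpeechRecognitionResultStatus.Success &&
                    (result.Confidence == SpeechRecognitionConfidence.High ||
                     result.Confidence == SpeechRecognitionConfidence.Medium))
                {
                    onCommand(result.Text);
                }
            }
            // Dispose and loop - a brand new SpeechRecognizer each time
            // survived suspend/resume for me where a long-lived one did not.
        }
    }
}
```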

I tried that out, it seems to work & I shipped it off into the Store as a V3.0.

I have to admit that it didn’t feel like a very scientific approach to getting something to work – it was more trial and error so if someone has more detail here I’d welcome it but, for the moment, it’s what I settled on.

One last point…

Debugging this Scenario

One interesting part of trying to diagnose this problem was that I found the Unity debugger to be quite helpful.

I found that I could do a “script debugging” build from within Unity and then run that up on my device. I could then use my first speech command to open/cancel the file picker dialog before attaching Unity’s script debugger to that running instance in order to take a look around the C# code and see how opening/cancelling the file dialog had impacted my code that was trying to handle speech.

In some fashion, I felt like I was then debugging the app (via Unity) without really debugging the app (via Visual Studio). It could be a false impression but, ultimately, I think I got it working via this route 🙂

“Hello World” Mixed Reality Demo from the UK TechKnowDay Event 2018

I had the privilege to be invited to speak at the UK TechKnowDay Event today as part of International Women’s Day;

and I went along with my colleague, Pete, and talked to the attendees about Windows Mixed Reality.

As part of that, I’d put together a very simple “Hello World” demo involving a 3D model of an avatar who appeared when air-tapped on a HoloLens and then fell with a parachute to the floor. This is really just a way of showing the basics of using Unity, the Mixed Reality Toolkit and Visual Studio to make something that runs on HoloLens and blends the digital with the physical.

At the event, we shortened the demo because we were running a little low on time and so I promised to include the materials on the web somewhere and that’s what this post is about.

First, I made 3 models using Paint3D and so I wanted to include that little video here – it’s intended to be spoken over so there’s no audio on it;

and then there’s a little video showing me working through in Unity to bring in the assets from Paint3D and add some very, very limited interactivity to them using Unity and the Mixed Reality Toolkit.

The way the app is supposed to work is that an air tap will cause the creation of an instance of the avatar. She will then fall under (reduced) gravity, landing on a surface, at which point her parachute should disappear, and then she might sort of ‘snowboard’ to a stop, at which point her snowboard should also disappear 🙂

I’m not sure that anyone would want this coding masterpiece 🙂 but if they did then it’s on github over here;

Feel very free to re-use, share or whatever you like with this if it’s of use to you.