Simple Shared Holograms with Photon Networking (Part 2)

Following on from the previous post, it’s time to get some holograms onto the screen and make them moveable by the user.

The “easiest” way to do this would seem to be to add a voice command such that a hologram is created when the user issues a keyword, and the easiest thing to create is (always!) a cube, so I started there.

Adding a Voice Command

Adding voice commands is pretty easy with the MRTK.

I went to the Input section of my MRTK profile, cloned the speech commands profile and added in a new “Cube” keyword as below;

and then I added an instance of Speech Input Handler to my Root object as below and wired it up to a new empty method on my PhotonScript named OnCreateCube;

Representing the Root Object

When developing with anchors, it’s always “nice” to have a representation of “where” the anchor is in space and whether it’s been created, located etc.

In my previous post, my anchor was simply represented by a blue beam running through the centre of the anchor location, so I improved this slightly so that the Root object now contains some 3D axes;

and I also changed the code to add materials such that I could change the colour of the sphere to indicate the anchor status. It starts off white but then;

  • if the anchor is created, it turns blue
  • if the anchor is located, it turns green
  • if there’s an error, it turns red

I was surprised by how useful it is to run the app, see the axes appear at 0,0,0 on my head and then watch the sphere turn green and the axes jump around in space to their previously anchored location – it’s really handy to have a cheap visualisation.
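
As a rough illustration of that status visualisation, the colour change is just a material colour swap driven by the anchor status – something like the sketch below, where the enum and the names are my own for illustration rather than the exact code in the project;

using UnityEngine;

// A minimal sketch of driving the status sphere's colour from the anchor status.
// The enum and the "status sphere" Renderer field here are assumptions made purely
// for illustration.
public class AnchorStatusIndicator : MonoBehaviour
{
    public enum AnchorStatus { None, Created, Located, Error }

    [SerializeField]
    Renderer statusSphereRenderer;

    public void SetStatus(AnchorStatus status)
    {
        switch (status)
        {
            case AnchorStatus.Created:
                this.statusSphereRenderer.material.color = Color.blue;
                break;
            case AnchorStatus.Located:
                this.statusSphereRenderer.material.color = Color.green;
                break;
            case AnchorStatus.Error:
                this.statusSphereRenderer.material.color = Color.red;
                break;
            default:
                this.statusSphereRenderer.material.color = Color.white;
                break;
        }
    }
}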

Creating Objects

Now I just need something to create, so I made a simple prefab – a cube scaled to 0.2 x 0.2 x 0.2 along with some MRTK scripts to make it moveable, namely BoundingBox, ManipulationHandler and NearInteractionGrabbable;

Note that the prefab also has the PhotonView component on it so as to make it possible to instantiate this prefab with Photon as a “networked” object.

With that in place, I can add a field to my PhotonScript to store this prefab and then instantiate it in response to the “Cube!” voice command;

    public void OnCreateCube()
    {
        // Position it down the gaze vector
        var position = Camera.main.transform.position + Camera.main.transform.forward.normalized * 1.2f;

        // Create the cube
        var cube = PhotonNetwork.InstantiateSceneObject(this.cubePrefab.name, position, Quaternion.identity);
    }

and that all works quite nicely and I’m creating cubes. My intention in using InstantiateSceneObject is to have those cubes “owned” by the scene rather than by a particular player, so I’m hoping that they will stick around when the player who created them leaves the room.

Parenting Objects Created by Photon

In the editor, though, I noticed that those cubes are being created without a parent when I really want them parented under my Root object, as that is the object which will be anchored so as to sit in the same physical position across devices;

It would be fairly easy for me to grab the return value from PhotonNetwork.InstantiateSceneObject and change the parent relationship but that’s not going to help me if these objects are being created over the network from another user on another device so I need to try a different approach.

It turns out that I can hook into the instantiation of a networked object by implementing the IPunInstantiateMagicCallback ( ! ) interface and so I wrote an (ugly) script called CubeScript which I attached to my prefab in an attempt to pick up the newly created object and parent it back under the Root object in my scene;

using Photon.Pun;
using UnityEngine;

public class CubeScript : MonoBehaviour, IPunInstantiateMagicCallback
{
    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        var parent = GameObject.Find("Root");

        this.transform.SetParent(parent.transform, true);
    }
}

Clearly, I need to come up with a better way of doing that than by using GameObject.Find(), but this let me experiment.

I deployed that application to a HoloLens, ran it up, created a few cubes, then shut it down and ran it back up again. Sure enough, the cubes came back in the real world where they were originally created, so my assumption is that they would be visible in the same place in the real world to a second, third, etc. HoloLens user of the app.

However, I’ve got scripts on this cube which allow the user to translate, rotate and scale these holograms and, as yet, there’s nothing synchronising those changes to the network. That means that if I create a hologram at point x,y,z in physical space and then move it to x1,y1,z1 then another user will not see those changes on their device. Similarly, if I re-run the application on the first device, I will see the hologram back at x,y,z. That needs attention…

Synchronising Hologram Transformations

There seemed to be an obvious way to do this transform sync’ing with Photon and it was calling out to me from the moment that I added the PhotonView script to my prefab;

If I change this “Observed Components” value to point at the object itself then Photon nicely adds a PhotonTransformView for me;

and so it already (via the PhotonTransformView) knows how to synchronise scale, rotate and translate values across networked game objects.

But…I’m not sure that it’s going to do what I want here because, from looking at the script itself it is set up to synchronise the values of Transform.position, Transform.rotation and Transform.localScale.

The challenge with that is that “world” co-ordinates like this are going to correspond to different physical locations on multiple devices. For my scenario, I have my Root object which is spatially anchored to the same place in the real-world so providing a common “origin” for anything parented under it. That means that I need to then synchronise the co-ordinates of my cubes relative to that Root parent object.

That caused me to look for a “local” flag on PhotonTransformView, or perhaps a separate PhotonTransformLocalView or similar, and I didn’t find one so I simply made one by copying the original script and changing the transform.position and transform.rotation references to localPosition and localRotation instead, and I configured that on my prefab;
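
In case it’s useful, a heavily cut-down sketch of what that boils down to is below – the real PhotonTransformView also does interpolation and smoothing, which this sketch ignores;

using Photon.Pun;
using UnityEngine;

// A cut-down sketch of a "local" transform view: synchronise localPosition,
// localRotation and localScale so that values stay relative to the anchored Root
// parent. The real PhotonTransformView also interpolates incoming values; this
// sketch simply snaps to them.
public class PhotonTransformLocalViewSketch : MonoBehaviour, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // The owner sends its local transform values...
            stream.SendNext(this.transform.localPosition);
            stream.SendNext(this.transform.localRotation);
            stream.SendNext(this.transform.localScale);
        }
        else
        {
            // ...and non-owners apply them.
            this.transform.localPosition = (Vector3)stream.ReceiveNext();
            this.transform.localRotation = (Quaternion)stream.ReceiveNext();
            this.transform.localScale = (Vector3)stream.ReceiveNext();
        }
    }
}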

I then tested this by running the application on my HoloLens and in the editor at the same time and noticed an “interesting” thing – cubes would be created ok but movement would only be sync’d from the device that created them, not from the other device.

I’d kind of expected this, as Photon talks a lot about “ownership” of these networked objects. If you look at the description for RequestOwnership on this page then you’ll see that the “owner” of the object is the client that sends updates to its position, which implies that non-owners do not.

In configuring my PhotonView, I’d set the “Ownership Transfer” option to “Takeover”, intending that anyone could take over any object they liked, but that wasn’t quite enough on its own to make this work.

Photon Object-Control on Focus

I wasn’t sure whether I could actually tell Photon to “not care” about ownership (I suspect not) so, rather than trying to do that, I simply coded around it by calling RequestOwnership on any cube whenever the user focused on it.

So, I modified my CubeScript such that it now looked like;

using Microsoft.MixedReality.Toolkit.Input;
using Photon.Pun;
using UnityEngine;

public class CubeScript : MonoBehaviour, IPunInstantiateMagicCallback, IMixedRealityFocusHandler
{
    public void OnFocusEnter(FocusEventData eventData)
    {
        // ask the photonview for permission
        var photonView = this.GetComponent<PhotonView>();

        photonView?.RequestOwnership();
    }

    public void OnFocusExit(FocusEventData eventData)
    {
    }

    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        var parent = GameObject.Find("Root");

        this.transform.SetParent(parent.transform, true);
    }
}

and this seemed to work fine for my scenario – I could move the cube on the HoloLens and I could move it in the editor and those movements were sync’d to the other device.

However, I noticed another challenge – my cubes still weren’t always where I expected them to be…

If a cube transforms in an empty room…

By this point, I was running through a set of steps as below;

  • Run the app on HoloLens to create the room and the anchor
  • Create some cubes
  • Run up the app on the editor
  • Test to see that cubes could be transformed on the HoloLens and sync’d to the editor and vice versa
  • Quit the app on the HoloLens and re-run it to check that it would join the room, locate the anchor and put the cubes back where I left them
  • Test again to see that cubes could be transformed on the HoloLens and sync’d to the editor and vice versa

and all was good – everything there seemed to work fine.

Where I had a problem though was in the scenario where a user was alone in the room. In that scenario, I found that leaving/joining the room would result in cubes with transforms reset to their starting values – i.e. any transformations that had been performed on the cube since it was created were lost. I would see the same whether I tried this out from the HoloLens or from the editor.

Initially, I thought that this related to Photon automatically clearing out the events associated with a player when they left the room and so I updated my room creation code to set the CleanupCacheOnLeave option to be false;

    public override void OnConnectedToMaster()
    {
        base.OnConnectedToMaster();

        var roomOptions = new RoomOptions();
        roomOptions.EmptyRoomTtl = this.emptyRoomTimeToLiveSeconds * 1000;
        roomOptions.CleanupCacheOnLeave = false;
        PhotonNetwork.JoinOrCreateRoom(ROOM_NAME, roomOptions, null);
    }

but this seemed to make no difference.

I spent a little time debugging and ultimately confirmed my thought that Photon does not send these messages out to a room of one player. This is from the PhotonNetworkPart.cs script;

and that (very nice and extremely helpful) comment also told me that these OnSerialize messages aren’t buffered by Photon.

Now, if I’d read between the lines a little more carefully on the Photon documentation page;

Synchronization and State

Then I think I would have known this all along because it does point out that RPCs can be buffered but that object synchronizations are not;

” Unlike Object Synchronization, RPCs might be buffered. Any buffered RPC will be sent to players who join later, which can be useful if actions have to be replayed one after another. For example a joining client can replay how someone placed a tool in the scene and how someone else upgraded it. The latter depends on the first action. “

This means that when a user joins the room, they will only get the correct current transforms for any cubes if there is another user in the room that is sending those transforms out to the network. Additionally, I think this is also dependent on the ViewSynchronization setting – see the piece in that referenced document that talks about “unreliable” versus “unreliable on change”, which details when updates are sent to the network.
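
For reference, that setting lives on the PhotonView itself and, as well as being chosen in the inspector, it can be set from code – a tiny sketch;

using Photon.Pun;
using UnityEngine;

// A tiny sketch: "UnreliableOnChange" means OnSerialize updates are only sent while
// the observed values are actually changing. This can equally just be set in the
// PhotonView inspector rather than from code.
public class ConfigureViewSynchronization : MonoBehaviour
{
    void Awake()
    {
        var photonView = this.GetComponent<PhotonView>();
        photonView.Synchronization = ViewSynchronization.UnreliableOnChange;
    }
}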

That’ll teach me to read the manual properly next time 🙂

Frequencies and Buffering…

It’d be fairly easy to replace the functionality that the PhotonTransformLocalView is currently providing for me with Photon RPCs that could be buffered, but I might then fall into the trap of having lots (probably too many) RPCs buffered every time the user scales, rotates or moves an object. That’s not likely to be a great choice – I’d perhaps rather rely on the unbuffered behaviour that I have already.
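
For reference, that buffered-RPC approach might look roughly like the sketch below – this is just to illustrate the idea I decided against, and the method names are made up for the example;

using Photon.Pun;
using UnityEngine;

// A sketch of the buffered-RPC approach that I decided against - every call to
// SendLocalTransform gets buffered and replayed to late joiners, which could pile
// up quickly while a user is manipulating an object.
public class BufferedTransformSender : MonoBehaviourPun
{
    public void SendLocalTransform()
    {
        this.photonView.RPC(
            nameof(SetLocalTransform),
            RpcTarget.AllBuffered,
            this.transform.localPosition,
            this.transform.localRotation,
            this.transform.localScale);
    }

    [PunRPC]
    void SetLocalTransform(Vector3 position, Quaternion rotation, Vector3 scale)
    {
        this.transform.localPosition = position;
        this.transform.localRotation = rotation;
        this.transform.localScale = scale;
    }
}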

What I really want is some sort of “buffered latest” option such that only the last update sent is buffered, but I don’t know that Photon has that type of functionality.

So, for the scenario where more than one user is in a room manipulating holograms, I’m going to keep the immediacy offered by my PhotonTransformLocalView.

For the scenario where users leave the room and return, I need to have some other approach and I thought that I would go back to using custom properties on the room with the idea being to;

  • watch for when a user completes a manipulation and use that to update a custom property on the room, using the ID of the PhotonView as the key for the property and a string value representing localPosition, localScale, localRotation as the value of the property.

then, whenever Photon instantiates a cube, I can check to see if this property is present for that cube’s PhotonView ID and, if so, apply these local transform values.

What does that do to my code? Firstly, I set up the scripts on my cube objects such that they handled the end of manipulations.

I did this for my BoundingBox for Rotate Stopped and Scale Stopped;

and I also did it for ManipulationHandler;

Why do this in both places? Because of this MRTK “issue”;

OnManipulationEnded doesn’t fire for rotation or scale

With that in place, I can use that event to serialize the local transform and put it into a custom property on the room;

    string ViewIDAsString => this.GetComponent<PhotonView>().ViewID.ToString();

    public void OnManipulationEnded()
    {
        var photonView = this.GetComponent<PhotonView>();

        if (photonView != null)
        {
            var transformStringValue = LocalTransformToString(this.transform);

            PhotonNetwork.CurrentRoom.SetCustomProperties(
                new Hashtable()
                {
                    {  this.ViewIDAsString, transformStringValue }
                }
            );
        }
    }

I’ll spare you the details of the LocalTransformToString method – it’s just capturing position, rotation and scale into a string.
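
If you want a rough idea of the shape of those helpers, a minimal sketch is below – the comma-separated format is purely my assumption, as the real methods aren’t shown in this post;

using UnityEngine;

// A minimal sketch of what these helpers might look like - the comma-separated format
// is an assumption made for illustration rather than the code in the actual project.
static class TransformSerializationSketch
{
    public static string LocalTransformToString(Transform transform)
    {
        var p = transform.localPosition;
        var r = transform.localRotation;
        var s = transform.localScale;

        return string.Join(",",
            p.x, p.y, p.z,
            r.x, r.y, r.z, r.w,
            s.x, s.y, s.z);
    }

    public static void StringToLocalTransform(Transform transform, string value)
    {
        var values = System.Array.ConvertAll(value.Split(','), float.Parse);

        transform.localPosition = new Vector3(values[0], values[1], values[2]);
        transform.localRotation = new Quaternion(values[3], values[4], values[5], values[6]);
        transform.localScale = new Vector3(values[7], values[8], values[9]);
    }
}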

Then, when Photon instantiates a networked cube, I can add a little extra code to the reparenting method that I already had, in order to access the custom property value from the room and use it to put the transform on the cube back to how it was at the last recorded manipulation;

    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        var parent = GameObject.Find("Root");
        this.transform.SetParent(parent.transform, true);

        // Do we have a stored transform for this cube within the room?
        if (PhotonNetwork.CurrentRoom.CustomProperties.Keys.Contains(this.ViewIDAsString))
        {
            var transformStringValue = PhotonNetwork.CurrentRoom.CustomProperties[this.ViewIDAsString] as string;

            StringToLocalTransform(this.transform, transformStringValue);
        }
    }

and that seems to work out pretty nicely – using the PhotonTransformView for the non-buffered, frequently changing values and using “buffered” custom room properties for values that will change less frequently.

Wrapping Up

As always, I learned a few things while trying to put this post and the previous one together and, mainly, I learned about Photon because I don’t have a tonne of familiarity with it.

That said, getting the basics of a shared holographic experience up and running wasn’t too difficult and, if I needed to spin up another example, those learnings would mean that I could get back to it pretty quickly.

I put the Unity project here on github in case you (or a future version of me) wanted to do anything with it – naturally, apply a pinch of salt as I put it together purely for the experiments in this post. Just one note – the keys for Azure Spatial Anchors embedded in that project won’t work; you’ll need to provide your own configuration.

Simple Shared Holograms with Photon Networking (Part 1)

I’ve written a lot in the past about shared holograms and I’ve also written about Photon networking a couple of years ago;

Experiments with Shared Holographic Experiences and Photon Unity Networking

but I recently was working through this new tutorial around shared experiences with Photon;

Multi-user Capabilities Tutorials

and, while it’s great, it had me wondering what the minimal set of pieces might be for getting a shared experience up and running on the current versions of Unity, the Mixed Reality Toolkit for Unity and HoloLens, so I set about trying that out and I’m going to jot down notes here in case anyone is looking at this for the first time.

What was good to see is that it’s surprisingly simple to get to the point where you have shared holograms using a networking technology like Photon.

Let’s get going.

Versions

I am using the following pieces;

Making a Project

To start with, I made a new 3D project in Unity using the “New Project” dialog box;

I then set this up for Mixed Reality development by doing what I think of as the bare minimum;

  • Switch the platform to UWP
  • Switch on Virtual Reality SDK support
  • Set some items in the application manifest

that means using this dialog (File->Build Settings);

and this set of settings (Player Settings->XR Settings);

and this set of settings (Player Settings->Publishing Settings);

and, with that, my project is good to go.

Importing Toolkits

I then used the regular right mouse menu on my Project Assets to import the MRTK Foundation package downloaded from the previous link as below;

note that I import all of the assets here to get all of the toolkit. I then used the Mixed Reality Toolkit->Add to Scene and Configure menu and selected the default HoloLens profile;

I then went to the Unity Asset Store and searched for “PUN” to find this package which I then downloaded and imported;

When it came to the import here I wasn’t 100% sure that I needed all of PUN and so I deselected PhotonChat as I don’t think I need it;

and imported that into my application.

Importing Azure Spatial Anchors

In order to have a shared holographic experience, it’s crucial to establish a common co-ordinate system across the participating devices.

There are lots of ways of establishing a shared co-ordinate system across MR devices, with perhaps the usual one being to use a spatial anchor.

When I originally wrote about shared holograms with PUN in this blog post, I ended up writing a bunch of code to share spatial anchors using Azure blob storage because PUN doesn’t make it so easy to pass around large binary objects.

Here in 2019, though, we’ve got new options and Azure Spatial Anchors to help out with the heavy lifting and so I wanted to make use of Azure Spatial Anchors here to establish a common co-ordinate system.

I wrote a little about Azure Spatial Anchors here;

Baby Steps with the Azure Spatial Anchors Service

and so I won’t repeat everything that I said but will, instead, try to keep to the minimum needed to get what I want up and running.

In that blog post, I talked about how I take only these pieces (5 files) from the official SDK sample here;

But I wanted a simpler wrapper class to make this “easier” for my code to work with and so I wrote the class below. Note that this most definitely makes some very simplistic trade-offs, sacrificing failure and error handling for simplicity, especially in a couple of places;

  • By not following the guidance and providing callbacks here such that an app using this class can provide UI to tell the user what to do to improve the capture of an anchor
  • By potentially having loops that might execute forever – I’m being very optimistic here.

With those caveats in place, here’s my little helper class;

using Microsoft.Azure.SpatialAnchors;
using Microsoft.MixedReality.Toolkit.Utilities;
using System;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.XR.WSA;

namespace AzureSpatialAnchors
{
    public class AzureSpatialAnchorService : MonoBehaviour
    {
        [Serializable]
        public class AzureSpatialAnchorServiceProfile
        {
            [SerializeField]
            [Tooltip("The account id from the Azure portal for the Azure Spatial Anchors service")]
            string azureAccountId;
            public string AzureAccountId => this.azureAccountId;

            [SerializeField]
            [Tooltip("The access key from the Azure portal for the Azure Spatial Anchors service (for Key authentication)")]
            string azureServiceKey;
            public string AzureServiceKey => this.azureServiceKey;
        }

        [SerializeField]
        [Tooltip("The configuration for the Azure Spatial Anchors Service")]
        AzureSpatialAnchorServiceProfile profile = new AzureSpatialAnchorServiceProfile();
        public AzureSpatialAnchorServiceProfile Profile => this.profile;

        TaskCompletionSource<CloudSpatialAnchor> taskWaitForAnchorLocation;

        CloudSpatialAnchorSession cloudSpatialAnchorSession;

        public AzureSpatialAnchorService()
        {
        }
        public async Task<string> CreateAnchorOnObjectAsync(GameObject gameObjectForAnchor)
        {
            string anchorId = string.Empty;
            try
            {
                this.StartSession();

                var worldAnchor = gameObjectForAnchor.GetComponent<WorldAnchor>();

                if (worldAnchor == null)
                {
                    throw new ArgumentException("Expected a world anchor on the game object parameter");
                }

                // Note - these next 2 waits are highly dubious as they may never happen so
                // a real world solution would have to do more but I'm trying to be 
                // minimal here
                await new WaitUntil(() => worldAnchor.isLocated);

                // As per previous comment.
                while (true)
                {
                    var status = await this.cloudSpatialAnchorSession.GetSessionStatusAsync();

                    if (status.ReadyForCreateProgress >= 1.0f)
                    {
                        break;
                    }
                    await Task.Delay(250);
                }
                var cloudAnchor = new CloudSpatialAnchor();

                cloudAnchor.LocalAnchor = worldAnchor.GetNativeSpatialAnchorPtr();

                await this.cloudSpatialAnchorSession.CreateAnchorAsync(cloudAnchor);

                anchorId = cloudAnchor?.Identifier;
            }
            catch (Exception ex) // TODO: reasonable exceptions here.
            {
                Debug.Log($"Caught {ex.Message}");
            }
            return (anchorId);
        }
        public async Task<bool> PopulateAnchorOnObjectAsync(string anchorId, GameObject gameObjectForAnchor)
        {
            bool anchorLocated = false;

            try
            {
                this.StartSession();

                this.taskWaitForAnchorLocation = new TaskCompletionSource<CloudSpatialAnchor>();

                var watcher = this.cloudSpatialAnchorSession.CreateWatcher(
                    new AnchorLocateCriteria()
                    {
                        Identifiers = new string[] { anchorId },
                        BypassCache = true,
                        Strategy = LocateStrategy.AnyStrategy,
                        RequestedCategories = AnchorDataCategory.Spatial
                    }
                );

                var cloudAnchor = await this.taskWaitForAnchorLocation.Task;

                anchorLocated = cloudAnchor != null;

                if (anchorLocated)
                {
                    gameObjectForAnchor.GetComponent<WorldAnchor>().SetNativeSpatialAnchorPtr(cloudAnchor.LocalAnchor);
                }
                watcher.Stop();
            }
            catch (Exception ex) // TODO: reasonable exceptions here.
            {
                Debug.Log($"Caught {ex.Message}");
            }
            return (anchorLocated);
        }
        /// <summary>
        /// Start the Azure Spatial Anchor Service session
        /// This must be called before calling create, populate or delete methods.
        /// </summary>
        public void StartSession()
        {
            if (this.cloudSpatialAnchorSession == null)
            {
                Debug.Assert(this.cloudSpatialAnchorSession == null);

                this.ThrowOnBadAuthConfiguration();
                // setup the session
                this.cloudSpatialAnchorSession = new CloudSpatialAnchorSession();
                // set the Azure configuration parameters
                this.cloudSpatialAnchorSession.Configuration.AccountId = this.Profile.AzureAccountId;
                this.cloudSpatialAnchorSession.Configuration.AccountKey = this.Profile.AzureServiceKey;
                // register event handlers
                this.cloudSpatialAnchorSession.Error += this.OnCloudSessionError;
                this.cloudSpatialAnchorSession.AnchorLocated += OnAnchorLocated;
                this.cloudSpatialAnchorSession.LocateAnchorsCompleted += OnLocateAnchorsCompleted;

                // start the session
                this.cloudSpatialAnchorSession.Start();
            }
        }
        /// <summary>
        /// Stop the Azure Spatial Anchor Service session
        /// </summary>
        public void StopSession()
        {
            if (this.cloudSpatialAnchorSession != null)
            {
                // stop session
                this.cloudSpatialAnchorSession.Stop();
                // clear event handlers
                this.cloudSpatialAnchorSession.Error -= this.OnCloudSessionError;
                this.cloudSpatialAnchorSession.AnchorLocated -= OnAnchorLocated;
                this.cloudSpatialAnchorSession.LocateAnchorsCompleted -= OnLocateAnchorsCompleted;
                // cleanup
                this.cloudSpatialAnchorSession.Dispose();
                this.cloudSpatialAnchorSession = null;
            }
        }
        void OnLocateAnchorsCompleted(object sender, LocateAnchorsCompletedEventArgs args)
        {
            Debug.Log("On Locate Anchors Completed");
            Debug.Assert(this.taskWaitForAnchorLocation != null);

            if (!this.taskWaitForAnchorLocation.Task.IsCompleted)
            {
                this.taskWaitForAnchorLocation.TrySetResult(null);
            }
        }
        void OnAnchorLocated(object sender, AnchorLocatedEventArgs args)
        {
            Debug.Log($"On Anchor Located, status is {args.Status} anchor is {args.Anchor?.Identifier}, pointer is {args.Anchor?.LocalAnchor}");
            Debug.Assert(this.taskWaitForAnchorLocation != null);

            this.taskWaitForAnchorLocation.SetResult(args.Anchor);
        }
        void OnCloudSessionError(object sender, SessionErrorEventArgs args)
        {
            Debug.Log($"On Cloud Session Error: {args.ErrorMessage}");
        }
        void ThrowOnBadAuthConfiguration()
        {
            if (string.IsNullOrEmpty(this.Profile.AzureAccountId) ||
                string.IsNullOrEmpty(this.Profile.AzureServiceKey))
            {
                throw new ArgumentNullException("Missing required configuration to connect to service");
            }
        }
    }
}

It’s perhaps worth saying that while I packaged this as a MonoBehaviour here, I have other variants of this code that would package it as a Mixed Reality extension service which would make it available across the entire application rather than to a set of components that happen to be configured on a particular GameObject. In this case, I went with a MonoBehaviour and configured this into my scene as below;

Setting up PUN to Run with a Cloud Session

It’s possible to use PUN with either a locally hosted server or a cloud server but I want things to be simple with minimal configuration, so I decided to run from the cloud.

With that in mind, I visited the PUN portal here;

https://dashboard.photonengine.com/en-US/publiccloud

and used the Create New App button to create a new app of type Photon PUN with a suitable name. I did not fill in the description or Url properties. I called it TestApp and you can see that the portal then gives me a GUID to represent that app as below;

Once the Photon package has imported into Unity, it conveniently pops up a dialog where I can enter this GUID to link the app with that cloud instance;

and that’s all I need to have this up and running.

Getting a Connection

Getting a connection is dead simple. There are, no doubt, a million options that you can use but all I did was create an empty GameObject in my scene (named Root) and then write a script which inherits from the PUN base class MonoBehaviourPunCallbacks, which provides overrides for network events, and so I wrote;

    public class PhotonScript : MonoBehaviourPunCallbacks
    {
        void Start()
        {
            PhotonNetwork.ConnectUsingSettings();
        }
        public override void OnConnectedToMaster()
        {
            base.OnConnectedToMaster();
        }
    }

and trying that out in the editor and seeing it run through in the debugger all seemed to be working nicely.

Getting a Room

The “boundary” for communications in PUN seems to be the “room” and, as you might expect, there’s potential for a lot of functionality, capability and configuration around picking rooms and, optionally, using “lobbies” to select these rooms.

For my purposes, I’m going to pretend that none of this matters and it’s ok to just hard-code a room to avoid any of these additional steps.

Consequently, I can write some simple code to create or join a room once the network connection is made;

    public class PhotonScript : MonoBehaviourPunCallbacks
    {
        void Start()
        {
            PhotonNetwork.ConnectUsingSettings();
        }
        public override void OnConnectedToMaster()
        {
            base.OnConnectedToMaster();

            PhotonNetwork.JoinOrCreateRoom("HardCodedRoom", null, null);
        }
        public override void OnJoinedRoom()
        {
            base.OnJoinedRoom();
        }
        public override void OnCreatedRoom()
        {
            base.OnCreatedRoom();
        }
    }

and, again, in the debugger attached to the editor I can see both the OnCreatedRoom and OnJoinedRoom overrides being called so things seem fine.

If I run the code again, I see that the room is once again created and joined, and this comes down to the ‘time to live’ specified in the RoomOptions for the room. At the moment, my code does not pass any RoomOptions and so the room seems to get torn down pretty quickly, whereas I could leave the room ‘alive’ for longer if I changed that value.

There are also options on the room controlling whether objects created by a particular player are removed when that player leaves the room, and how much time needs to pass before a player is considered to have left the room. For my purposes, I’m not too worried about those details just yet, so I tidied up my script and simply set the time-to-live value on the room itself such that it would give me enough time to join the same room more than once from a single device if I needed to;

    public class PhotonScript : MonoBehaviourPunCallbacks
    {
        enum RoomStatus
        {
            None,
            CreatedRoom,
            JoinedRoom
        }

        public int emptyRoomTimeToLiveSeconds = 120;

        RoomStatus roomStatus = RoomStatus.None;

        void Start()
        {
            PhotonNetwork.ConnectUsingSettings();
        }
        public override void OnConnectedToMaster()
        {
            base.OnConnectedToMaster();

            var roomOptions = new RoomOptions();
            roomOptions.EmptyRoomTtl = this.emptyRoomTimeToLiveSeconds * 1000;
            PhotonNetwork.JoinOrCreateRoom(ROOM_NAME, roomOptions, null);
        }
        public async override void OnJoinedRoom()
        {
            base.OnJoinedRoom();

            if (this.roomStatus == RoomStatus.None)
            {
                this.roomStatus = RoomStatus.JoinedRoom;
            }
        }
        public async override void OnCreatedRoom()
        {
            base.OnCreatedRoom();

            this.roomStatus = RoomStatus.CreatedRoom;
        }
        static readonly string ROOM_NAME = "HardCodedRoomName";
    }

Establishing a Common Co-ordinate System

The heart of a shared holographic experience revolves around using some mechanism to establish a co-ordinate system that’s common across all the devices that are participating in the experience.

I want to do this using world anchors on HoloLens in conjunction with Azure Spatial Anchors in the cloud, which provides a mechanism to share the anchor from one device to another.

To keep things simple, I’m going to try and establish a 1:1 relationship between 1 anchor and 1 room in Photon. This probably isn’t realistic for a real-world application but it’s more than enough for my sample here.

The way that I want things to work is as below;

  • When a user creates the Photon room, it will be assumed that user should also create the spatial anchor and post it to the Azure Spatial Anchors (ASA) service.
  • When a user joins a room, it will be assumed that the user should attempt to find the anchor at the ASA service and import it if it exists and, otherwise, wait to be notified that changes have occurred and it should try that process again.

The object that I want to anchor in my scene is the Root object and I have added a WorldAnchor to it as below;

In terms of Photon, the mechanism that I chose to use to try and implement this was to use custom room properties as described here.

My process then becomes;

  • For a user who creates a room, ensure that the WorldAnchor on the Root object isLocated and then use the ASA pieces to create an Azure Spatial Anchor from it, getting the ID and storing it in a custom room property named anchorId
  • For a user who joins a room, check the room properties to look for a property named anchorId
    • If present, use ASA to download the anchor and import it to the WorldAnchor on the Root object
    • If not present, assume that we are too early in the process and wait for Photon to call OnRoomPropertiesUpdate, letting us know that the anchorId property has now been set by the user who created the room & we can now access the value, call the ASA service & get the anchor.

and so my script ended up looking as below;

using System;
using System.Threading.Tasks;
using AzureSpatialAnchors;
using ExitGames.Client.Photon;
using Photon.Pun;
using Photon.Realtime;

public class PhotonScript : MonoBehaviourPunCallbacks
{
    enum RoomStatus
    {
        None,
        CreatedRoom,
        JoinedRoom,
        JoinedRoomDownloadedAnchor
    }

    public int emptyRoomTimeToLiveSeconds = 120;

    RoomStatus roomStatus = RoomStatus.None;

    void Start()
    {
        PhotonNetwork.ConnectUsingSettings();
    }
    public override void OnConnectedToMaster()
    {
        base.OnConnectedToMaster();

        var roomOptions = new RoomOptions();
        roomOptions.EmptyRoomTtl = this.emptyRoomTimeToLiveSeconds * 1000;
        PhotonNetwork.JoinOrCreateRoom(ROOM_NAME, roomOptions, null);
    }
    public async override void OnJoinedRoom()
    {
        base.OnJoinedRoom();

        // Note that the creator of the room also joins the room...
        if (this.roomStatus == RoomStatus.None)
        {
            this.roomStatus = RoomStatus.JoinedRoom;
        }
        await this.PopulateAnchorAsync();
    }
    public async override void OnCreatedRoom()
    {
        base.OnCreatedRoom();
        this.roomStatus = RoomStatus.CreatedRoom;
        await this.CreateAnchorAsync();
    }
    async Task CreateAnchorAsync()
    {
        // If we created the room then we will attempt to create an anchor for the parent
        // of the cubes that we are creating.
        var anchorService = this.GetComponent<AzureSpatialAnchorService>();

        var anchorId = await anchorService.CreateAnchorOnObjectAsync(this.gameObject);

        // Put this ID into a custom property so that other devices joining the
        // room can get hold of it.
        PhotonNetwork.CurrentRoom.SetCustomProperties(
            new Hashtable()
            {
                { ANCHOR_ID_CUSTOM_PROPERTY, anchorId }
            }
        );
    }
    async Task PopulateAnchorAsync()
    {
        if (this.roomStatus == RoomStatus.JoinedRoom)
        {
            object keyValue = null;

            // First time around, this property may not be here so we see if is there.
            if (PhotonNetwork.CurrentRoom.CustomProperties.TryGetValue(
                ANCHOR_ID_CUSTOM_PROPERTY, out keyValue))
            {
                // If the anchorId property is present then we will try and get the
                // anchor but only once so change the status.
                this.roomStatus = RoomStatus.JoinedRoomDownloadedAnchor;

                // If we didn't create the room then we want to try and get the anchor
                // from the cloud and apply it.
                var anchorService = this.GetComponent<AzureSpatialAnchorService>();

                await anchorService.PopulateAnchorOnObjectAsync(
                    (string)keyValue, this.gameObject);
            }
        }
    }
    public async override void OnRoomPropertiesUpdate(Hashtable propertiesThatChanged)
    {
        base.OnRoomPropertiesUpdate(propertiesThatChanged);

        await this.PopulateAnchorAsync();
    }
    static readonly string ANCHOR_ID_CUSTOM_PROPERTY = "anchorId";
    static readonly string ROOM_NAME = "HardCodedRoomName";
}

At this point, in order to try and “test” whether this worked or not I added a simple elongated cube under my Root object in the scene;

with the aim being to try this out on a single HoloLens device by performing;

  1. Run the application standing in a particular position and orientation to establish an origin.
  2. Wait a little while for the anchor to get created and sync’d to ASA.
  3. Close the application.
  4. Step a few metres to one side.
  5. Re-run the application within 2 minutes to attempt to join the same room.
  6. Wait a little for the anchor to be downloaded and located.
  7. Expect that the blue bar will ‘jump’ to the position that it had at step 1 above.

and that worked out fine on the first run of the project, so I have the “core” of a shared holographic experience up and running on Photon, in that I can establish a common co-ordinate system across multiple devices with very little code indeed.

The next step (in the next blog post) would be to see if I can create some holograms and move them around…

Rough Notes on Porting “glTF Viewer” from Mixed Reality Toolkit (MRTK) V1 to MRTK V2 (RC2.1)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens or Azure Mixed Reality other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Around 6 months ago, I wrote a simple application for HoloLens 1 and published it to the Windows Store.

It’s called “glTF Viewer” and it provides a way to view models stored in glTF format on the HoloLens with basic move, rotate, scale manipulations. It also provides a way via which one user can put such a model onto their HoloLens, open it up and then share it automatically to other users on the same local network such that they will also be able to see the same model and the manipulations performed on it. This includes downloading the files for the model from the originating device and caching them onto the requesting device.

You can find the application in the store here;

glTF Viewer in the Microsoft Store

and you can find the original blogpost that I wrote about the process of writing this application here;

A Simple glTF Viewer for HoloLens

and you can find the source code for the application over here;

glTF Viewer on GitHub

I’d like to keep this application up to date and so with the arrival of MRTK V2 (release candidates) I thought that it would be a good idea to port the application over to MRTK V2 such that the application was “more modern” and better suited to work on HoloLens 2 when the device becomes available.

In doing that work, I thought it might be helpful to document the steps that I have taken to port this application and that’s what this blog post is all about – it’s a set of ‘rough notes’ made as I go through the process of moving the code from V1 to V2.

Before beginning, though, I want to be honest about the way in which I have gone about this port. What I actually did was;

  1. Begin the port thinking that I would write it up as I went along.
  2. Get bogged down in some technical details.
  3. Complete the port.
  4. Realise that I had not written anything much down.

So it was a bit of a failure in terms of writing anything down.

Consequently, what I thought that I would do is to revisit the process and repeat the port from scratch but, this time, write it down 🙂 as I went along.

That’s what the rest of this post is for – the step-by-step process of going from MRTK V1 to MRTK V2 on this one application having done the process once already.

Before I get started though, I’d like to point out some links.

Some Links…

There are a number of links that relate to activities and reading that you can do if you’re thinking of getting started with a mixed reality application for HoloLens 2 and/or thinking of porting an existing application across from HoloLens 1. The main sites that I find myself using are;

Armed with those docs, it’s time to get started porting my glTF Viewer to MRTK V2.

Making a New Branch, Getting Versions Right

I cloned my existing repo from https://github.com/mtaulty/GLTF-Model-Viewer using a recursive clone and made sure that it would still build.

There are quite a few steps necessary to build this project right now described in the readme at https://github.com/mtaulty/GLTF-Model-Viewer.

Specifically, the repo contains a sub-module which uses UnityGLTF from the Khronos Group. There’s nothing too unusual about that except that the original MRTK also included some pieces around GLTF which clashed with UnityGLTF, and so I had to write some scripts so as to set a few things up and remove one or two toolkit files in order to get things to build.

I described this process in the original blog post under the section entitled ‘A Small Challenge with UnityGLTF’.

One of the expected benefits of porting to MRTK V2 with its built-in support for GLTF is to be able to get rid of the sub-module and the scripts needed to hack the build process and end up with a much cleaner project all round 🙂

I made a new branch for my work named V2WorkBlogPost as I already had the V2Work branch where I first tried to make a port and from which I intend to merge back into master at some later point.

With that branch in play, I made sure that I had the right prerequisites for what I was about to do, taking them from the ‘Getting Started’ page here;

  • Visual Studio 2017.
    • I have this although I’m actually working in 2019 at this point.
  • Unity 2018.4.x.
    • I have 2018.4.3f1 – I have a particular interest in this version because it is supposed to fix a (UWP platform) issue which I raised here where the UWP implementations of System.IO.File APIs got reworked in Windows SDK 16299 which broke existing code which used those file APIs. You can see more on that in the original blog post under the title “Challenge 3 – File APIs Change with .NET Standard 2.0 on UWP”. It’s nice that Unity has taken the effort to try and fix this so I’ll be super keen to try it out.
  • Latest MRTK release.
    • I took the V2.0.0 RC2.1 release and I only took the Foundation package rather than the examples as I do not want the examples in my project here. Naturally, I have the examples in another place so that I can try things out.
  • Windows SDK 18362+.
    • I have 18362 as the latest installed SDK on this machine.

It is worth noting at this point a couple of additional things about my glTF Viewer application as it is prior to this port;

  • It has already been built in a Unity 2018.* version. It was last built with 2018.3.2f1.
  • It is already building on the IL2CPP back-end

Why is my application already building for IL2CPP?

Generally, I would much prefer to work on the .NET back-end, but it has to be acknowledged that IL2CPP is inevitable given that Unity 2019 versions no longer have .NET back-end support. There is also a bigger reason for my use of IL2CPP: my application uses classes from .NET Standard 2.0 (specifically HttpListener) and, with the .NET back-end being deprecated, Unity did not add .NET Standard 2.0 support to it. So, if I want to use HttpListener then I have to use IL2CPP. I wrote about this in gory detail at the time that I wrote the application, so please refer back to the original blog post (in the section entitled ‘Challenge Number 1 – Picking up .NET Standard 2.0’) if you want the blow-by-blow.

So, armed with the right software and an application that already builds in Unity 2018 on the IL2CPP back-end, I’m ready to make some changes.

Opening the Project in Unity

I opened up my project in the 2018.4.3f1 version of Unity and allowed it to upgrade it from 2018.3.2f1.

I didn’t expect to see problems in this upgrade but it did seem to get stuck on this particular error;

which says;

“Project has invalid dependencies:
    com.unity.xr.windowsmr.metro: Package [com.unity.xr.windowsmr.metro@1.0.10] cannot be found”

so my best thought was to use the Package Manager which offered to upgrade this to Version 1.0.12

and that seemed to do the trick. I had a look at my build settings as well and switched platform over to the UWP;

A quick note on the debugging settings here. For IL2CPP, you can choose to debug either the C# code or the generated C++ code, and Unity has all the details over here.

UWP: Debugging on IL2CPP Scripting Backend

Take extra care to ensure that you have the right capabilities set in your project for this to work as mentioned in the first paragraph of that page.

Because of this, I generally build Release code from Visual Studio and attempt to use the Unity C# debugging first. If that doesn’t help me out, I tend to debug the generated C++ code using the native debugger in Visual Studio and, sometimes, I rebuild from Visual Studio in Debug configuration to help with that debugging on native code.

I’d also note that I do toggle “Scripts Only Build” when I think it is appropriate in order to try and speed up build times, but it’s “risky” as it’s easy to leave it on when you should have turned it off, so beware on that one 🙂

With that done, Unity was opening my project in version 2018.4.3f1 and it would build a Visual Studio project for me and so I committed those changes and moved on.

The commit is here.

A Word on Scenes

An important thing to note about the glTF Viewer application is that it’s really quite simple. There’s a bit of code in there for messaging and so on but there’s not much to it and, as such, it’s built as a single scene in Unity as you can see below;

If you have a multi-scene application then you’re going to need to take some steps to work with the MRTK V2 across those multiple scenes to ensure that;

  1. The MRTK doesn’t get unloaded when scenes change
  2. More than one MRTK doesn’t get loaded when scenes change

I’ve seen a few apps where this can be a struggle and there’s an issue raised on the MRTK V2 around this over here with a long discussion attached, which I think leads to the approach of having a “base” scene with the MRTK embedded into it and then loading/unloading other scenes with the “additive” flag set. You might want to check out that whole discussion if this is an area of interest for you; it doesn’t impact my app here.
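
For anyone unfamiliar with that pattern, the “additive” part is just standard Unity scene management – roughly as in the sketch below, where the scene name is invented for the example;

using UnityEngine;
using UnityEngine.SceneManagement;

// A rough sketch of the "base scene plus additive content scenes" pattern - the MRTK
// lives in a base scene which is never unloaded, while content scenes are loaded and
// unloaded additively around it. The scene name here is invented for the example.
public class ContentSceneLoader : MonoBehaviour
{
    const string ContentSceneName = "ModelViewerContent";

    public void LoadContent()
    {
        // Additive load keeps the base scene (and so the MRTK) in place.
        SceneManager.LoadSceneAsync(ContentSceneName, LoadSceneMode.Additive);
    }

    public void UnloadContent()
    {
        SceneManager.UnloadSceneAsync(ContentSceneName);
    }
}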

Adding the New Toolkit

This is much easier than the previous 2 steps in that I just imported the Unity package that represents MRTK V2 RC 2.1.

I hit one error;

“Assembly has reference to non-existent assembly ‘Unity.TextMeshPro’ (Assets/MixedRealityToolkit.SDK/MixedRealityToolkit.SDK.asmdef)”

but that was easily fixed by going back into the Package Manager and installing the Text Mesh Pro package into my project and I, once again, ensured that the project would build in Unity. It did build but it spat out this list of “errors” that I have seen many times working on these pieces so I thought I would include a screenshot here;

These errors all relate to the “Reference Rewriter” and all seem to involve System.Numerics. I have seen them flagged as errors by Unity in many projects recently and yet the build is still flagged as Succeeded and seems to deploy and work fine on a device.

Consequently, I ignore them, although the last error listed there – about a failure to copy from the Temp folder to the Library folder – is an actual problem that I have with Unity at the moment and I have to fix that one by restarting the editor and the hub until it goes away 😕

When it did go away, I then hit this error;

“Scripted importers UnityGLTF.GLTFImporter and Microsoft.MixedReality.Toolkit.Utilities.Gltf.Serialization.Editor.GlbAssetImporter are targeting the glb extension, rejecting both.
UnityEditor.Experimental.AssetImporters.ScriptedImporter:RegisterScriptedImporters()”

but I can fully understand why Unity is complaining here because I do have two versions of UnityGLTF in the project right now, so I’m not surprised that Unity is a bit puzzled. I’m hoping to address this shortly and Unity seems to be tolerating the situation for now, and so, with those caveats, I do now have a project that contains both the old MRTK V1 and the new MRTK V2 as below;

The big question for me at this point is whether to take a dependency on the MRTK V2 as a Git sub-module or whether to just include the code from the MRTK V2 in my Unity project.

I much prefer to take a dependency on it as a sub-module but I figure that while it is not yet finished I will have the code in my project and then I can do the sub-module step at a later point. Consequently, I had quite a lot of folders to add to my Git repo and it leaves my repo in a slightly odd state because the MRTK V1 is in there as a sub-module and the MRTK V2 is in there as code but I’m about to remove MRTK V1 anyway so it won’t be in this hybrid state for too much longer.

The commit is here.

Removing the MRTK V1 – Surgical Removal or the Doomsday Option?

I now have a project with both the MRTK V1 and the MRTK V2 within it but how do I go about removing the V1 and replacing it with the V2?

So far when I’ve worked on applications that are doing this it feels to me like there are 2 possibilities;

  1. The “Doomsday” option – i.e. delete the MRTK V1 and see what breaks.
  2. The “Surgical” option – i.e. make an inventory of what’s being used from the MRTK V1 and consider what replacement is needed.

For the blog post, I’m going to go with option 2 but I’ve seen developers try both approaches and I’m not convinced that one is any better than the other.

In my particular application, I did a survey of my scene to try and figure out what is being used from the toolkit.

Firstly, I had some objects in my scene which I think I used in their default configuration;

  • Cursor object
  • InputManager object
  • MixedRealityCameraParent object

I’m expecting all of these to be replaced by the MRTK V2 camera system and input system without too much effort on my part.

I also noticed that I had a ProgressIndicator. At the time of writing, I’m asking for this to be brought across into the MRTK V2 but it’s not there as far as I know, and so my expectation here is to simply keep these pieces from the MRTK V1 in my application for now and continue to use the progress indicator as it is.

Having taken a look at my scene, I wanted to see where I was using the MRTK V1 from my own code. My first thought was to attempt to use the “Code Map” feature of Visual Studio but I don’t think there’s enough “differentiation” between my code and the code in the toolkit to be able to make sense of what’s going on.

Abandoning that idea, I looked at the entire set of my scripts that existed in the scripting project;

There are only around 30 or so scripts there – it’s not huge – so I opened them all up in the editor, searched them all for HoloToolkit and came up with a list of 8 files;

I then opened those files and did a strategic search to try and find types from the HoloToolkit and I found;

  • A use of the interface IFocusable in FocusWatcher.cs, a class which was trying to keep track of which (if any) object has focus.
  • A use of the ObjectCursor in a class CursorManager.cs which tried to make the cursor active/inactive at suitable times, usually while something was asynchronously loading.
  • The ModelUpdatesManager class which adds the type TwoHandManipulatable to a GameObject such that it can be moved, rotated, scaled and this class needs a BoundingBox prefab in order to operate.
  • A use of the ProgressIndicator type which I use in order to show/hide progress when a long running operation is going on.

Additionally, I know that I am also using UnityGLTF from the Khronos repo in order to load glTF models from files, whether they be JSON or binary and whether the object is packaged into a single file or into multiple files which all need loading.

The application also makes use of voice commands but I know that in the MRTK V1 I had to avoid the speech support as it caused me some issues. See back to the original blog post under the section entitled “Challenge 7” for the blow-by-blow on the problems I had using speech.

While it’s probably not a perfect list, this then gives me some things to think about – note that I am mostly building this list by looking back at the porting guide and finding equivalents for the functionality that I have used;

  1. Input – Replace the Cursor, InputManager, MixedRealityCameraParent in the scene with the new MRTK systems.
  2. Speech – Look into whether speech support in MRTK V2 works better in my scenario than it did in MRTK V1.
  3. GLTF – Replace the UnityGLTF use from the Khronos repo with the new pieces built into MRTK V2.
  4. Focus – Replace the use of IFocusable with the use of IMixedRealityFocusHandler.
  5. Cursor – Come up with a new means for showing/hiding the cursor across the various pointers that are used by the MRTK V2.
  6. Manipulations – Replace the TwoHandManipulatable script with use of the new ManipulationHandler, NearInteractionGrabbable and BoundingBox scripts with suitable options set on them.
  7. Rework – Look into which pieces of the application could benefit from being reworked, re-architected based on the new service-based approach in MRTK V2.

That’s a little backlog to work on and I’ll work through them in the following sub-sections.
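
As a taste of item 4 on that list, the interface swap for focus handling looks something like the sketch below – this is a rough illustration rather than the actual FocusWatcher.cs from the repo;

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// A rough sketch (not the real FocusWatcher.cs) of swapping IFocusable for
// IMixedRealityFocusHandler - OnFocusEnter/OnFocusExit now take a FocusEventData
// parameter rather than being parameterless.
public class FocusWatcherSketch : MonoBehaviour, IMixedRealityFocusHandler
{
    public GameObject FocusedObject { get; private set; }

    public void OnFocusEnter(FocusEventData eventData)
    {
        // Record that this object has just gained focus.
        this.FocusedObject = this.gameObject;
    }

    public void OnFocusExit(FocusEventData eventData)
    {
        // Clear the record when focus is lost.
        this.FocusedObject = null;
    }
}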

Input

Firstly, I removed the InputManager, Cursor and MixedRealityCameraParent from my scene and then used the Mixed Reality Toolkit –> Add to Scene and Configure menu to add the MRTK V2 into the scene. At this point, the “Mixed Reality Toolkit” menu is a little confusing as both the MRTK V1 and V2 are contributing to it but, for now, I can live with that.

I chose the DefaultHoloLens2ConfigurationProfile for my toolkit profile as below;

A word about “profiles”. I think it’s great that a lot of behaviour is moving into “profiles” or what an old-fashioned person like me might call “configuration by means of a serialized object” 🙂

The implication of this though is that if you were to lose these profiles then your application would break. I’ve seen these profiles be lost more than once by someone who allowed them to be stored in the MRTK folders themselves (by default the MixedRealityToolkit.Generated folder) and then deleted one version of the MRTK in order to add another, losing the MixedRealityToolkit.Generated folder in the process.

Additionally, imagine that in one of today’s Default profiles a setting is “off”. What’s to say that a future profile won’t replace it with a value of “on” and change your application behaviour?

Maybe I’m just paranoid 😉 but my way of managing these profiles is to create a “Profiles” folder of my own and then duplicate every single profile that is in use into that folder and give it a name that lines up with my app. That way, I know exactly where my profiles are coming from and I don’t run the risk of deleting them by mistake or having them overwritten by a newer toolkit.

While doing this, I noticed that the DefaultMixedRealityToolkitConfigurationProfile allows for “copy and customize”;

image

whereas the DefaultHoloLens2ConfigurationProfile doesn’t seem to;

image

but I might be missing how this is supposed to work. Regardless, I started with the DefaultMixedRealityToolkitConfigurationProfile and I cloned it to make a copy in Profiles\GLTFViewerToolkitConfigurationProfile.

I then went through that profile and;

  • Changed the Target Scale to be World.
  • Changed the Camera profile to be the DefaultHoloLens2CameraProfile before cloning that to make Profiles\GLTFViewerCameraProfile
  • Changed the Input profile to be the DefaultHoloLens2InputSystemProfile before cloning that to make Profiles\GLTFViewerInputSystemProfile
    • In doing this, I cloned all of the 8 sub-sections for Input Actions, Input Action Rules, Pointer, Gestures, Speech Commands, Controller Mapping, Controller Visualization, Hand Tracking
  • I switched off the Boundary system, leaving it configured with its default profile
  • I switched off the Teleport system, leaving it configured with its default profile
  • I switched off the Spatial Awareness system, leaving it with its default profile and removing the spatial observer (just in case!)
  • I cloned the DefaultMixedRealityDiagnosticsProfile to make my own and left it as it was.
  • I cloned the Extensions profile to make my own and left it as it was.
  • I left the editor section as it was.

With that in place, I then have all these profiles in my own folder and they feel like they are under my control.

image

At this point, I thought I’d risk pressing “Play” in the editor and I was surprised that I didn’t hear the welcome message that I had built into the app but, instead, spotted a “not implemented exception”.

Speech and Audio, Editor and UWP

I dug into this exception and realised that I had written a class AudioManager which decides whether or not to play voice clips and that class had been built to work only on UWP devices, not in the editor – i.e. it was making use of ApplicationData.Current.LocalSettings – so I quickly tried to rewire it to use PlayerPrefs instead so that it could work both in the editor and on the device.
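
In outline, the kind of switch I mean looks something like this (a sketch only; the helper class and key names are mine rather than what’s in the repo);

using UnityEngine;

// Sketch: a small settings helper which reads/writes a boolean flag via
// ApplicationData on UWP and PlayerPrefs elsewhere so that the same calling code
// works in the editor and on the device.
public static class SettingsStore
{
    public static bool GetBool(string key, bool defaultValue)
    {
#if ENABLE_WINMD_SUPPORT
        object value;

        if (Windows.Storage.ApplicationData.Current.LocalSettings.Values.TryGetValue(key, out value))
        {
            return ((bool)value);
        }
        return (defaultValue);
#else
        return (PlayerPrefs.GetInt(key, defaultValue ? 1 : 0) == 1);
#endif
    }
    public static void SetBool(string key, bool value)
    {
#if ENABLE_WINMD_SUPPORT
        Windows.Storage.ApplicationData.Current.LocalSettings.Values[key] = value;
#else
        PlayerPrefs.SetInt(key, value ? 1 : 0);
        PlayerPrefs.Save();
#endif
    }
}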

With that done, I got my audible welcome message on pressing play, I could see the framerate counter from the MRTK V2 and I seemed to be able to move around in the editor.

I couldn’t open any files though because I’d also written some more code which was editor specific.

My application uses voice commands but I had a major challenge with voice commands on the MRTK V1 in that they stopped working whenever the application lost/regained focus.

Worst of all, this included the application losing focus to the file dialog, so a user of the application could use the voice command “Open” to raise the file dialog and thereby break the voice commands before their model file had even been chosen.

I wrote about this in the original blog post under the section “Challenge 7”. The upshot is that I removed anything related to MRTK V1 speech or Unity speech from my application and I fell back to purely using SpeechRecognizer from the UWP for my application and that worked out fine but, of course, not in the Unity editor.

I only have 3 speech commands – open, reset, remove – and so what I would ideally like to do is work in the MRTK V2 way by defining new input actions for these commands, along with a profiler command to toggle the profiler display, as below in my input actions profile;

image

and then I could define some speech commands in my speech settings profile;

image

and then in my class which handles speech commands, I could add a property to map the MixedRealityInputAction (open etc.) to a handler using my own internal class ActionHandler because I don’t think Unity can serialize dictionaries for me;

image
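
In outline, that mapping looks something like the following (a sketch; ActionHandler matches the class name I mentioned but the surrounding class and member names are just illustrative);

using Microsoft.MixedReality.Toolkit.Input;
using System;
using UnityEngine;
using UnityEngine.Events;

// Sketch: Unity won't serialize a Dictionary<MixedRealityInputAction, UnityEvent>
// so a small serializable pairing class plus an array gives much the same effect
// in the inspector.
[Serializable]
public class ActionHandler
{
    public MixedRealityInputAction InputAction;
    public UnityEvent Handler;
}

public class SpeechCommandRouter : MonoBehaviour
{
    [SerializeField]
    ActionHandler[] actionHandlers;

    protected void InvokeActionHandler(MixedRealityInputAction action)
    {
        // A linear search is fine here - there are only a handful of commands.
        foreach (var entry in this.actionHandlers)
        {
            if (entry.InputAction == action)
            {
                entry.Handler?.Invoke();
            }
        }
    }
}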

and then configure them to their respective values in the editor…

image

and then I should be able to implement IMixedRealityInputActionHandler to invoke the actions here (rather than directly tie myself to those actions coming from only voice commands);

image
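
i.e. something like this on that same class (a sketch; I’m assuming the action is available from BaseInputEventData.MixedRealityInputAction);

    // Sketch: route any input action that starts (whether from speech or another
    // source) through the same mapping as above.
    public void OnActionStarted(BaseInputEventData eventData)
    {
        this.InvokeActionHandler(eventData.MixedRealityInputAction);
    }
    public void OnActionEnded(BaseInputEventData eventData)
    {
    }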

In doing so, I think I also need to register my GameObject as a “global” handler for these commands and so I need to add a call to do that;

image
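
i.e. something along these lines (a sketch, assuming the input system’s RegisterHandler<T>/UnregisterHandler<T> methods for global handlers);

    void OnEnable()
    {
        // Register globally so that the actions arrive regardless of what has focus.
        MixedRealityToolkit.InputSystem.RegisterHandler<IMixedRealityInputActionHandler>(this);
    }
    void OnDisable()
    {
        MixedRealityToolkit.InputSystem.UnregisterHandler<IMixedRealityInputActionHandler>(this);
    }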

and that seemed to work really, really nicely.

That said, I am still pretty concerned that this isn’t going to work on the device itself reliably across invocations of the file dialog as I see the new WindowsSpeechInputProvider implementation using the KeywordRecognizer and I’m not sure that this type behaves well on the device when the application loses/gains focus.

Consequently, I figured that I would use all of this MRTK V2 infrastructure to deliver speech commands to me in the editor but, on the device, I would like to switch it off and rely on the mechanism that I’d previously built which I know works.

I edited my Input system profile in order to try and remove the WindowsSpeechInputProvider outside of the editor and I disabled the WindowsDictationInputProvider altogether;

image

and I then changed my startup code such that it did different things depending on whether it was in the editor or not;

image
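
which amounts to something like this (a sketch of the idea rather than my exact startup code);

    void Start()
    {
#if ENABLE_WINMD_SUPPORT
        // On the device, drive speech with the UWP SpeechRecognizer loop shown below
        // rather than relying on the MRTK's WindowsSpeechInputProvider.
        this.StartSpeechCommandHandlingAsync();
#endif
        // In the editor there's nothing extra to do here - the MRTK's speech provider
        // delivers the keywords to me as input actions.
    }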

and my own speech handling code is super, super simple and inefficient but I know that it works on a V1 device so I am trying to largely keep it intact and here it is below – it essentially keeps creating a SpeechRecognizer (UWP, not Unity) and using it for a single recognition before throwing it away and starting again;

#if ENABLE_WINMD_SUPPORT    
    /// <summary>
    /// Why am I using my own speech handling rather than relying on SpeechInputSource and
    /// SpeechInputHandler? I started using those and they worked fine.
    /// However, I found that my speech commands would stop working across invocations of
    /// the file open dialog. They would work *before* and *stop* after.
    /// I spent a lot of time on this and I found that things would *work* under the debugger
    /// but not without it.
    /// That led me to think that this related to suspend/resume and perhaps HoloLens suspends
    /// the app when you move to the file dialog because I notice that dialog running as its
    /// own app on HoloLens.
    /// I tried hard to do work with suspend/resume but I kept hitting problems and so I wrote
    /// my own code where I try quite hard to avoid a single instance of SpeechRecognizer being
    /// used more than once - i.e. I create it, recognise with it & throw it away each time
    /// as this seems to *actually work* better than any other approach I tried.
    /// I also find that SpeechRecognizer.RecognizeAsync can get into a situation where it
    /// returns "Success" and "Rejected" at the same time & once that happens you don't get
    /// any more recognition unless you throw it away and so that's behind my approach.
    /// </summary>
    async void StartSpeechCommandHandlingAsync()
    {
        while (true)
        {            
            var command = await this.SelectSpeechCommandAsync();

            if (command.Action != MixedRealityInputAction.None)
            {
                this.InvokeActionHandler(command.Action);
            }
            else
            {
                // Just being paranoid in case we start spinning around here
                // My expectation is that this code should never/rarely
                // execute.
                await Task.Delay(250);
            }
        }
    }
    async Task<SpeechCommands> SelectSpeechCommandAsync()
    {
        var registeredCommands = MixedRealityToolkit.InputSystem.InputSystemProfile.SpeechCommandsProfile.SpeechCommands;

        SpeechCommands command = default(SpeechCommands);

        using (var recognizer = new SpeechRecognizer())
        {
            recognizer.Constraints.Add(
                new SpeechRecognitionListConstraint(registeredCommands.Select(c => c.Keyword)));

            await recognizer.CompileConstraintsAsync();

            var result = await recognizer.RecognizeAsync();

            if ((result.Status == SpeechRecognitionResultStatus.Success) &&
                ((result.Confidence == SpeechRecognitionConfidence.Medium) ||
                 (result.Confidence == SpeechRecognitionConfidence.High)))
            {
                command = registeredCommands.FirstOrDefault(c => string.Compare(c.Keyword, result.Text, true) == 0);
            }                    
        }
        return (command);
    }
#endif // ENABLE_WINMD_SUPPORT

I suspect that I’ll be revisiting this code once I try and deploy to a device but, for now, it works in the editor and moves me onto my next little challenge.

I also switched off the frame rate profiler by default in the profile;

image

and implemented my handler to toggle it on/off;

image

Opening File Dialogs

My application has, initially, a single voice command, “Open”, which raises a file dialog in order to open a glTF model.

However, I’d only written the file open code in order to support opening the file dialog on a UWP device. I hadn’t done the work to make it open in the editor and I realised that this needed addressing so I quickly amended the method that I have to add an additional piece of code for the non-UWP platform case;

    async Task<string> PickFileFrom3DObjectsFolderAsync()
    {
        var filePath = string.Empty;

#if ENABLE_WINMD_SUPPORT
        var known3DObjectsFolder = KnownFolders.Objects3D.Path.ToLower().TrimEnd('\\');

        do
        {
            filePath = await FileDialogHelper.PickGLTFFileAsync();

            if (!string.IsNullOrEmpty(filePath) &&
                !filePath.ToLower().StartsWith(known3DObjectsFolder))
            {
                filePath = string.Empty;
                this.AudioManager.PlayClipOnceOnly(AudioClipType.PickFileFrom3DObjectsFolder);
            }
        } while (filePath == string.Empty);
#else
        filePath = EditorUtility.OpenFilePanelWithFilters(
            "Select GLTF File",
            string.Empty,
            new string[] { "GLTF Files", "gltf,glb", "All Files", "*" });
#endif 

        return (filePath);
    }

but I found that even if I could raise the file dialog, I was still getting exceptions opening files…

Loading GLTF Models

The problem that I was hitting was that the GLTFParser was struggling to read the files that I was feeding it and so I decided to take the leap to stop using that code and start using the GLTF code bundled into the MRTK V2.

In the existing code, I make use of a class GLTFSceneImporter to load the one or more files that might make up a GLTF model. In my original blog post I had a few struggles using this in a deterministic way as it’s very coroutine based and I found it hard to be in control of a couple of things;

  • Knowing when it had finished
  • Knowing when it had thrown exceptions

I mentioned these challenges in the original post under the title of “A Small Challenge with Async/Await and CoRoutines” and also “Another Small Challenge with CoRoutines and Unity’s Threading Model”.

At the time, I largely worked around them by writing a base class named ExtendedMonoBehaviour which did some work for me in this regard. It’s in the repo so I won’t call it out in any detail here.

The GLTFSceneImporter delegated the responsibility for actually opening files to an implementation of an interface named ILoader which looks as below;

namespace UnityGLTF.Loader
{
	public interface ILoader
	{
		IEnumerator LoadStream(string relativeFilePath);

		void LoadStreamSync(string jsonFilePath);

		Stream LoadedStream { get; }

		bool HasSyncLoadMethod { get; }
	}
}

This was very useful for me as the user might choose to open a multi-file GLTF model with various separate material files etc. and this is the way in which my code gets to “know” which files have actually been opened. I need this list of files to be able to offer the model over HTTP to other devices that might request it in a shared experience.

In order to use this, I had a class RecordingFileLoader which implemented this ILoader interface and kept track of every file that it successfully opened on behalf of the loader and I passed this around into a couple of places that needed to know about the file list.

Looking at the new MRTK V2 support for GLTF, things seem much improved in that there is a new class GltfUtility which offers an ImportGltfObjectFromPathAsync method. The built-in support for async makes my base class ExtendedMonoBehaviour redundant Smile but it does leave me with the challenge of figuring out which files the code has actually loaded the model from.
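
In isolation, using it is roughly as simple as this (a sketch; GltfUtility and GltfObject come from the toolkit’s Gltf serialization/schema namespaces);

    // Sketch: load a glTF/glb file from a path; the importer builds the Unity objects
    // and hands them back via GameObjectReference on the returned GltfObject.
    async Task<GameObject> LoadModelAsync(string filePath)
    {
        var gltfObject = await GltfUtility.ImportGltfObjectFromPathAsync(filePath);

        return (gltfObject?.GameObjectReference);
    }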

That method returns a GltfObject and I wrote some code which attempts to work out which files were loaded by interrogating the buffers and images properties after they have been populated. I already had this class ImportedModelInfo which wrapped around my RecordingFileLoader and so I modified it to take on this extra functionality;

using Microsoft.MixedReality.Toolkit.Utilities.Gltf.Schema;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using UnityEngine;

public class ImportedModelInfo
{
    public ImportedModelInfo(
        string fullFilePath,
        GltfObject gltfObject)
    {
        // Where were these files loaded from?
        this.BaseDirectoryPath = Path.GetDirectoryName(fullFilePath);

        // What's the name of the file itself?
        this.relativeLoadedFilePaths = new List<string>();
        this.relativeLoadedFilePaths.Add(Path.GetFileName(fullFilePath));

        // Note: At the time of writing, I'm unsure about what the URI property
        // might contain here for buffers and images given that the GLTF spec
        // says that it can be file URIs or data URIs and so what does the GLTF
        // reading code return to me in these cases?

        // I'm expecting Uris like 
        //  foo.bin
        //  subfolder/foo.bin
        //  subfolder/bar/foo.bin

        // and will probably fail if I encounter something other than that.
        var definedUris =
            gltfObject.buffers
                .Where(b => !string.IsNullOrEmpty(b.uri))
                .Select(b => b.uri)
            .Concat(
                gltfObject.images
                    .Where(i => !string.IsNullOrEmpty(i.uri))
                    .Select(i => i.uri));

        this.relativeLoadedFilePaths.AddRange(definedUris);

        this.GameObject = gltfObject.GameObjectReference;
    }
    public string BaseDirectoryPath { get; private set; }
    public IReadOnlyList<string> RelativeLoadedFilePaths => this.relativeLoadedFilePaths.AsReadOnly();
    public GameObject GameObject { get; set; }

    List<string> relativeLoadedFilePaths;
}

with the reworking of one or two other pieces of code that then allowed me to delete my classes RecordingFileLoader and ExtendedMonoBehaviour which felt good Smile

I had to do another slight modification to code which had never been run in the editor before because it was expecting to export world anchors but, other than that, it was ok and I could now load at least one GLTF model in the editor as below;

image

What I couldn’t do was any kind of manipulation on the object, so that was perhaps where I needed to look next, although I suspect that depends on focus and I also suspect it relies on having a collider which might not be present…

The commit for these pieces is here.

Focus

The earlier code would attach this behaviour;

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class FocusWatcher : MonoBehaviour, IFocusable
{
    public void OnFocusEnter()
    {
        focusedObject = this.gameObject;
    }
    public void OnFocusExit()
    {
        focusedObject = null;
    }
    public static bool HasFocusedObject => (focusedObject != null);
    public static GameObject FocusedObject => focusedObject;
    static GameObject focusedObject;
}

to the models that had been loaded such that when voice commands like “reset” or “remove” were used, the code could check the HasFocusedObject property, get the FocusedObject value itself and then would typically look for some other component on that GameObject and make a method call on it to reset its position or remove it from the scene.

It’s questionable as to whether this behaviour should be attached to the objects themselves or whether it should just be a global handler for the whole scene but the effect is the same either way.

I need an equivalent in the new MRTK V2 and the natural thing to do would seem to be to reach into the MixedRealityToolkit.InputSystem.FocusProvider and make a call to GetFocusedObject() but that method expects that the caller knows which pointer is in use and I’m not sure that I do.

Instead, I chose to just update the existing class so as to implement IMixedRealityFocusHandler and keep doing what it had been doing before;

using HoloToolkit.Unity.InputModule;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class FocusWatcher : MonoBehaviour, IMixedRealityFocusHandler
{
    public void OnFocusEnter(FocusEventData eventData)
    {
        focusedObject = this.gameObject;
    }
    public void OnFocusExit(FocusEventData eventData)
    {
        focusedObject = null;
    }
    public static bool HasFocusedObject => (focusedObject != null);
    public static GameObject FocusedObject => focusedObject;
    static GameObject focusedObject;
}

but I noticed that I still wasn’t able to interact with the duck – there’s still work to be done Smile

The commit for this stage is here.

Cursor

My class which manipulates the cursor for me was still stubbed out and so I attempted to update that from what it had been;

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class CursorManager : MonoBehaviour
{
    [SerializeField]
    private ObjectCursor cursor;

    public void Show()
    {
        this.cursor.gameObject.SetActive(true);
    }
    public void Hide()
    {
        this.cursor.gameObject.SetActive(false);
    }
}

to this version;

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class CursorManager : MonoBehaviour
{
    public CursorManager()
    {
        this.hiddenPointers = new List<IMixedRealityPointer>();
    }
    public void Hide()
    {
        // TODO: I need to understand how you are supposed to do this on V2, I just want
        // to switch all cursors off when the user cannot do anything useful with them.
        foreach (var inputSource in MixedRealityToolkit.InputSystem.DetectedInputSources)
        {
            foreach (var pointer in inputSource.Pointers)
            {
                if ((pointer.IsActive) && (pointer.BaseCursor != null))
                {
                    pointer.BaseCursor.SetVisibility(false);
                    this.hiddenPointers.Add(pointer);
                }
            }
        }
        MixedRealityToolkit.InputSystem.GazeProvider.Enabled = false;
    }
    public void Show()
    {
        foreach (var pointer in this.hiddenPointers)
        {
            pointer.BaseCursor.SetVisibility(true);
        }
        this.hiddenPointers.Clear();

        MixedRealityToolkit.InputSystem.GazeProvider.Enabled = true;
    }
    List<IMixedRealityPointer> hiddenPointers;
}

I’m not sure whether this is “right” or not – once again I find myself puzzling a little over all these pointers and cursors and trying to figure out which ones I’m meant to interact with but the code feels reasonably “safe” in that it attempts to put back whatever it changed so, hopefully, I’m not breaking the toolkit with this.

That commit is here.

Manipulations

Up until now, I’ve left the code which attempts to handle manipulations as it was. That is, there is code in the application;

image

which attempts to add TwoHandManipulatable to a model which has been loaded from the disk (rather than one which has been received over the network where I don’t allow local manipulations). That TwoHandManipulatable wants a BoundingBoxPrefab and so you can see that my code here has passed such a thing through to it.

It’s probably not too surprising that this isn’t working as it’s mixing MRTK V1 classes with MRTK V2 in the scene so I wouldn’t really expect it to do anything.

Additionally, I’m not sure from looking at the objects in the editor that there is any type of collider being added by the glTF loading code so I probably need to deal with that too.

I suspect then that I’m going to need to add a few pieces here;

  • A BoxCollider to allow for interactions on the model.
  • ManipulationHandler to allow the model to be moved, rotated, etc.
  • NearInteractionGrabbable so that the manipulations cater for both near and far interactions on a HoloLens 2.
  • BoundingBox to provide some visualisation of the interactions with the model.

Additionally, I think that I’m going to want to be able to have quite a bit of control over the settings of some of the materials etc. on the BoundingBox and some of the axes of control on the other pieces and so it feels like it might be a lot easier to set this all up as a prefab that I can build in the editor and then just pass through to this code.

Previously, when loading a model my code took an approach of something like this;

  • load the GLTF model, giving a new GameObject with a collider already on it
  • create a new object to act as the model’s parent, parenting this object itself off some root parent within the scene
  • position the parent object 3m down the user’s gaze vector, facing the user
  • attach a world anchor to the parent object both for stability but also so it can be exported to other devices
  • add manipulation behaviours to the GLTF model itself so that it can be moved, rotated, scaled underneath its parent which is anchored

I decided to change this slightly for the new toolkit to;

  • load the GLTF model, giving a new GameObject ( M )
  • create a new object ( A ) to act as the anchored parent
  • create a new object to act as the model’s parent ( P ) from a prefab where BoxCollider, ManipulationHandler, NearInteractionGrabbable, BoundingBox are already present and configured on that prefab
  • parent M under P, P under A and A under the scene’s root parent object ( R )
  • add a world anchor to A

and that lets me slip this prefab into the hierarchy like adding an item into a linked-list so as to let the prefab bring a bunch of behaviour with it.
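
In code, that slotting-in looks roughly like the following (a sketch; the method and parameter names are mine and it glosses over positioning);

    // Sketch: slot the manipulation prefab ( P ) between the loaded model ( M ) and
    // a newly created parent ( A ) which sits under the scene root ( R ) and gets the
    // world anchor. In practice the anchor is added once A has been positioned.
    GameObject ParentLoadedModel(GameObject model, GameObject manipulationPrefab, Transform sceneRoot)
    {
        var anchoredParent = new GameObject("anchorParent");
        anchoredParent.transform.SetParent(sceneRoot, false);

        var manipulationParent = Instantiate(manipulationPrefab, anchoredParent.transform);

        model.transform.SetParent(manipulationParent.transform, false);

        // The anchor lives on A so that the model can still be moved, rotated and
        // scaled underneath it.
        anchoredParent.AddComponent<UnityEngine.XR.WSA.WorldAnchor>();

        return (anchoredParent);
    }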

That prefab is as below;

image

and I tweaked a few materials and settings on the BoundingBox, largely based on examples that I looked at in the example scenes from the toolkit;

image

and;

image

Changing the hierarchy of the components that are set up when a model is loaded into the scene had some impact on my scripts which create/access world anchors and on my scripts which tried to watch for object transformations to send/receive over the network and so I had to make a few related changes here to patch that up and pass a few objects to the right place but I’ll keep that detail out of the post.

It also broke my simplistic FocusWatcher class because that class expected that the GameObject which had focus would be the model itself, with direct access to various behaviours that I have added to it, whereas, now, that object is buried in a bit of hierarchy and so I got rid of the FocusWatcher altogether at this point and tried to write this method which would hopefully return to me all focused objects which had a particular component within their hierarchy;

    IEnumerable<T> GetFocusedObjectWithChildComponent<T>() where T : MonoBehaviour
    {
        // TODO: I need to figure whether this is the right way to do things. Is it right
        // to get all the active pointers, ask them what is focused & then use that as
        // the list of focused objects?
        var pointers = MixedRealityToolkit.InputSystem.FocusProvider.GetPointers<IMixedRealityPointer>()
            .Where(p => p.IsActive);

        foreach (var pointer in pointers)
        {
            FocusDetails focusDetails;

            if (MixedRealityToolkit.InputSystem.FocusProvider.TryGetFocusDetails(
                pointer, out focusDetails))
            {
                var component = focusDetails.Object?.GetComponentInChildren<T>();

                if (component != null)
                {
                    yield return component;
                }
            }
        }
    }

whether this is a good thing to do or not, I’m not yet sure but for my app it’s only called on a couple of voice commands so it shouldn’t be executing very frequently.

I tried this out in the editor and I seemed to be at a place where I could open glTF models and use near and far interactions to transform them as below;

image

The commit for this stage is here.

Removing the MRTK V1

At this point, I felt like I was done with the MRTK V1 apart from the ProgressRingIndicator which I am still using so I need to preserve it in my project for now.

I made a new folder named ToolkitV1 and I moved across the Progress related pieces which appeared to be;

  • Animations – the contents of the Progress folder
  • Fonts – I copied all of these
  • Materials – I copied only ButtonIconMaterial here
  • Prefabs – the contents of the Progress folder
  • Scripts – the contents of the Progress folder

I did a quick commit and then deleted the HoloToolkit folder and I also deleted the UnityGLTF folder as I should, at this point, not be using anything from those 2 places.

At this point, the ProgressIndicator blew up at compile time, telling me that it was missing the HoloToolkit.Unity namespace (easily fixed) and that it wanted to derive from Singleton<T>, which I found easy enough to fix by changing the base class to MonoBehaviour and adding a static Instance property set to the first instance which spun up in the application.
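
That replacement boils down to the usual pattern (a sketch below, with the rest of the class omitted);

using UnityEngine;

// Sketch of the Singleton<T> replacement - derive from MonoBehaviour and let the
// first instance that wakes up claim the static Instance property.
public class ProgressIndicator : MonoBehaviour
{
    public static ProgressIndicator Instance { get; private set; }

    void Awake()
    {
        if (Instance == null)
        {
            Instance = this;
        }
    }
}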

I still had problems, though, in that I had a couple of missing scripts in the prefab for the ProgressIndicator and I tried to replicate what had been there previously with the SolverHandler and Orbital as below;

image

and I had to patch a couple of materials but, other than that, the MRTK V1 was gone and the app seemed to continue to function in the editor.

The commit is here.

Removing MRTK V1 and UnityGLTF as Submodules

I had previously included the MRTK V1 and UnityGLTF github repos as submodules of my repo and I no longer need them so removing them would make the repo a lot cleaner.

Additionally, I had a setup.bat script which attempted to move a lot of files around, do some preliminary building of Unity GLTF etc. and I no longer need that either.

I should be in a state on this branch where the project can “simply” be pulled from github and built.

With that in mind, I attempted to remove both of those submodules following the procedure described here as I’ve done this once or twice but I can never remember how you’re meant to do it.

I also removed the setup.bat and altered the readme.md.

Now, usually, when I do so many things at once something goes wrong so the next step was to…

Make a Clean Folder, Clone the Repo, Fix Problems

I cloned the repo again recursively into a new, clean folder with git clone --recursive https://github.com/mtaulty/GLTF-Model-Viewer and then switched to the V2WorkBlogPost branch and I noticed that git struggled to remove the MixedRealityToolkit-Unity and the UnityGLTF folders which had been created/populated as part of bringing down the recursive repo so I got rid of them manually (I’ll admit that the finer details of submodules are a bit of a mystery to me).

I reopened that project in Unity and, remarkably, all seemed to be fine – the project ran fine in the editor once I’d switched platforms & I didn’t seem to have missed files from my commits.

The commit is here.

Deploying to a Device

At this point, it felt like it was time to build for a device and see how the application was running as I find that there are often pieces of functionality that work ok in the editor but fail on a device.

I only have a HoloLens 1 device with me at the time of writing and so I used HoloLens 1; I can’t try on a HoloLens 2 right now.

In trying to build for the device I hit an immediate failure;

“IOException: Sharing violation on path C:\Data\temp\blogpost\GLTF-Model-Viewer\GLTFModelViewer\Temp\StagingArea\Data\Managed\tempStrip\UnityEngine.AudioModule.dll”

but I see this quite frequently with Unity at the moment and so I did a quick restart (and shut down Visual Studio), only to then be hit with;

“Copying assembly from ‘Temp/Unity.TextMeshPro.dll’ to ‘Library/ScriptAssemblies/Unity.TextMeshPro.dll’ failed”

which is another transient error I see quite a lot so I did some more restarts (of both Unity and the Unity Hub) and managed to produce a successful VS build which seemed to deploy ok and run fine;

image

In deploying to the device, I also did some basic tests of the multi-user network sharing functionality which also seemed to be working fine.

Other Rework – Mixed Reality Extension Services

There are a few places in this code base where I make use of “services” which are really “global” across the project. As examples;

  • I have a class StorageFolderWebServer which, in a limited way, takes a UWP StorageFolder and makes some of its content available over HTTP via HttpListener
  • I have a NetworkMessageProvider which facilitates the shared experience by multicasting and receiving New Model, Transformed Model, Deleted Model messages around the local network.
    • This sits on top of a MessageService which simply knows how to Send/Receive messages having initially joined a multicast group.
  • I have a MessageDialogHelper which shows message boxes without blowing up the Unity/UWP threads.
  • I have a FileDialogHelper which shows a file dialog without blowing up the Unity/UWP threads.

Most of these are probably static classes but I feel that they are really providing services which may or may not have some configurable element to them and which other pieces of code just need to look up in a registry somewhere and make use of, thereby allowing them to be replaced at some point in the future.

As the MRTK V2 provides a form of service registry via the means of “extensions” to the toolkit, I thought it would make sense to try that out and see if I could refactor some code to work that way.

By way of example, I started with my MessageService class and extracted an interface from it, deriving that interface from IMixedRealityExtensionService;

using Microsoft.MixedReality.Toolkit;
using System;

namespace MulticastMessaging
{
    public interface IMessageService : IMixedRealityExtensionService
    {
        MessageRegistrar MessageRegistrar { get; set; }
        void Close();
        void Open();
        void Send<T>(T message, Action<bool> callback = null) where T : Message;
    }
}

and then I defined a profile class for my service with the sorts of properties that I might want to set on it;

using Microsoft.MixedReality.Toolkit;
using UnityEngine;

namespace MulticastMessaging
{

    [CreateAssetMenu(
        menuName = "Mixed Reality Toolkit/Message Service Profile",
        fileName = "MessageServiceProfile")]
    [MixedRealityServiceProfile(typeof(MessageService))]
    public class MessageServiceProfile : BaseMixedRealityProfile
    {
        [SerializeField]
        [Tooltip("The address to use for multicast messaging")]
        public string multicastAddress = "239.0.0.0";

        [SerializeField]
        [Tooltip("The port to use for multicast messaging")]
        public int multicastPort = 49152;
    }
}

and then implemented that on my MessageService class deriving that from BaseExtensionService and marking it with a MixedRealityExtensionService attribute as you see below;

namespace MulticastMessaging
{
    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    [MixedRealityExtensionService(SupportedPlatforms.WindowsUniversal | SupportedPlatforms.WindowsEditor)]
    public class MessageService : BaseExtensionService, IMessageService
    {
        // Note: 239.0.0.0 is the start of the UDP multicast addresses reserved for
        // private use.
        // Note: 49152 is the result I get out of executing;
        //      netsh int ipv4 show dynamicport udp
        // on Windows 10.
        public MessageService(
            IMixedRealityServiceRegistrar registrar,
            string name,
            uint priority,
            BaseMixedRealityProfile profile) : base(registrar, name, priority, profile)
        {

        }
        MessageServiceProfile Profile => base.ConfigurationProfile as MessageServiceProfile;

Clearly, that’s not the whole code but note the use of the MixedRealityExtensionService attribute and also the reach into the base class to get the ConfigurationProfile and cast it to the concrete type of my actual profile.

With that in place, I can now use the editor to create one of those profiles;

image

and then I can add my new service to extensions of the toolkit;

image

and then change my code to grab hold of the service via

MixedRealityToolkit.Instance.GetService<IMessageService>();

whenever I need to make use of it.
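
A typical consumer of the service then ends up looking something like this (a sketch; the behaviour itself is just illustrative);

using Microsoft.MixedReality.Toolkit;
using MulticastMessaging;
using UnityEngine;

// Sketch: look the extension service up once via the toolkit's registry and then
// use it like any other dependency.
public class MessagingConsumer : MonoBehaviour
{
    IMessageService messageService;

    void Start()
    {
        this.messageService = MixedRealityToolkit.Instance.GetService<IMessageService>();
        this.messageService.Open();
    }
    void OnDestroy()
    {
        this.messageService?.Close();
    }
}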

In this branch, I only added two services this way – my networking provider and my messaging service – but in my V2Work branch I’ve made more of these services and plan to rework a few more pieces in this way;

image

The commit at this point is here.

Wrapping Up

I wanted to go around the loop again on this exercise partly to make my own notes around things that I have perhaps forgotten and partly in case there were some pieces that others might pick up on and share.

I’m not planning to take this V2WorkBlogPost branch any further or release anything from it because I’ve already done the port in my V2Work branch and I want to move that forward and, ultimately, merge back into master from there but I did learn a few things by repeating the exercise, namely;

  1. I can do a better job of making speech work in the editor and at runtime.
  2. I should make more extension services for some of the other pieces of my app.
  3. I did a better job of leaving the MRTK V1 in the code base until I really no longer needed it whereas the first time around I removed it too early and got in a bit of a mess Smile
  4. I realised that more of the app functionality needs to work in the editor and I can improve that, but there’s still a way to go as I haven’t attempted to make all of it work there.

I hope that there was something useful in here for readers (if any get this far to the end of the post) and good luck in porting your own apps across to MRTK V2 Smile