Simple Shared Holograms with Photon Networking (Part 2)

Following on from the previous post, it’s time to get some holograms onto the screen and make them moveable by the user.

The “easiest” way to do this would seem to be to add a voice command such that some hologram is created when the user issues a keyword, and the easiest thing to create is (always!) a cube, so that’s where I started.

Adding a Voice Command

Adding voice commands is pretty easy with the MRTK.

I went to the Input section of my MRTK profile, cloned the speech commands profile and added in a new “Cube” keyword as below;

and then I added an instance of Speech Input Handler to my Root object as below and wired it up to a new empty method on my PhotonScript named OnCreateCube;

Representing the Root Object

When developing with anchors, it’s always “nice” to have a representation of “where” the anchor is in space and whether it’s been created, located etc.

In my previous post, my anchor was simply represented by a blue beam running through the centre of the anchor location so I improved this slightly such that the Root object now contains some 3D axes;

and I also changed the code to add materials such that I could change the colour of the sphere to indicate the anchor status (there’s a rough sketch of how that might look after the list below). It starts off white but then;

  • if the anchor is created, it turns blue
  • if the anchor is located, it turns green
  • if there’s an error, it turns red
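
For illustration, here’s a minimal sketch of how that colouring might be applied – note that statusSphereRenderer, AnchorStatus and SetAnchorStatusColour are illustrative names for the purposes of the sketch rather than necessarily what’s in the actual project;

    enum AnchorStatus
    {
        None,       // white - nothing has happened yet
        Created,    // blue - the anchor has been created
        Located,    // green - the anchor has been located
        Error       // red - something went wrong
    }

    // NB: statusSphereRenderer is an assumed field referencing the sphere's Renderer.
    void SetAnchorStatusColour(AnchorStatus status)
    {
        var colour = Color.white;

        switch (status)
        {
            case AnchorStatus.Created:
                colour = Color.blue;
                break;
            case AnchorStatus.Located:
                colour = Color.green;
                break;
            case AnchorStatus.Error:
                colour = Color.red;
                break;
        }

        this.statusSphereRenderer.material.color = colour;
    }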

I was surprised by how useful it is to run the app, see the axes appear at 0,0,0 on my head and then watch the sphere turn green and the axes jump around in space to their previously anchored location – it’s really handy to have a cheap visualisation.

Creating Objects

Now I just need something to create so I made a simple prefab which is a cube scaled to 0.2 x 0.2 x 0.2 along with some MRTK scripts to make it moveable, namely BoundingBox, ManipulationHandler and NearInteractionGrabbable;

Note that the prefab also has the PhotonView component on it so as to make it possible to instantiate this prefab with Photon as a “networked” object.

With that in place, I can add a field to my PhotonScript to store this prefab and then instantiate it in response to the “Cube!” voice command;

    public void OnCreateCube()
    {
        // Position it down the gaze vector
        var position = Camera.main.transform.position + Camera.main.transform.forward.normalized * 1.2f;

        // Create the cube
        var cube = PhotonNetwork.InstantiateSceneObject(this.cubePrefab.name, position, Quaternion.identity);
    }

and that all works quite nicely – I’m creating cubes. My intention in using InstantiateSceneObject is to have those cubes be “owned” by the scene rather than by a particular player, so I’m hoping that they will stick around when the player who created them leaves the room.

Parenting Objects Created by Photon

In the editor, though, I notice that those cubes are being created without a parent when I really want them parented under my Root object, as that’s the object which will be anchored so as to sit in the same physical position across devices;

It would be fairly easy for me to grab the return value from PhotonNetwork.InstantiateSceneObject and change the parent relationship but that’s not going to help me if these objects are being created over the network from another user on another device so I need to try a different approach.

It turns out that I can hook into the instantiation of a networked object by implementing the IPunInstantiateMagicCallback (!) interface, so I wrote an (ugly) script called CubeScript which I attached to my prefab in an attempt to pick up the newly created object and parent it back under the Root object in my scene;

using Photon.Pun;
using UnityEngine;

public class CubeScript : MonoBehaviour, IPunInstantiateMagicCallback
{
    // Called on every client when Photon instantiates this networked object.
    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        // Re-parent the newly created cube under the (anchored) Root object in the scene.
        var parent = GameObject.Find("Root");

        this.transform.SetParent(parent.transform, true);
    }
}

Clearly, I need to come up with a better way of doing that than by using GameObject.Find() but this let me experiment.

I deployed that application to a HoloLens, ran it, created a few cubes, then shut it down and ran it again and, sure enough, the cubes came back where they had originally been created in the real world. My assumption, then, is that they would be visible in the same physical place to a second, third, etc. HoloLens user of the app.

However, I’ve got scripts on this cube which allow the user to translate, rotate and scale these holograms and, as yet, there’s nothing synchronising those changes to the network. That means that if I create a hologram at point x,y,z in physical space and then move it to x1,y1,z1 then another user will not see those changes on their device. Similarly, if I re-run the application on the first device, I will see the hologram back at x,y,z. That needs attention…

Synchronising Hologram Transformations

There seemed to be an obvious way to do this transform sync’ing with Photon and it was calling out to me from the moment that I added the PhotonView script to my prefab;

If I change this “Observed Components” value to point at the object itself then Photon nicely adds a PhotonTransformView for me;

and so it already (via the PhotonTransformView) knows how to synchronise scale, rotate and translate values across networked game objects.

But…I’m not sure that it’s going to do what I want here because, from looking at the script itself, it is set up to synchronise the values of Transform.position, Transform.rotation and Transform.localScale.

The challenge with that is that “world” co-ordinates like this are going to correspond to different physical locations on multiple devices. For my scenario, I have my Root object which is spatially anchored to the same place in the real-world so providing a common “origin” for anything parented under it. That means that I need to then synchronise the co-ordinates of my cubes relative to that Root parent object.

That caused me to look for a “local” flag on PhotonTransformView or perhaps a separate PhotonTransformLocalView or similar and I didn’t find one, so I simply made one by copying the original script and changing all of the references to transform.position and transform.rotation to use localPosition and localRotation instead, and I configured that on my prefab;
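
As a rough sketch of the idea (rather than the full copied script – the real PhotonTransformView also interpolates between received updates, which this omits), the core of such a “local” view boils down to something like;

using Photon.Pun;
using UnityEngine;

// Simplified, illustrative sketch of a "local" transform view - the actual script in
// the project is a copy of Photon's PhotonTransformView with the world-space values
// swapped for local ones.
[RequireComponent(typeof(PhotonView))]
public class PhotonTransformLocalView : MonoBehaviour, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // The owning client sends its transform values relative to the parent
            // (i.e. relative to the anchored Root object)...
            stream.SendNext(this.transform.localPosition);
            stream.SendNext(this.transform.localRotation);
            stream.SendNext(this.transform.localScale);
        }
        else
        {
            // ...and the other clients simply apply what they receive.
            this.transform.localPosition = (Vector3)stream.ReceiveNext();
            this.transform.localRotation = (Quaternion)stream.ReceiveNext();
            this.transform.localScale = (Vector3)stream.ReceiveNext();
        }
    }
}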

I then tested this by running the application on my HoloLens and in the editor at the same time but I noticed an “interesting” thing in that cubes would be created ok but movement would only be sync’d from the device that created them, not from the other device.

I’d kind of expected this as Photon talks a lot about “ownership” of these networked objects and if you look at the description for RequestOwnership on this page then you’ll see that the “owner” of the object is the client that sends updates to position which implies that non-owners do not.

In configuring my PhotonView, I’d tried setting its “Ownership Transfer” option to “Takeover”, intending that anyone could take over ownership of any object they liked, but that wasn’t quite enough to make this work.

Photon Object-Control on Focus

I wasn’t sure whether I could actually tell Photon to “not care” about “ownership” (I suspect not) so, rather than trying to do that, I simply coded around it by calling RequestOwnership on any cube whenever the user focused on it.

So, I modified my CubeScript such that it now looked like;

using Microsoft.MixedReality.Toolkit.Input;
using Photon.Pun;
using UnityEngine;

public class CubeScript : MonoBehaviour, IPunInstantiateMagicCallback, IMixedRealityFocusHandler
{
    public void OnFocusEnter(FocusEventData eventData)
    {
        // Ask Photon to transfer ownership of this object to the local player so that
        // this client's transform updates get sent out to the network.
        var photonView = this.GetComponent<PhotonView>();

        photonView?.RequestOwnership();
    }

    public void OnFocusExit(FocusEventData eventData)
    {
    }

    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        // Re-parent the newly created cube under the (anchored) Root object in the scene.
        var parent = GameObject.Find("Root");

        this.transform.SetParent(parent.transform, true);
    }
}

and this seemed to work fine for my scenario – I could move the cube on the HoloLens and I could move it in the editor and those movements were sync’d to the other device.

However, I noticed another challenge – my cubes still weren’t always where I expected them to be…

If a cube transforms in an empty room…

By this point, I was running through a set of steps as below;

  • Run the app on HoloLens to create the room and the anchor
  • Create some cubes
  • Run up the app on the editor
  • Test to see that cubes could be transformed on the HoloLens and sync’d to the editor and vice versa
  • Quit the app on the HoloLens and re-run it to check that it would join the room, locate the anchor and put the cubes back where I left them
  • Test again to see that cubes could be transformed on the HoloLens and sync’d to the editor and vice versa

and all was good – everything there seemed to work fine.

Where I had a problem though was in the scenario where a user was alone in the room. In that scenario, I found that leaving/joining the room would result in cubes with transforms reset to their starting values – i.e. any transformations that had been performed on the cube since it was created were lost. I would see the same whether I tried this out from the HoloLens or from the editor.

Initially, I thought that this related to Photon automatically clearing out the events associated with a player when they left the room and so I updated my room creation code to set the CleanupCacheOnLeave option to be false;

    public override void OnConnectedToMaster()
    {
        base.OnConnectedToMaster();

        var roomOptions = new RoomOptions();

        // Keep the room alive on the server for a while after the last player has left
        // (EmptyRoomTtl is in milliseconds).
        roomOptions.EmptyRoomTtl = this.emptyRoomTimeToLiveSeconds * 1000;

        // Don't remove this player's cached events/objects when they leave the room.
        roomOptions.CleanupCacheOnLeave = false;

        PhotonNetwork.JoinOrCreateRoom(ROOM_NAME, roomOptions, null);
    }

but this seemed to make no difference.

I spent a little time debugging and ultimately confirmed my thought that Photon does not send these messages out to a room of one player. This is from the PhotonNetworkPart.cs script;

and that (very nice and extremely helpful) comment also told me that these OnSerialize messages aren’t buffered by Photon.

Now, if I’d read between the lines a little more carefully on the Photon documentation page;

Synchronization and State

then I think I would have known this all along because it does point out that RPCs can be buffered but that object synchronizations are not;

“Unlike Object Synchronization, RPCs might be buffered. Any buffered RPC will be sent to players who join later, which can be useful if actions have to be replayed one after another. For example a joining client can replay how someone placed a tool in the scene and how someone else upgraded it. The latter depends on the first action.”

This means that when a user joins the room, they will only get the correct current transforms for any cubes if there is another user in the room sending those transforms out to the network. Additionally, I think this is also dependent on the ViewSynchronization setting – see the piece in that referenced document which talks about “unreliable” versus “unreliable on change” and details when updates are sent to the network.
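
For reference, that setting lives on the PhotonView itself and is normally configured in the inspector; setting it in code would look something like this sketch (assuming PUN2’s Synchronization field and ViewSynchronization enum);

    // Sketch only - this is usually configured on the PhotonView in the inspector.
    var photonView = this.GetComponent<PhotonView>();
    photonView.Synchronization = ViewSynchronization.UnreliableOnChange;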

That’ll teach me to read the manual properly next time 🙂

Frequencies and Buffering…

It’d be fairly easy to replace the functionality that the PhotonTransformLocalView is currently providing for me with Photon RPCs that could be buffered but I might then fall into the trap of having lots (probably too many) RPCs being buffered every time the user scales, rotates or moves an object. That’s not likely to be a great choice – I’d perhaps rather rely on the unbuffered behaviour that I have already.

What I really want is some sort of bufferedLatest option such that buffering happens but only for the last update sent, but I don’t know that Photon has that type of functionality.

So, for the scenario where more than one user is in a room manipulating holograms, I’m going to keep the immediacy offered by my PhotonTransformLocalView.

For the scenario where users leave the room and return, I need to have some other approach and I thought that I would go back to using custom properties on the room with the idea being to;

  • watch for when a user completes a manipulation and use that to update a custom property on the room, using the ID of the PhotonView as the key for the property and a string value representing localPosition, localScale, localRotation as the value of the property.

then, whenever Photon instantiates a cube, I can check to see if this property is present for that cube’s PhotonView ID and, if so, apply these local transform values.

What does that do to my code? Firstly, I set up the scripts on my cube objects such that they handled the end of manipulations.

I did this for my BoundingBox for Rotate Stopped and Scale Stopped;

and I also did it for ManipulationHandler;

Why do this in both places? Because of this MRTK “issue”;

OnManipulationEnded doesn’t fire for rotation or scale
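
As an aside, those events could equally be wired up in code rather than in the inspector – a sketch of that (assuming BoundingBox, ManipulationHandler and the CubeScript all sit on the same GameObject, and using the MRTK v2 event names) might look like;

    // Sketch: wire the 'manipulation ended' style events up in code rather than in the
    // inspector. BoundingBox & ManipulationHandler live in Microsoft.MixedReality.Toolkit.UI.
    void Start()
    {
        var boundingBox = this.GetComponent<BoundingBox>();
        boundingBox.RotateStopped.AddListener(this.OnManipulationEnded);
        boundingBox.ScaleStopped.AddListener(this.OnManipulationEnded);

        var manipulationHandler = this.GetComponent<ManipulationHandler>();
        manipulationHandler.OnManipulationEnded.AddListener(
            eventData => this.OnManipulationEnded());
    }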

With that in place, I can use that event to serialize the local transform and put it into a custom property on the room;

    // The ID of this object's PhotonView, used as the key for the room's custom property.
    string ViewIDAsString => this.GetComponent<PhotonView>().ViewID.ToString();

    public void OnManipulationEnded()
    {
        var photonView = this.GetComponent<PhotonView>();

        if (photonView != null)
        {
            // Serialise the local transform (position, rotation, scale relative to Root)...
            var transformStringValue = LocalTransformToString(this.transform);

            // ...and store it in a custom room property keyed by the PhotonView's ID.
            // (Hashtable here is ExitGames.Client.Photon.Hashtable.)
            PhotonNetwork.CurrentRoom.SetCustomProperties(
                new Hashtable()
                {
                    {  this.ViewIDAsString, transformStringValue }
                }
            );
        }
    }

I’ll spare you the details of the LocalTransformToString method and its StringToLocalTransform counterpart; they’re just capturing local position, rotation and scale into a string and back again.
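
For illustration though, that pair of helpers might look something like the sketch below (an assumed implementation rather than the project’s actual code);

    // Illustrative sketch only - pack the local transform values into a comma-separated
    // string and parse them back out again.
    static string LocalTransformToString(Transform transform)
    {
        var p = transform.localPosition;
        var r = transform.localRotation;
        var s = transform.localScale;

        return string.Join(",",
            p.x, p.y, p.z,
            r.x, r.y, r.z, r.w,
            s.x, s.y, s.z);
    }

    static void StringToLocalTransform(Transform transform, string value)
    {
        var values = System.Array.ConvertAll(value.Split(','), float.Parse);

        transform.localPosition = new Vector3(values[0], values[1], values[2]);
        transform.localRotation = new Quaternion(values[3], values[4], values[5], values[6]);
        transform.localScale = new Vector3(values[7], values[8], values[9]);
    }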

Then, when Photon instantiates a networked cube, I can add a little extra code to the method that I already had for re-parenting it, in order to access the custom property value from the room and use it to put the cube’s transform back to how it was at the last recorded manipulation;

    public void OnPhotonInstantiate(PhotonMessageInfo info)
    {
        // Re-parent under the anchored Root object as before.
        var parent = GameObject.Find("Root");
        this.transform.SetParent(parent.transform, true);

        // Do we have a stored transform for this cube within the room?
        if (PhotonNetwork.CurrentRoom.CustomProperties.Keys.Contains(this.ViewIDAsString))
        {
            var transformStringValue = PhotonNetwork.CurrentRoom.CustomProperties[this.ViewIDAsString] as string;

            // Apply the stored local position, rotation and scale.
            StringToLocalTransform(this.transform, transformStringValue);
        }
    }

and that seems to work out pretty nicely – using the PhotonTransformLocalView for the non-buffered, frequently changing values and using “buffered” custom room properties for values that change less frequently.

Wrapping Up

As always, I learned a few things while putting this post and the previous one together, mainly about Photon, as I don’t have a tonne of familiarity with it.

That said, getting the basics of a shared holographic experience up and running wasn’t too difficult and, if I needed to spin up another example, those learnings would mean that I could get back to it pretty quickly.

I put the Unity project here on github in case you (or a future version of me) wanted to do anything with it – naturally, apply a pinch of salt as I put it together purely for the experiments in this post. Just one note – the keys for Azure Spatial Anchors embedded in that project won’t work; you’ll need to update it to provide your own configuration.
