Following on from the previous post where I’d got to the point where I had an app using Photon which could;
- Connect to the Photon network
- Connect to a (hard-coded) Photon room by name
- Check a property of the room to see if an anchorId had been stored there
- If so, talk to the Azure Spatial Anchors service, download that anchor and locate a Root game object in my scene with the anchor
- If not, create an anchor for the game object Root in my scene and store it to the Azure Spatial Anchor service, getting an ID which can then be added as a property of the room
- Give all users a voice command “cube” to create a cube which is then synchronised with all participants in the room
- Let all users manipulate cubes so as to translate, scale and rotate them and keep those transforms synchronised across users
- Ensure that if users leave/join the room then, within a timeout, the state of the cubes is preserved
it made sense to allow users to remove cubes from the room and it didn’t seem like it would be too difficult to achieve so…
With that in mind, I added a new voice command “Remove” to the profile in the mixed reality toolkit;

and then I wanted this to only be relevant when the user is focused on a cube and so I added the handler for it onto my cube prefab itself, making sure that focus was required;

and I wired that through to a method which simply called PhotonNetwork.Destroy;
public void OnRemove()
{
    this.SetViewIdCustomRoomProperty(null);
    PhotonNetwork.Destroy(this.gameObject);
}
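As an aside, it would also be possible to handle the keyword in code on the cube itself rather than via the inspector wiring. This is just a sketch (the RemoveSpeechHandler name is mine and I’m assuming MRTK V2’s IMixedRealitySpeechHandler interface here) but, as I understand it, because the component lives on the cube rather than being registered globally, it only receives the speech event when the cube has focus — which gives much the same ‘focus required’ behaviour;

```csharp
// Sketch only: a hypothetical component assuming the MRTK V2 input system.
// Because it lives on the cube itself (rather than being registered as a
// global handler), it should only see the event when the cube has focus.
using Microsoft.MixedReality.Toolkit.Input;
using Photon.Pun;
using UnityEngine;

public class RemoveSpeechHandler : MonoBehaviour, IMixedRealitySpeechHandler
{
    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        if (eventData.Command.Keyword.Equals(
            "remove", System.StringComparison.OrdinalIgnoreCase))
        {
            PhotonNetwork.Destroy(this.gameObject);
        }
    }
}
```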
Because I have this set of custom properties (see the previous post for details) which store a Key:Value pair of ViewID:LastRecordedTransform, I also really need to clear out the key for my PhotonView at this point if I am destroying the object. I couldn’t seem to find a method on a Photon Room for deleting or clearing a custom property and so I set the value to null as you can see above, where SetViewIdCustomRoomProperty is just a little function that does;
void SetViewIdCustomRoomProperty(string value)
{
    PhotonNetwork.CurrentRoom.SetCustomProperties(
        new Hashtable()
        {
            { this.ViewIDAsString, value }
        }
    );
}
and, with that, I can delete cubes on one device and see them disappear on another one.
Input Simulation in the Editor & HoloLens 1
A small tip that I’ll pass along at this point is that when working with MRTK V2 but targeting HoloLens 1 I find it useful to switch the input simulation mode in the Unity editor from ‘articulated hands’ mode to ‘gestures’ mode. You can find that setting here;

without that setting, I find that for my HoloLens 1 device target the editor is getting ahead of itself and behaving a little too much like HoloLens 2 😉
Representing the User
It’s pretty common in these shared experiences to have a representation of the other users who are a part of the experience. If they are in the same physical room, it’s common to float something above their heads (e.g. a halo or a model of a HoloLens) whereas if they are in a different physical place then it’s common to bring in some level of avatar. I’ll call it a ‘halo’ for both cases.
Technically, that comes down to displaying the ‘halo’ at the position and orientation of the other user’s camera perhaps with an offset to avoid occluding the user.
This feels very much like the same scenario as what I did in synchronising the cubes but with perhaps a couple of differences;
- the transform of the ‘halo’ does not need to survive the user leaving the room – it leaves the room with the user it represents & so I don’t need to take any steps to preserve it as I did with the cubes.
- a user may not expect to have their own ‘halo’ visible, although you can argue how important that is if (e.g.) it’s floating above their head such that they can’t easily see it 🙂
The easiest way to do this would seem to be to create a ‘halo’ object, make it a child of the main camera in the scene (with an offset) and then synchronise its transform over the network. The only slight challenge is that I would need to take care to synchronise its position relative to the anchored Root object which (unlike my cubes) would not be its parent. That’s because the Root object represents a known place and orientation in the real world.
I found a 3D model of a HoloLens somewhere and made a prefab out of it as below;

I have it configured such that it lives in the Resources folder and I have added both a PhotonView script to it along with a PhotonRelativeTransformView script as you can see in the screenshot above.
What’s a PhotonRelativeTransformView? This is another ‘copy’ of the PhotonTransformView script which I modified to be much simpler in that it takes the name of a GameObject (the relative transform) and then attempts to synchronise just the position and rotation of the ‘halo’ object relative to that object, as below;
namespace Photon.Pun
{
    using UnityEngine;

    [RequireComponent(typeof(PhotonView))]
    public class PhotonRelativeTransformView : MonoBehaviour, IPunObservable
    {
        [SerializeField]
        string relativeTransformGameObjectName;

        GameObject relativeGameObject;

        // Position of this object expressed as an offset from the relative object.
        Vector3 RelativePosition
        {
            get
            {
                return (this.gameObject.transform.position - this.RelativeGameObject.transform.position);
            }
            set
            {
                this.gameObject.transform.position = this.RelativeGameObject.transform.position + value;
            }
        }
        // Rotation of this object expressed relative to the rotation of the relative object.
        Quaternion RelativeRotation
        {
            get
            {
                return (Quaternion.Inverse(this.RelativeGameObject.transform.rotation) * this.transform.rotation);
            }
            set
            {
                this.gameObject.transform.rotation = this.RelativeGameObject.transform.rotation;
                this.gameObject.transform.rotation *= value;
            }
        }
        // Resolved lazily by name so the prefab doesn't need a scene reference.
        GameObject RelativeGameObject
        {
            get
            {
                if (this.relativeGameObject == null)
                {
                    this.relativeGameObject = GameObject.Find(this.relativeTransformGameObjectName);
                }
                return (this.relativeGameObject);
            }
        }
        public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
        {
            if (stream.IsWriting)
            {
                stream.SendNext(this.RelativePosition);
                stream.SendNext(this.RelativeRotation);
            }
            else
            {
                this.RelativePosition = (Vector3)stream.ReceiveNext();
                this.RelativeRotation = (Quaternion)stream.ReceiveNext();
            }
        }
    }
}
With that in play, I added a slot onto my main script (the PhotonScript) to store this Halo Prefab;

and then just used PhotonNetwork.Instantiate to create an instance of that prefab whenever the script first starts up and joins the network. My hope is that if the player leaves the room then Photon will take it away again. I parent that instance off the camera;
public async override void OnJoinedRoom()
{
    base.OnJoinedRoom();

    // Note that the creator of the room also joins the room...
    if (this.roomStatus == RoomStatus.None)
    {
        this.roomStatus = RoomStatus.JoinedRoom;
    }
    await this.PopulateAnchorAsync();

    var halo = PhotonNetwork.Instantiate(this.haloPrefab.name, Vector3.zero, Quaternion.identity);
    halo.transform.SetParent(CameraCache.Main.transform);
}
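On that earlier point about a user perhaps not wanting to see their own ‘halo’ – one option (and this is just a sketch, with a hypothetical HideLocalHalo component name) would be to lean on PUN2’s PhotonView.IsMine flag and disable the renderers on the instance that the local user created, leaving the object alive so that its transform still synchronises to everyone else;

```csharp
// Sketch: hide the halo only on the device that owns it, assuming PUN2's
// PhotonView.IsMine identifies the locally-instantiated copy. Disabling the
// renderers is a local-only change, so remote users still see the halo.
using Photon.Pun;
using UnityEngine;

[RequireComponent(typeof(PhotonView))]
public class HideLocalHalo : MonoBehaviour
{
    void Start()
    {
        if (this.GetComponent<PhotonView>().IsMine)
        {
            foreach (var renderer in this.GetComponentsInChildren<Renderer>())
            {
                renderer.enabled = false;
            }
        }
    }
}
```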
I gave that a very quick test and it seems like (across the HoloLens and the editor at least) it was doing the right thing, as you can see from the capture below taken from the HoloLens;

where the large circle is the HoloLens displaying the position of the other user represented by the Unity editor and the small circle is the editor displaying the position of the HoloLens.
That all seems to work out quite nicely. I’ve updated the repo here. At the time of writing, I’m not sure whether I’ll revisit this again and add anything more but I’ll post it here if I do…