Hitchhiking the HoloToolkit-Unity, Leg 14–More with Spatial Understanding

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

I experimented with Spatial Understanding back in this blog post;

Hitchhiking the HoloToolkit-Unity, Leg 3–Spatial Understanding (& Mapping)

but I’ve done more with it in the meantime and wanted to try and write up a small part of that here.

Firstly, let’s outline a basic project with some pieces within it. In the Unity project in the screenshot below I’ve brought in the HoloToolkit-Unity, applied the HoloLens project and scene settings, and made sure that the microphone and spatial perception capabilities are enabled.

I’ve then brought in the standard SpatialMapping, SpatialUnderstanding and InputManager prefabs.

image

I’ve done little to alter them other than the settings shown below for SpatialMapping;

image

and these settings for SpatialUnderstanding;

image

and then I’ve added to my Placeholder object some components from the toolkit;

image

including a Keyword Manager to handle voice commands, plus the Space Visualizer and App State objects which I brought in to visualise the ‘smoothed’ mesh produced by spatial understanding.

With a small amount of script in place to handle my two voice commands of Scan and Stop;

using HoloToolkit.Unity;
using UnityEngine;

public class Placeholder : MonoBehaviour {


    void Start()
    {
        SpatialUnderstanding.Instance.ScanStateChanged += OnScanStateChanged;
    }
    void OnScanStateChanged()
    {
        if (SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
        {
            this.isScanning = false;
            this.isStopping = false;
        }
    }
    public void OnScan()
    {
        if (!this.isScanning)
        {
            this.isScanning = true;
            SpatialUnderstanding.Instance.RequestBeginScanning();
        }
    }
    public void OnStop()
    {
        if (this.isScanning && !this.isStopping)
        {
            this.isStopping = true;
            SpatialUnderstanding.Instance.RequestFinishScan();
        }
    }
    bool isStopping;
    bool isScanning;
}

 

and that’s enough for me to be able to use Scan and Stop in order to see the smoothed mesh from Spatial Understanding over my environment.

image

and then if I want to (e.g.) find and visualize the largest wall in my environment then I can make use of convenience methods on the SpaceVisualizer class – e.g.

    void OnScanStateChanged()
    {
        if (SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
        {
            this.isScanning = false;
            this.isStopping = false;

            // Use Space Visualizer to find a large wall
            SpaceVisualizer.Instance.Query_Topology_FindLargeWall();
        }
    }

and that method in SpaceVisualizer looks like;

        public void Query_Topology_FindLargeWall()
        {
            ClearGeometry();

            // Only if we're enabled
            if (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
            {
                return;
            }

            // Query
            IntPtr wallPtr = SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(resultsTopology);
            int wallCount = SpatialUnderstandingDllTopology.QueryTopology_FindLargestWall(
                wallPtr);
            if (wallCount == 0)
            {
                AppState.Instance.SpaceQueryDescription = "Find Largest Wall (0)";
                return;
            }

            // Add the line boxes
            float timeDelay = (float)lineBoxList.Count * AnimatedBox.DelayPerItem;
            lineBoxList.Add(
                new AnimatedBox(
                    timeDelay,
                    resultsTopology[0].position,
                    Quaternion.LookRotation(resultsTopology[0].normal, Vector3.up),
                    Color.magenta,
                    new Vector3(resultsTopology[0].width, resultsTopology[0].length, 0.05f) * 0.5f)
            );
            AppState.Instance.SpaceQueryDescription = "Find Largest Wall (1)";
        }

Now, there are some interesting calls in here referencing SpatialUnderstanding.Instance, SpatialUnderstandingDllTopology and SpatialUnderstanding.Instance.UnderstandingDLL, and this produces a result like the one (blurred out) below, where the pink lines represent an edge of the largest wall that the device could see;

image

So, the result is fine but what’s going on with the structure of the code here? I think it’s worth talking about how some of these pieces are layered together because it can be confusing to make use of the spatial understanding pieces, and I forget how it all works each time I come back to this library, so I’m writing these notes down as far as I understand them.

SpatialUnderstanding (the Unity script)

The script named SpatialUnderstanding is a singleton, accessed via the static SpatialUnderstanding.Instance property, and is what showed up in the editor as the SpatialUnderstanding component.

It has properties such as AutoBeginScanning, UpdatePeriod_DuringScanning, UpdatePeriod_AfterScanning and it controls the scanning process via methods like RequestBeginScanning/RequestFinishScan and maintains the current state via the ScanState property. It has an Update method which drives the scanning process.

It’s fairly clear what this does for us but where does the functionality ultimately come from?

SpatialUnderstanding (the native DLL)

Inside of the HoloToolkit (rather than the HoloToolkit-Unity) there is a native SpatialUnderstanding project which looks something like this;

image

This builds out into a Windows 8.1 WinRT DLL and ultimately shows up in Unity under the Plugins folder;

image

and it’s essentially a regular, flat DLL with a number of exports – these are defined in the various Dll_*.h header files so, for example, Dll_Interface.h contains functions like;

	// Init/term
	EXTERN_C __declspec(dllexport) int SpatialUnderstanding_Init();
	EXTERN_C __declspec(dllexport) void SpatialUnderstanding_Term();

	// Scan flow control
	EXTERN_C __declspec(dllexport) void GeneratePlayspace_InitScan(
		float camPos_X, float camPos_Y, float camPos_Z,
		float camFwd_X, float camFwd_Y, float camFwd_Z,
		float camUp_X, float camUp_Y, float  camUp_Z,
		float searchDst, float optimalSize);

and then Dll_Topology.h includes functions such as;

	EXTERN_C __declspec(dllexport) int QueryTopology_FindLargestWall(
		_Inout_ TopologyResult* wall);
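
The calling convention for these flat exports is worth noting: the caller allocates the result structure, the DLL fills it in, and the return value says how many results were written. A minimal sketch of that shape (with made-up types and values of my own, not the toolkit’s real code) would be;

```cpp
#include <cassert>

// Hypothetical stand-in for the toolkit's TopologyResult, which
// really lives in Dll_Interface.h.
struct TopologyResult
{
    float positionX, positionY, positionZ;
    float normalX, normalY, normalZ;
    float width, length;
};

// A fake 'export' following the same convention as the real
// QueryTopology_FindLargestWall - fill in the caller's struct and
// return how many results were written.
int QueryTopology_FindLargestWall(TopologyResult* wall)
{
    if (wall == nullptr)
    {
        return 0;
    }
    wall->positionX = 1.0f; wall->positionY = 0.0f; wall->positionZ = 2.0f;
    wall->normalX = 0.0f;   wall->normalY = 0.0f;   wall->normalZ = 1.0f;
    wall->width = 3.0f;
    wall->length = 2.5f;
    return 1;
}
```

so the managed side’s job is simply to hand across a pointer to memory that it owns and then read the fields back out afterwards.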

Now, calling these functions from C# in Unity is going to require using PInvoke and so…

SpatialUnderstandingDll*.cs in the Toolkit

There are a number of scripts in the toolkit which then provide PInvoke wrappers over the different functional areas exported from the native DLL;

image

and so if I dig into a file like SpatialUnderstandingDll.cs I’ll find there’s a nested, public class called Imports which has PInvoke signatures like;

            [DllImport("SpatialUnderstanding")]
            public static extern int SpatialUnderstanding_Init();

and so that’s how I might call that one function exported from the DLL from my C# code. If I dig into SpatialUnderstandingDllTopology.cs as another example then I’ll find;

        [DllImport("SpatialUnderstanding")]
        public static extern int QueryTopology_FindLargestWall(
            [In, Out] IntPtr wall);             // TopologyResult

and so these provide the wrappers that make these functions callable but there’s another ‘trick’ here…

SpatialUnderstandingDll.cs Script in the Toolkit

When it comes to calling into functions like this one below;

            [DllImport("SpatialUnderstanding")]
            public static extern int QueryPlayspaceStats(
                [In] IntPtr playspaceStats);    // PlayspaceStats

so, what do I provide as an IntPtr here? Well, the same class file defines;

            [StructLayout(LayoutKind.Sequential, Pack = 1)]
            public class PlayspaceStats
            {
                public int IsWorkingOnStats;				// 0 if still working on creating the stats

                public float HorizSurfaceArea;              // In m2 : All horizontal faces UP between Ground - 0.15 and Ground + 1.f (include Ground and convenient horiz surface)
                public float TotalSurfaceArea;              // In m2 : All !
                public float UpSurfaceArea;                 // In m2 : All horizontal faces UP (no constraint => including ground)
                public float DownSurfaceArea;               // In m2 : All horizontal faces DOWN (no constraint => including ceiling)
                public float WallSurfaceArea;               // In m2 : All Vertical faces (not only walls)
                public float VirtualCeilingSurfaceArea;     // In m2 : estimation of surface of virtual Ceiling.
                public float VirtualWallSurfaceArea;        // In m2 : estimation of surface of virtual Walls.

                public int NumFloor;                        // List of Area of each Floor surface (contains count)
                public int NumCeiling;                      // List of Area of each Ceiling surface (contains count)
                public int NumWall_XNeg;                    // List of Area of each Wall XNeg surface (contains count)
                public int NumWall_XPos;                    // List of Area of each Wall XPos surface (contains count)
                public int NumWall_ZNeg;                    // List of Area of each Wall ZNeg surface (contains count)
                public int NumWall_ZPos;                    // List of Area of each Wall ZPos surface (contains count)
                public int NumPlatform;                     // List of Area of each Horizontal not Floor surface (contains count)

                public int CellCount_IsPaintMode;           // Number paint cells (could deduce surface of painted area) => 8cm x 8cm cell
                public int CellCount_IsSeenQualtiy_None;    // Number of not seen cells => 8cm x 8cm cell
                public int CellCount_IsSeenQualtiy_Seen;    // Number of seen cells => 8cm x 8cm cell
                public int CellCount_IsSeenQualtiy_Good;    // Number of seen cells good quality => 8cm x 8cm cell
            };
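
That Pack = 1 in the StructLayout attribute matters because the marshalled bytes have to line up exactly with the native structure. To convince myself of the sizes involved, I can mirror the layout in a packed C++ struct (my own illustration here, not the toolkit’s actual header) – with no padding it’s 19 four-byte fields;

```cpp
#include <cassert>
#include <cstdint>

#pragma pack(push, 1)
// A hand-built mirror of the managed PlayspaceStats class above:
// 1 int + 7 floats + 7 ints + 4 ints, all 4 bytes each.
struct PlayspaceStatsMirror
{
    int32_t IsWorkingOnStats;

    float HorizSurfaceArea;
    float TotalSurfaceArea;
    float UpSurfaceArea;
    float DownSurfaceArea;
    float WallSurfaceArea;
    float VirtualCeilingSurfaceArea;
    float VirtualWallSurfaceArea;

    int32_t NumFloor;
    int32_t NumCeiling;
    int32_t NumWall_XNeg;
    int32_t NumWall_XPos;
    int32_t NumWall_ZNeg;
    int32_t NumWall_ZPos;
    int32_t NumPlatform;

    int32_t CellCount_IsPaintMode;
    int32_t CellCount_IsSeenQualtiy_None;
    int32_t CellCount_IsSeenQualtiy_Seen;
    int32_t CellCount_IsSeenQualtiy_Good;
};
#pragma pack(pop)

// With Pack = 1 there is no padding, so the structure is exactly
// 19 fields x 4 bytes = 76 bytes on both sides of the boundary.
static_assert(sizeof(PlayspaceStatsMirror) == 76, "layout mismatch");
```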

and so that’s good – I guess that I just have to alloc one of these, pin it and pass it across the boundary before unpinning it after the method completes?

The class helps me with that again. Firstly, it has private members like these (there are other pairs following the same pattern);

        private Imports.PlayspaceStats reusedPlayspaceStats = new Imports.PlayspaceStats();
        private IntPtr reusedPlayspaceStatsPtr;

and then it provides a method like this one;

        public Imports.PlayspaceStats GetStaticPlayspaceStats()
        {
            return reusedPlayspaceStats;
        }

and so rather than having to make my own instance of PlayspaceStats I can just ‘borrow’ this one when I need it – e.g.;

            var ptr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
            SpatialUnderstandingDll.Imports.QueryPlayspaceStats(ptr);

and the containing SpatialUnderstandingDll class also has PinObject/PinString methods which both pin an object and put it onto a list which can later be cleared via UnpinAllObjects, so it’s essentially helping with the mechanics involved in calling the underlying DLL.

Extending Functionality

I was looking into the library here because I wanted to extend the functionality provided.

There’s lots of different functionality supported, including;

  • Getting hold of meshes
  • Raycasting to determine what type of object the user is looking at (wall, ceiling, etc)
  • Getting the space alignment, floor, ceilings etc.
  • Creating objects of specific sizes on walls, floors, ceilings etc. or randomly away from those obstacles.
  • Finding positions for objects on walls, ceilings, etc.

and quite a lot more. What I wanted, though, was a simple list of the walls that the library has found within my environment and I didn’t find a method in the library that already did this. So, I thought it might be useful to step through how I added that functionality – for my own purposes in the future and for anyone else who wants to do something similar.

The functionality relating to walls seems to reside in these classes in the native SpatialUnderstanding project;

image

and so the first thing I did was to visit TopologyAnalyzer_W.h and add a new method signature;


	void	GetWalls(Vec3fDA& _outPos, Vec3fDA& _outNormal, 
		FloatDA& _outWidths, FloatDA& _outLengths, Bool _bAllowVirtualWall = TRUE);

and then I added an implementation of this in the .cpp file which essentially just copies the centre, normal, width and height of each wall out of the array in which they reside;

void TopologyAnalyzer_W::GetWalls(Vec3fDA& _outPos, Vec3fDA& _outNormal,
	FloatDA& _outWidths, FloatDA& _outLengths, Bool _bAllowVirtualWall)
{
	for (S32 w = 0; w<m_daWalls.GetSize(); w++)
	{
		Wall& wall = m_daWalls[w];

		if (wall.m_bIsVirtual && !_bAllowVirtualWall)
			continue;

		_outPos.Add(wall.m_vCentroid);
		_outNormal.Add(wall.m_vNormal);
		_outLengths.Add(wall.m_fHeight);
		_outWidths.Add(wall.m_fWidth);
	}
}
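
Stripped of the engine’s container types, that routine is just ‘skip virtual walls if asked to, then copy each wall’s fields into parallel output arrays’. The same pattern with standard containers (and hypothetical Wall/Vec3 stand-ins for the engine’s types) would look something like;

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in for the engine's internal Wall type.
struct Wall
{
    Vec3 centroid;
    Vec3 normal;
    float width;
    float height;
    bool isVirtual;
};

// The same copy-out pattern as TopologyAnalyzer_W::GetWalls above,
// expressed with std::vector outputs.
void GetWalls(
    const std::vector<Wall>& walls,
    std::vector<Vec3>& outPos,
    std::vector<Vec3>& outNormal,
    std::vector<float>& outWidths,
    std::vector<float>& outLengths,
    bool allowVirtualWalls = true)
{
    for (const Wall& wall : walls)
    {
        // Optionally filter out 'virtual' walls that the analyzer
        // inferred rather than observed.
        if (wall.isVirtual && !allowVirtualWalls)
            continue;

        outPos.push_back(wall.centroid);
        outNormal.push_back(wall.normal);
        outWidths.push_back(wall.width);
        outLengths.push_back(wall.height);  // note: height goes into 'lengths'
    }
}
```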

With that in place, I can modify the Dll_Topology.h and .cpp files to expose that new function;

	EXTERN_C __declspec(dllexport) int QueryTopology_FindWalls(
		_In_ int locationCount,
		_Inout_ TopologyResult* locationData);

and;

EXTERN_C __declspec(dllexport) int QueryTopology_FindWalls(
	_In_ int locationCount,
	_Inout_ Dll_Interface::TopologyResult* locationData)
{
	UnderstandingMgr_W &UnderstandingMgr = UnderstandingMgr_W::GetUnderstandingMgr();

	Vec3fDA outPos, outNormal;
	FloatDA outWidths, outLengths;

	UnderstandingMgr.GetPlayspaceInfos().m_TopologyAnalyzer.GetWalls(
		outPos, outNormal, outWidths, outLengths, FALSE);

	return(OutputLocations(locationCount, locationData, outPos, outNormal, outWidths,
		outLengths));
}

and that leans heavily on the existing OutputLocations function within that class.
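
I haven’t reproduced OutputLocations here, but its essential job is to copy up to locationCount results into the caller’s buffer and return how many were actually written. A sketch of that clamping behaviour (my own guess at the shape, with stand-in types rather than the toolkit’s code);

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical result type standing in for Dll_Interface::TopologyResult.
struct WallResult
{
    float px, py, pz;   // position
    float nx, ny, nz;   // normal
    float width, length;
};

// Copy at most locationCount results into the caller-allocated buffer
// and return the number actually written - the caller can't be
// overflowed even if more walls were found than it made room for.
int OutputLocations(
    int locationCount,
    WallResult* locationData,
    const std::vector<WallResult>& found)
{
    int count = std::min(locationCount, static_cast<int>(found.size()));
    for (int i = 0; i < count; i++)
    {
        locationData[i] = found[i];
    }
    return count;
}
```

which is why the C# caller passes in both the array pointer and the array’s length.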

I can then alter the PInvoke wrapper in SpatialUnderstandingDllTopology.cs in order to surface this new API;

        [DllImport("SpatialUnderstanding")]
        public static extern int QueryTopology_FindWalls(
            [In] int locationCount,             // Pass in the space allocated in locationData
            [In, Out] IntPtr locationData);     // TopologyResult

and then I’m set up to make that call, although naturally I need to recompile my C++ code, take the resultant DLL and ensure that Unity picks it up instead of the one that ships in the HoloToolkit-Unity by default.

Making Use of the Extension

I can then perhaps try this new ‘FindWalls’ functionality out by adding a method to the existing SpaceVisualizer to make this call and visualize it with rectangles;

        public void Query_Topology_FindWalls(int top)
        {
            // Only if we're enabled
            if (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
            {
                return;
            }
            var resultsTopology = new SpatialUnderstandingDllTopology.TopologyResult[128];

            IntPtr resultsTopologyPtr =
                SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(resultsTopology);
            
            int locationCount = SpatialUnderstandingDllTopology.QueryTopology_FindWalls(
                resultsTopology.Length,
                resultsTopologyPtr);

            if (locationCount != 0)
            {
                // Only the first locationCount entries have been filled in, so
                // order those by area and take (at most) the 'top' largest.
                var rects = resultsTopology
                    .Take(locationCount)
                    .OrderByDescending(r => r.width * r.length)
                    .Take(Math.Min(top, locationCount));

                foreach (var rect in rects)
                {
                    float timeDelay = (float)lineBoxList.Count * AnimatedBox.DelayPerItem;

                    lineBoxList.Add(
                        new AnimatedBox(
                            timeDelay,
                            rect.position,
                            Quaternion.LookRotation(rect.normal, Vector3.up),
                            Color.blue,
                            new Vector3(rect.width, rect.length, 0.05f) * 0.5f));
                }
            }
        }

and I could then call that from my original code when the status of the scanning goes –> Done;

    void OnScanStateChanged()
    {
        if (SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
        {
            this.isScanning = false;
            this.isStopping = false;

            // Use Space Visualizer to find a large wall
            SpaceVisualizer.Instance.Query_Topology_FindWalls(10);

        }
    }

and that produces some ‘interesting’ results in giving me walls around this chair;

image

but, naturally, I could then filter by size or by angle to remove some of those from my result set. The main thing is that I’ve managed to get the data that I wanted out of the native code and into my C# code so that I can work on it further.

It’s possible that I’ve missed a “get all walls” method in here so feel free to comment and let me know if that’s the case. In the meantime, I thought I’d write up these rough notes as I know I’ll be coming back to read them myself when I revisit spatial understanding again 🙂

Hitchhiking the HoloToolkit-Unity, Leg 13–Continuing with Shared Experiences

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Continuing with what I’d published in this previous post, I wanted to add some more functionality to the shared experience that I’ve been experimenting with and so I’m going to take this (and the codebase) further in this post.

The first thing that I wanted to add was the ability to download the model that’s going to be viewed from a web server rather than have it hard-baked into the application’s binary.

The ‘Unity’ way of doing this seems to be to use asset bundles and so, in order to achieve this, I combined what I’d learned in this post into my code base such that my project now contains the code that’s needed to both;

  1. Build asset bundles
  2. Download asset bundles at runtime

Note that (as per that post) this involves me changing Unity’s demo scripts somewhat to work with HoloLens (only tiny changes as it happens). Wherever I’ve done that, I’ve used a #if MIKET_CHANGE to flag what I’ve been doing to someone else’s script.

This code now lives here in my project;

image

and I have both the scripts for runtime and the scripts for the Editor so that I can build the asset bundle for the model from within my project.

Loading the Model from an Asset Bundle at Startup

With those scripts added, I needed to do a little work to get this dynamic model loading implemented and so I made a prefab out of the model, took it out of my scene and made it into an asset bundle;

image

and then built that bundle;

image

and deployed it to my test web server in Azure;

image

I made a small structural change in that I moved my ‘Coordinator’ script (which handles the essential ‘flow’ of the app) so that it was a component of the SharedObjects game object rather than a standalone script;

image

and I added a new script here which I called ‘Bundle Downloader’. It’s intended to be the bare bones of loading up an asset from a bundle on a web server, leaning very heavily on my earlier post and the code that Unity ships as part of their demo project, with some very basic support for providing a fallback prefab for the case where the download doesn’t succeed or the developer has simply switched it off in the editor;

using AssetBundles;
using HoloToolkit.Unity;
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BundleDownloadedEventArgs : EventArgs
{
  public bool DownloadSucceeded { get; set; }
}

public class BundleDownloader : Singleton<BundleDownloader>
{
  [SerializeField]
  string downloadUrl;

  [SerializeField]
  string bundleName;

  [SerializeField]
  string prefabName;

  [SerializeField]
  GameObject fallbackPrefab;

  [SerializeField]
  bool isActive = true;

  public GameObject LoadedPrefab
  {
    get; set;
  }

  public event EventHandler<BundleDownloadedEventArgs> Downloaded;

  public void StartAsyncDownload()
  {
    StartCoroutine(this.DownloadAsync());
  }
  IEnumerator DownloadAsync()
  {
    var prefabObject = this.fallbackPrefab;
    var succeeded = false;

#if !UNITY_EDITOR

    if (this.isActive &&
      !string.IsNullOrEmpty(this.downloadUrl) &&
      !string.IsNullOrEmpty(this.bundleName) &&
      !string.IsNullOrEmpty(this.prefabName))
    {
      AssetBundleManager.SetSourceAssetBundleURL(this.downloadUrl);

      var initializeOperation = AssetBundleManager.Initialize();

      if (initializeOperation != null)
      {
        yield return StartCoroutine(initializeOperation);

        AssetBundleLoadAssetOperation loadOperation = null;

        try
        {
          loadOperation = AssetBundleManager.LoadAssetAsync(
            this.bundleName, this.prefabName, typeof(GameObject));
        }
        catch
        {
          // Swallow any failure here - we fall back to fallbackPrefab below.
        }
        if (loadOperation != null)
        {
          yield return StartCoroutine(loadOperation);

          var loadedPrefab = loadOperation.GetAsset<GameObject>();

          if (loadedPrefab != null)
          {
            prefabObject = loadedPrefab;
            succeeded = true;
          }
        }
      }
    }
#else
    succeeded = true;
#endif

    this.LoadedPrefab = prefabObject;

    if (this.Downloaded != null)
    {
      this.Downloaded(
        this, new global::BundleDownloadedEventArgs()
        {
          DownloadSucceeded = succeeded
        }
      );
    }
    yield break;
  }
}

and I set that up in the Unity editor such that its properties were set to point to the right URLs etc;

image

and I then modified my Coordinator script such that it had two new status values, represented at the top of the enum here, and I also modified the status text that is displayed on start up;

 enum CurrentStatus
  {
    Start,
    WaitingForModelToLoad,
    WaitingToConnectToStage,
    WaitingForRoomApiToStabilise,
    WaitingForModelPositioning,
    WaitingForWorldAnchorExport,
    WaitingForWorldAnchorImport
  }
  void Start()
  {
    StatusTextDisplay.Instance.SetStatusText("waiting for model to load");
  }

and then added a case to my switch statement to check for that Start status, move the app on to the ‘waiting for model to load’ status and kick off the download;

        case CurrentStatus.Start:
          this.MoveToStatus(CurrentStatus.WaitingForModelToLoad);

          Debug.Log("Coordinator: starting to load model from web server");
          StatusTextDisplay.Instance.SetStatusText("loading model from web server");

          this.GetComponent<BundleDownloader>().Downloaded += this.OnModelDownloaded;
          this.GetComponent<BundleDownloader>().StartAsyncDownload();
          break;

and so this code now gets hold of the new BundleDownloader, asks it to download the model from the web server and when that completes we get this Downloaded event which we handle by adding;

  void OnModelDownloaded(object sender, BundleDownloadedEventArgs e)
  {
    var bundleDownloader = this.GetComponent<BundleDownloader>();

    bundleDownloader.Downloaded -= this.OnModelDownloaded;

    Debug.Log(
      string.Format(
        "Coordinator: download of model from web server has completed and {0}",
        e.DownloadSucceeded ? "succeeded" : "failed or wasn't tried"));

    StatusTextDisplay.Instance.SetStatusText(
      string.Format(
        "{0} model from web server",
        e.DownloadSucceeded ? "loaded" : "failed to load"));

    // Create the model and parent it off this object.
    this.model = Instantiate(bundleDownloader.LoadedPrefab);
    this.model.transform.parent = this.modelParent.transform;

    // Move the world locked parent so that it's in a 'reasonable'
    // place to start with
    this.modelParent.transform.SetPositionAndRotation(
      WORLD_LOCKED_STARTING_POSITION, Quaternion.identity);

    Debug.Log("Coordinator: waiting for network connection");

    StatusTextDisplay.Instance.SetStatusText("connecting to room server");

    this.MoveToStatus(CurrentStatus.WaitingToConnectToStage);
  }

with that in place, I now have the same app as previously but with the addition that this version dynamically loads its model from a web server – as in the screenshot below;

image

followed by the model arriving from the web;

image

Adding More to the Remote Head Manager (the T-shirt)

In the existing code, I have the idea that users can be in different rooms based on the name of their WiFi network.

The flow is something like;

  • A room is created if there is not a room on the server for the WiFi network name.
  • The first user into the room positions the model.
  • Subsequent users into the room see the model already positioned by the first user.
  • Users across all rooms can see the HoloLens position and movement of other users regardless of room.

This was working in the previous blog post but I wanted a better representation of a HoloLens user that was remote and so I found a simple model of a T-shirt out on the web;

image

and then I modified the Remote Head Manager script primarily to add another GameObject for the body to be displayed set to be this T-shirt below;

image

and I modified the script in order to support both a head and a body object for a user that is not determined to be in the same room as the user running the app. I’ve included the entire script here but it’s perhaps better to check the github repo for the details;

#define MIKET_CHANGE
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Unity.InputModule;

namespace HoloToolkit.Sharing.Tests
{
  /// <summary>
  /// Broadcasts the head transform of the local user to other users in the session,
  /// and adds and updates the head transforms of remote users.
  /// Head transforms are sent and received in the local coordinate space of the GameObject this component is on.
  /// </summary>
  public class RemoteHeadManager : Singleton<RemoteHeadManager>
  {
    public class RemoteHeadInfo
    {
      public long UserID;
      public GameObject HeadObject;
#if MIKET_CHANGE
      public GameObject BodyObject;
#endif     
    }

#if MIKET_CHANGE
    public GameObject remoteHeadPrefab;
    public GameObject remoteBodyPrefab;
#endif

    /// <summary>
    /// Keep a list of the remote heads, indexed by XTools userID
    /// </summary>
    private Dictionary<long, RemoteHeadInfo> remoteHeads = new Dictionary<long, RemoteHeadInfo>();

#if MIKET_CHANGE
    private void OnEnable()
    {
      this.roomId = -1;

      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] =
        UpdateHeadTransform;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#else
    private void Start()
    {
      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] = UpdateHeadTransform;

      // SharingStage should be valid at this point, but we may not be connected.
      if (SharingStage.Instance.IsConnected)
      {
        Connected();
      }
      else
      {
        SharingStage.Instance.SharingManagerConnected += Connected;
      }
    }
    private void Connected(object sender = null, EventArgs e = null)
    {
      SharingStage.Instance.SharingManagerConnected -= Connected;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#endif

    private void Update()
    {
#if MIKET_CHANGE
      this.DetermineCurrentRoom();
#endif
      // Grab the current head transform and broadcast it to all the other users in the session
      Transform headTransform = Camera.main.transform;

      // Transform the head position and rotation from world space into local space
      Vector3 headPosition = transform.InverseTransformPoint(headTransform.position);

      Quaternion headRotation = Quaternion.Inverse(transform.rotation) * headTransform.rotation;

#if MIKET_CHANGE
      CustomMessages.Instance.SendHeadTransform(headPosition, headRotation,
         this.roomId);
#endif
    }
#if MIKET_CHANGE
    void DetermineCurrentRoom()
    {
      if (this.roomId == -1)
      {
        var roomManager = SharingStage.Instance.Manager.GetRoomManager();

        if (roomManager != null)
        {
          var room = roomManager.GetCurrentRoom();
          this.roomId = room.GetID();
        }
      }
    }
#endif

    protected override void OnDestroy()
    {
      if (SharingStage.Instance != null)
      {
        if (SharingStage.Instance.SessionUsersTracker != null)
        {
          SharingStage.Instance.SessionUsersTracker.UserJoined -= UserJoinedSession;
          SharingStage.Instance.SessionUsersTracker.UserLeft -= UserLeftSession;
        }
      }

      base.OnDestroy();
    }

    /// <summary>
    /// Called when a new user is leaving the current session.
    /// </summary>
    /// <param name="user">User that left the current session.</param>
    private void UserLeftSession(User user)
    {
      int userId = user.GetID();
      if (userId != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        RemoveRemoteHead(remoteHeads[userId].HeadObject);
#if MIKET_CHANGE
        // Clean up the body object that was created alongside the head.
        Destroy(remoteHeads[userId].BodyObject);
#endif
        remoteHeads.Remove(userId);
      }
    }

    /// <summary>
    /// Called when a user is joining the current session.
    /// </summary>
    /// <param name="user">User that joined the current session.</param>
    private void UserJoinedSession(User user)
    {
      if (user.GetID() != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        GetRemoteHeadInfo(user.GetID());
      }
    }

    /// <summary>
    /// Gets the data structure for the remote users' head position.
    /// </summary>
    /// <param name="userId">User ID for which the remote head info should be obtained.</param>
    /// <returns>RemoteHeadInfo for the specified user.</returns>
    public RemoteHeadInfo GetRemoteHeadInfo(long userId)
    {
      RemoteHeadInfo headInfo;

      // Get the head info if its already in the list, otherwise add it
      if (!remoteHeads.TryGetValue(userId, out headInfo))
      {
        headInfo = new RemoteHeadInfo();
        headInfo.UserID = userId;
        headInfo.HeadObject = CreateRemoteHead();

#if MIKET_CHANGE
        headInfo.BodyObject = Instantiate(this.remoteBodyPrefab);
        headInfo.BodyObject.transform.parent = this.gameObject.transform;
#endif
        remoteHeads.Add(userId, headInfo);
      }

      return headInfo;
    }

    /// <summary>
    /// Called when a remote user sends a head transform.
    /// </summary>
    /// <param name="msg"></param>
    private void UpdateHeadTransform(NetworkInMessage msg)
    {
      // Parse the message
      long userID = msg.ReadInt64();

      Vector3 headPos = CustomMessages.Instance.ReadVector3(msg);

      Quaternion headRot = CustomMessages.Instance.ReadQuaternion(msg);

#if MIKET_CHANGE
      long remoteRoomId = msg.ReadInt64();
#endif

      RemoteHeadInfo headInfo = GetRemoteHeadInfo(userID);
      headInfo.HeadObject.transform.localPosition = headPos;
      headInfo.HeadObject.transform.localRotation = headRot;

#if MIKET_CHANGE
      var rayLength = maxRayDistance;

      RaycastHit hitInfo;

      if (Physics.Raycast(
        headInfo.HeadObject.transform.position,
        headInfo.HeadObject.transform.forward,
        out hitInfo))
      {
        rayLength = hitInfo.distance;
      }
      var lineRenderer = headInfo.HeadObject.GetComponent<LineRenderer>();
      lineRenderer.SetPosition(1, Vector3.forward * rayLength);

      if ((remoteRoomId == -1) || (this.roomId == -1) ||
        (remoteRoomId != this.roomId))
      {
        headInfo.BodyObject.SetActive(true);
        headInfo.BodyObject.transform.localPosition = headPos;
        headInfo.BodyObject.transform.localRotation = headRot;
      }
      else
      {
        headInfo.BodyObject.SetActive(false);
      }
#endif
    }

    /// <summary>
    /// Creates a new game object to represent the user's head.
    /// </summary>
    /// <returns></returns>
    private GameObject CreateRemoteHead()
    {
      GameObject newHeadObj = Instantiate(this.remoteHeadPrefab);
      newHeadObj.transform.parent = gameObject.transform;

#if MIKET_CHANGE
      this.AddLineRenderer(newHeadObj);
#endif
      return newHeadObj;
    }
#if MIKET_CHANGE
    void AddLineRenderer(GameObject headObject)
    {
      var lineRenderer = headObject.AddComponent<LineRenderer>();
      lineRenderer.useWorldSpace = false;
      lineRenderer.startWidth = 0.01f;
      lineRenderer.endWidth = 0.05f;
      lineRenderer.positionCount = 2;
      lineRenderer.SetPosition(0, Vector3.forward * 0.1f);
      var material = new Material(Shader.Find("Diffuse"));
      material.color = colors[this.colorIndex++ % colors.Length];

      lineRenderer.material = material;
    }
#endif

    /// <summary>
    /// When a user has left the session this will cleanup their
    /// head data.
    /// </summary>
    /// <param name="remoteHeadObject"></param>
    private void RemoveRemoteHead(GameObject remoteHeadObject)
    {
      DestroyImmediate(remoteHeadObject);
    }
#if MIKET_CHANGE
    long roomId;
    const float maxRayDistance = 5.0f;
    int colorIndex;
    static Color[] colors =
    {
      Color.red,
      Color.green,
      Color.blue,
      Color.cyan,
      Color.magenta,
      Color.yellow
    };
#endif
  }
}

Most of the modifications here are aimed at figuring out whether a remote user is present in the same room (i.e. on the same WiFi network) as the current user. This seems to work ok but I’d not be surprised to find bugs and, with the way that I’m doing it currently, you can certainly see a T-shirt displayed for a local user up until the point where the code realises that it really is a local user.
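The decision driving that T-shirt behaviour is the small room-id comparison at the end of UpdateHeadTransform. Pulled out on its own as a sketch (with hypothetical names, not code from the toolkit), it amounts to;

```csharp
static class RoomPresence
{
    // -1 is used in the code above to mean "room id not known yet".
    public const long UnknownRoomId = -1;

    // Show the remote user's body (the T-shirt) unless we can prove they
    // are in the same room as the local user. This is why a local user's
    // T-shirt can flash up briefly: until both room ids are known, the
    // predicate errs on the side of showing the body.
    public static bool ShouldShowRemoteBody(long localRoomId, long remoteRoomId)
    {
        return (remoteRoomId == UnknownRoomId) ||
               (localRoomId == UnknownRoomId) ||
               (remoteRoomId != localRoomId);
    }
}
```

and so the body prefab is only hidden once both ends of the conversation have reported a matching room id.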

The net effect of all this looks something like the picture below and, currently, I move and rotate the T-shirt just as much as the head and that’s an area that could definitely be refined;

image

Moving the Model After It Has Been Positioned

Previously, the app has allowed one user to position a model in a room, say the word “lock” and then anchor that model such that another user can view it at the same place in the same room or remotely from another room.

What I haven’t previously made possible was the idea that the users might be able to continue to manipulate the model after it has been placed into the room.

In order to progress this, I need to modify the current code base because right now the ModelParent object takes on two roles;

image

The first role is that this object is the one which ultimately becomes world anchored to provide a common parent for users to view the same content.

The second role is that this object is the one with the “User Moveable” script on it allowing the user to move it around and then say “lock” to world anchor it.

These two roles are compatible in a situation where I no longer want to move the model once it has been “locked”. However, if the model is to move after that locking process then I need to make changes because the world anchor stops it responding to movement.

One way around this is to dynamically add/remove the world anchor but I preferred what seemed like a simpler approach which is to leave this code alone and, instead, to dynamically add a “User Moveable” behaviour to the model as it is created, and so I modified my coordinator script to do this.

I have code that runs as and when the world anchor is imported from or exported to the server and so it seemed sensible to dynamically add that “User Moveable” behaviour at this point;

  void OnImportOrExportCompleted(bool succeeded)
  {
    StatusTextDisplay.Instance.SetStatusText("room in sync");

    // Allow the child to be moved now that it is positioned in
    // the right place.
    this.model.AddComponent<UserMoveable>();

    // Switch on the remote head management.
    this.modelParent.GetComponent<RemoteHeadManager>().enabled = true;
  }

and so now I have solved perhaps “half” the problem in that the model can be moved once it has been locked.

However, those movements won’t be reflected on any of the other devices in the room. That needs more of a change…

Baby Steps Into the SyncModel

To synchronize the movement of the model, I need a component which will monitor its transform, note any changes and then send them over the network to other devices to be applied locally.

I’ve seen this sort of behaviour before in the SyncSpawnManager and PrefabSpawnManager pieces, which I wrote a little about in this post. From unpicking that code, I know that the HoloToolkit-Unity has a component named TransformSynchronizer which does exactly this work, in conjunction with the SyncTransform class;

image

Here, the TransformSynchronizer does the monitoring of the data held by the underlying SyncTransform, which the code refers to as ‘the data model’. SyncTransform derives from SyncObject, which lives within the whole SyncModel part of the toolkit;

image

At the time of writing, I don’t claim to understand every last detail of how this ‘synchronized data model’ part of the sharing toolkit works, but I do now know that the SharingStage component which groups up the networking functionality has a root object;

image

and that’s of type SyncRoot; it holds the data (broken down into SyncObject-derived types) which is to be sync’d across devices, with change notification support as you’d expect.
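One way I found to build a mental model of this: each piece of sync’d data is essentially a value plus a change notification that a synchronizer component can watch and replicate over the network. The little sketch below is my own illustration in plain C# of that shape, not code from the toolkit;

```csharp
using System;

// Illustrative only: a tiny 'synced' primitive in the spirit of the
// toolkit's sync model - a value with change notification which a
// synchronizer component could observe and replicate to other devices.
class SyncedValue<T>
{
    T value;

    // A TransformSynchronizer-style component would hook this event
    // and push the new value over the network.
    public event Action<T> Changed;

    public T Value
    {
        get { return value; }
        set
        {
            // Only notify on a genuine change - no point sending
            // identical data across the wire.
            if (!Equals(this.value, value))
            {
                this.value = value;
                var handler = Changed;
                if (handler != null) handler(value);
            }
        }
    }
}
```

The real SyncObject hierarchy is considerably richer (ownership, element names, nested objects) but the “data plus change notification” core is the part that matters for what follows.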

I noticed that the Prefab Spawn Manager has some data in this SyncRoot which it uses to synchronize the list of objects that have been spawned across devices via its various routines;

image

In this particular project, I’m not using the Prefab Spawn Manager because I am not dynamically adding objects into the scene once they are loaded from the web server.

However, I figure that I can use the same mechanism that the Prefab Spawn Manager uses in order to try and keep the transformations on my model in sync and so I added a property here;

#define MIKET_CHANGE
//
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.
//

using HoloToolkit.Sharing.Spawning;
using HoloToolkit.Sharing.SyncModel;

namespace HoloToolkit.Sharing
{
  /// <summary>
  /// Root of the synchronization data model used by this application.
  /// </summary>
  public class SyncRoot : SyncObject
  {
#if MIKET_CHANGE
    [SyncData]
    public SyncSpawnedObject ModelObject;
#endif

    /// <summary>
    /// Children of the root.
    /// </summary>
    [SyncData]
    public SyncArray<SyncSpawnedObject> InstantiatedPrefabs;

    /// <summary>
    /// Constructor.
    /// </summary>
    /// <param name="rootElement">Root Element from Sharing Stage</param>
    public SyncRoot(ObjectElement rootElement)
    {
      Element = rootElement;
      FieldName = Element.GetName().GetString();
      InitializeSyncSettings();
      InitializeDataModel();
    }

    private void InitializeSyncSettings()
    {
      SyncSettings.Instance.Initialize();
    }

    /// <summary>
    /// Initializes any data models that need to have a local state.
    /// </summary>
    private void InitializeDataModel()
    {
      InstantiatedPrefabs.InitializeLocal(Element);

#if MIKET_CHANGE
      this.ModelObject.InitializeLocal(Element);
#endif
    }
  }
}

and I wanted to see if I could then make use of this to synchronize the model after it had already been created.

I modified the code in my Coordinator script so as to make sure that this was set up appropriately at the point when the world-anchored object was either exported from or imported into the current device. That changed the existing method that dealt with this to;

  void OnImportOrExportCompleted(bool succeeded)
  {
    StatusTextDisplay.Instance.SetStatusText("room in sync");

    // Allow the child to be moved now that it is positioned in
    // the right place.
    this.model.AddComponent<UserMoveable>();

    var dataModel = SharingStage.Instance.Root.ModelObject;
    dataModel.GameObject = this.model.gameObject;
    dataModel.Initialize(this.model.gameObject.name, this.model.transform.GetFullPath());
    dataModel.Transform.Position.Value = this.model.transform.localPosition;
    dataModel.Transform.Rotation.Value = this.model.transform.localRotation;
    dataModel.Transform.Scale.Value = this.model.transform.localScale;

    var synchronizer = this.model.EnsureComponent<TransformSynchronizer>();
    synchronizer.TransformDataModel = dataModel.Transform;

    // Switch on the remote head management.
    this.modelParent.GetComponent<RemoteHeadManager>().enabled = true;
  }

and that seemed to enable the experience that I was looking for as demonstrated in this little test video below;

and so that works reasonably well but it led me to think that there might be another option here where a user might want to move the four cubes of the model as a single group or, alternatively, as four separate objects.

Moving the Parent or the Children

In the solution at this point, I have a model (ultimately served from a web server) and that model is treated as a single thing. There’s a box collider wrapped around it so that it can be ‘hit tested’ and all the cubes behave as a group.

image

But there might be an occasion where a user wanted to treat each individual cube separately, as the cubes also have colliders on them. That could be taken further by working recursively and letting the user choose which level of objects to manipulate, but that might be going “a little far” for this blog post. So I’ve chosen just to think about the direct children of the model, and I’ve made an assumption that the model designer will be kind and will have put colliders onto them for my code to hit against.

I don’t necessarily want to lose the original ‘grouped’ mode of operation though and so I figured it might be nice to have that be the default and then add some voice command (‘split’) which switches into the child-focused mode. At the time of writing, I haven’t attempted to implement the idea of reversing this choice as it’s a bit more tricky.

With that in mind, I decided that my approach would be to make the model and its direct children all capable of being moved in a trackable way that’s synchronized across devices. On that basis, the act of switching from ‘group mode’ to ‘child mode’ simply involves taking the collider off the model object in order to let the child colliders ‘come through’.

To make these changes, I first went and changed my SyncRoot again such that I had a SyncArray rather than a single SyncSpawnedObject instance;

#define MIKET_CHANGE
//
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.
//

using HoloToolkit.Sharing.Spawning;
using HoloToolkit.Sharing.SyncModel;

namespace HoloToolkit.Sharing
{
  /// <summary>
  /// Root of the synchronization data model used by this application.
  /// </summary>
  public class SyncRoot : SyncObject
  {
#if MIKET_CHANGE
    [SyncData]
    public SyncArray<SyncSpawnedObject> ModelObjects;
#endif

    /// <summary>
    /// Children of the root.
    /// </summary>
    [SyncData]
    public SyncArray<SyncSpawnedObject> InstantiatedPrefabs;

    /// <summary>
    /// Constructor.
    /// </summary>
    /// <param name="rootElement">Root Element from Sharing Stage</param>
    public SyncRoot(ObjectElement rootElement)
    {
      Element = rootElement;
      FieldName = Element.GetName().GetString();
      InitializeSyncSettings();
      InitializeDataModel();
    }

    private void InitializeSyncSettings()
    {
      SyncSettings.Instance.Initialize();
    }

    /// <summary>
    /// Initializes any data models that need to have a local state.
    /// </summary>
    private void InitializeDataModel()
    {
      InstantiatedPrefabs.InitializeLocal(Element);

#if MIKET_CHANGE
      this.ModelObjects.InitializeLocal(Element);
#endif
    }
  }
}

and then I modified the code which runs when the world anchor is either imported or exported to populate this array in a very similar way to the single-object case. Note that I made the slightly dubious decision to store both the parent model and all of its children in the same array which would likely lead some future maintainer (i.e. me) to make ‘off by 1’ type errors here;

  void OnImportOrExportCompleted(bool succeeded)
  {
    StatusTextDisplay.Instance.SetStatusText(
      string.Format("room import/export {0}", succeeded ? "succeeded" : "failed"));

    if (succeeded)
    {
      var isExporter = this.currentStatus == CurrentStatus.WaitingForWorldAnchorExport;

      StatusTextDisplay.Instance.SetStatusText("room is now in sync");

      // First, make sure the model itself is set up to be moveable in a trackable way.
      // NB: We use index 0 for the model itself.
      this.MakeModelPartMoveableAndTrackable(this.model, 0, isExporter);

      // And all of its children are also moveable.
      var childCount = this.model.transform.childCount;

      // NB: slots 1 to N because we have the model itself in slot 0.
      for (int i = 1; i <= childCount; i++)
      {
        // NB: child (i - 1) goes into slot i because the model occupies slot 0.
        var child = this.model.transform.GetChild(i - 1);
        this.MakeModelPartMoveableAndTrackable(child.gameObject, i, isExporter);
      }
      // Switch on the remote head management.
      this.modelParent.GetComponent<RemoteHeadManager>().enabled = true;

      // Switch on the keyword recognizer listening for 'join' and 'split'
      this.gameObject.GetComponent<KeywordManager>().StartKeywordRecognizer();
    }
  }
  SyncSpawnedObject MakeModelPartMoveableAndTrackable(
    GameObject objectInstance, int indexIntoRootSyncStore, bool isExporter)
  {
    SyncSpawnedObject dataModel = null;

    if (isExporter)
    {
      dataModel = new SyncSpawnedObject();
      dataModel.GameObject = objectInstance;
      dataModel.Initialize(objectInstance.name, objectInstance.transform.GetFullPath());
      dataModel.Transform.Position.Value = objectInstance.transform.localPosition;
      dataModel.Transform.Rotation.Value = objectInstance.transform.localRotation;
      dataModel.Transform.Scale.Value = objectInstance.transform.localScale;

      SharingStage.Instance.Root.ModelObjects.AddObject(dataModel);
    }
    else
    {
      dataModel = 
        SharingStage.Instance.Root.ModelObjects.GetDataArray()[indexIntoRootSyncStore];
    }
    objectInstance.EnsureComponent<UserMoveable>();
    var synchronizer = objectInstance.EnsureComponent<TransformSynchronizer>();
    synchronizer.TransformDataModel = dataModel.Transform;

    return (dataModel);
  }
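Given that ‘off by 1’ worry, the slot arrangement in the array above is worth stating precisely: slot 0 holds the model itself, and child n (0-based, as Transform.GetChild counts) lives in slot n + 1. As a hypothetical helper (not something that exists in the project), that mapping is just;

```csharp
static class SyncSlots
{
    // Slot 0 in the ModelObjects sync array holds the parent model itself.
    public const int ModelSlot = 0;

    // Child n of the model (0-based, matching Transform.GetChild)
    // is stored in slot n + 1.
    public static int SlotForChild(int childIndex)
    {
        return childIndex + 1;
    }

    // And the inverse, for going from a sync slot back to a child index.
    public static int ChildIndexForSlot(int slot)
    {
        return slot - 1;
    }
}
```

Writing it down like this makes the `i - 1` in the loop above a little less likely to trip up a future maintainer.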

and so now my code is tracking all of the N objects that live at level 0 (the model) and level 1 (its direct children) in the scene which is loaded from the web server.

With that in place, I just need to add a new KeywordManager component to listen for the word “split”

image

and then I can write the code on my Coordinator script to ensure this takes away any top collider on the model;

  public void OnSplit()
  {
    this.GetComponent<KeywordManager>().enabled = false;
    this.model.GetComponent<Collider>().enabled = false;
  }

and that’s pretty much it for this post.

Testing and Wrapping Up

To test this last piece of ‘splitting up’ code, I had to be a little crafty and make use of my HoloLens plus the emulator as two separate users as I only had the one HoloLens. I’ll try it with multiple devices at the first opportunity but, for now, here’s a capture of it running across my device and my emulator and it seems to work reasonably well;

What I’d like to do next is to capture how this now looks when I have multiple users in a single room and also multiple users split across rooms represented by the T-shirt.

I’ll come back to that in a follow on post…

The code for all of this is here on GitHub; keep in mind that you’ll need to set up IP addresses or host names to make it work.

Hitchhiking the HoloToolkit-Unity, Leg 12– More Experiments with Shared Holographic Experiences

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

In my previous post, I made some steps forward in terms of enabling a basic shared holographic experience using the Sharing Service from the HoloToolkit-Unity such that;

  1. Multiple devices could join a scene
  2. Those devices could create shared holograms that each could see in a consistent place within the real-world
  3. The head position of each device was visible to the other devices within the scene

I wanted to take this further and try to come up with a solution that enabled a little more functionality when devices are physically co-located in a space, but also to try and lay the groundwork for devices which are located elsewhere, accepting that this might introduce latency issues.

The Scenario

What I wanted to enable runs something like this;

  1. There is a holographic model that multiple users want to look at, talk about etc.
  2. Those users might be physically in the same space, meaning that the model should be consistently placed in their real world in a position that the first user into the scene chooses.
  3. Users should be able to see the position of the HoloLens devices of other users and be aware of what those users are looking at.
    1. A stretch goal would be to allow users to annotate the model, scale, rotate it etc.

As another stretch goal, I’d also like this to work for users who are in a remote location but I’m probably not going to get to that in this blog post.

I wanted to try and make use of the ‘rooms’ facility of the Sharing Service within the HoloToolkit-Unity to see if I can ultimately support the idea of multiple rooms which might be local/remote to implement something along the lines of;

  1. Join the sharing service.
  2. Identify the current WiFi network name and look for a room on the sharing service with that name.
    1. If that room exists, join it, download and import a spatial anchor and lock the model to that anchor.
    2. If that room doesn’t exist, allow the user to position the model and then create the room, add a spatial anchor to the model and export and upload it to that room on the sharing service.
  3. As users interact, broadcast their head positions (at some interval) over the network and remotely display those head positions and their gaze vector to the other users.
    1. For users in the same room, these head positions can be relative to some world-anchored object so as to make them translatable from one device to another.
    2. In a follow-on post, for remote users, these head positions need to be relative to the position of the models in the local/remote views.

That sounds achievable so I wanted to give it a try and I thought I’d write up the steps here…although I’m setting off on this journey without knowing whether the remote rooms part of it is achievable; I suspect it will come down to finding out whether messages can be broadcast across rooms by the sharing service.

Here are the steps that I ran through;

Step 0 – Make a Project

I made a new Unity project much like I do in this video and imported the HoloToolkit-Unity;

and I made sure that the project had the capabilities to use the microphone, internet client, private networks and spatial perception and I imported these pieces of the HoloToolkit;

image

Note that I don’t anticipate using the Spatial Mapping functionality here as I’m not intending to have my model collide with the real world but I do need the spatial perception capability in order for world-anchor functionality to work.

However, there is a dependency in the script WorldAnchorManager.cs on the SpatialMapping component although it doesn’t seem like a huge dependency that will impact me so I could either;

  1. Hack the script.
  2. Import the SpatialMapping section of the Toolkit knowing that I’m not really using it.

I went with (2) so had to also import;

image

and then my project will build.

Step 1 – Make a Model

I didn’t want to spend a long time on making a model and so I made a simple model which is just four cubes positioned at ‘cardinal’ points 1m away from each other and initially centred in front of the user at a distance of 3m;

image

I also added a box collider to that model such that I can treat it as a single object in terms of hit testing and so on and, as you can see, I parented this model off an empty object named ModelParent.

Step 2 – Make it Moveable

I want the user to be able to manipulate the model somewhat and so I’m adding a little bit of control. Note that I am not using the regular Tap To Place script because it has a dependency on Spatial Mapping so that it can collide models with the physical environment which isn’t a concern for me here.

I added a few extra pieces to what I already have, specifically this ‘SharedObjects’ holder which holds an Audio Source, Cursor, Input Manager and Text to Speech Manager;

image

and I reparented my Model into a new, empty parent named ModelParent and I added a Keyword Manager to handle a single “lock” keyword;

image

and then married this up with a script to handle the basic interactions that I want here;

using HoloToolkit.Unity;
using HoloToolkit.Unity.InputModule;
using System;
using UnityEngine;
using UnityEngine.Events;

public class UserMoveable : MonoBehaviour, IManipulationHandler, IInputClickHandler
{
  [SerializeField]
  TextToSpeechManager textToSpeechManager;

  public event EventHandler Locked;

  enum Rail
  {
    X,
    Y
  }
  private void OnEnable()
  {
    this.textToSpeechManager.SpeakText(
      "Tap to toggle the model following you and drag to raise or rotate. Say lock when done");
  }
  public void OnLock()
  {
    // We're done.
    this.gameObject.GetComponent<KeywordManager>().StopKeywordRecognizer();
    this.enabled = false;

    if (this.Locked != null)
    {
      this.Locked(this, EventArgs.Empty);
    }
  }
  public UserMoveable()
  {
    this.rail = Rail.X;
  }
  public void OnManipulationStarted(ManipulationEventData eventData)
  {
    this.lastDelta = eventData.CumulativeDelta;
  }
  public void OnManipulationUpdated(ManipulationEventData eventData)
  {
    if (this.rail == null)
    {
      if (Math.Abs(eventData.CumulativeDelta.x) > Math.Abs(eventData.CumulativeDelta.y))
      {
        this.rail = Rail.X;
      }
      else
      {
        this.rail = Rail.Y;
      }
    }
    // Strangely, lastDelta can sometimes be empty at this point, hence the check.
    if (this.lastDelta.HasValue)
    {
      var delta = eventData.CumulativeDelta - this.lastDelta.Value;

      var xDelta = (0 - delta.x) * HORIZONTAL_FACTOR;

      if (this.rail == Rail.X)
      {
        this.gameObject.transform.Rotate(0, xDelta, 0, Space.Self);
      }
      else
      {
        this.gameObject.transform.Translate(0, delta.y * VERTICAL_FACTOR, 0, Space.World);
      }
    }
    this.lastDelta = eventData.CumulativeDelta;
  }
  public void OnManipulationCompleted(ManipulationEventData eventData)
  {
    this.Done();
  }
  public void OnManipulationCanceled(ManipulationEventData eventData)
  {
    this.Done();
  }
  void Done()
  {
    this.lastDelta = null;
    this.rail = null;
  }
  public void OnInputClicked(InputClickedEventData eventData)
  {
    this.isLockedToGaze = !this.isLockedToGaze;

    if (this.isLockedToGaze)
    {
      this.gazeLockedDistance =
        Math.Max(
          Vector3.Distance(this.gameObject.transform.position, GazeManager.Instance.GazeOrigin),
          MIN_GAZE_LOCK_DISTANCE);

      this.centreOffset = GazeManager.Instance.GazeOrigin +
        (this.gazeLockedDistance * GazeManager.Instance.GazeNormal) - this.gameObject.transform.position;
    }
  }
  void Update()
  {
    if (this.isLockedToGaze)
    {
      var gazeOrigin = GazeManager.Instance.GazeOrigin;

      var gazePosition = gazeOrigin + (GazeManager.Instance.GazeNormal * this.gazeLockedDistance);

      this.gameObject.transform.Translate(
        gazePosition.x - this.gameObject.transform.position.x - this.centreOffset.x,
        0,
        gazePosition.z - this.gameObject.transform.position.z - this.centreOffset.z,
        Space.Self);
    }
  }
  bool isLockedToGaze;
  Rail? rail;
  Vector3? lastDelta;
  Vector3 centreOffset;
  float gazeLockedDistance;

  // These are all really just fudge factors based on a small set of observations.
  const float HORIZONTAL_FACTOR = 250.0f;
  const float VERTICAL_FACTOR = 2.5f;
  const float MIN_GAZE_LOCK_DISTANCE = 0.5f;
}

and this now gives me a model which I can move around via air-tap and drag and then, once done, I can say ‘lock’ and the script disables itself, fires an event and is done.
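The ‘rail’ idea in OnManipulationUpdated above, which is deciding once whether the gesture is mostly horizontal or vertical and then keeping it on that axis until the gesture ends, reduces to a one-line choice. Pulled out as a standalone sketch (my own restating, not toolkit code);

```csharp
using System;

enum Rail { X, Y }

static class RailPicker
{
    // Mirrors the choice in OnManipulationUpdated: whichever axis has
    // the larger cumulative movement so far wins, and the gesture then
    // stays 'on rails' along that axis until it completes or cancels.
    public static Rail Pick(float cumulativeX, float cumulativeY)
    {
        return Math.Abs(cumulativeX) > Math.Abs(cumulativeY) ? Rail.X : Rail.Y;
    }
}
```

Snapping to a rail like this makes the hand gesture feel much less jittery than letting a single drag rotate and translate at the same time.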

It’s worth noting that this User Moveable behaviour is disabled when the app starts up and is turned on by a later script if it is needed.

Step 3 – Add a Status Display

Even at the most basic level, it’s important to be able to see some kind of status display and so I added a Text Mesh to display some text;

image

and a script to allow the text to be set/cleared from anywhere in my code;

using HoloToolkit.Unity;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class StatusTextDisplay : Singleton<StatusTextDisplay>
{
  public void SetStatusText(string text)
  {
    this.gameObject.GetComponent<TextMesh>().text = text;
  }
  public void ClearStatusText()
  {
    this.SetStatusText(string.Empty);
  }
}

and I parented it from an object that would position it and give it a basic tag along and billboarding behaviour (taken straight from the toolkit) and positioned it such that it is 2m in front of the camera;

image

Step 4 – Connecting to the Sharing Service

I added the Sharing prefab straight from the toolkit and configured it such that it would attempt to connect to my sharing service when it first wakes up and such that it would automatically join a session named ‘Default’;

image

I added a script called Coordinator and had it check for when the sharing stage was connected via its Update routine;

image

  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
    }
  }

although, equally, I could have used the Connected event that this component also offers. Either way, I can now add logic for when the sharing service is first connected.

Note that the script has access to the parent of the model so it can enable/disable it at the appropriate point in the logic.

Step 5 – Creating/Joining a Room, Positioning the Model and Exporting/Importing World Anchors

I expanded out that Coordinator script such that it contains a state machine that runs something like this;

  1. Waits for the network connection to be made
  2. Waits for the room API to stabilise
  3. Attempts to find a room on the sharing service with the same name as the local WiFi network and creates it if necessary
  4. If the room is found, joins the room, downloads the world anchor found in that room and imports it as the parent of the model
  5. If the room is not found, creates a world anchor for the parent of the model, exports it and uploads it to the room

A note on (2) above. I found that if I simply get hold of the RoomManager and call GetRoomCount() then it can give me a 0 result even when I know that there is a room on the sharing service. However, I’ve found that on the next frame that result changes and so I’ve added a little bit of code here which can call GetRoomCount() for up to N seconds to see if the answer it is giving me back is consistent.
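Separated from Unity’s Update loop, that ‘keep asking for up to N seconds’ pattern looks something like the sketch below: my own restating of the WaitForRoomCountToStabilise logic, with the clock injected so it can be exercised off-device;

```csharp
using System;

static class RoomApiStabiliser
{
    // Returns true when we should stop waiting: either the API has
    // finally reported at least one room, or the timeout has elapsed
    // and we accept whatever answer it is currently giving us.
    public static bool IsDoneWaiting(
        Func<int> getRoomCount, DateTime startTime, DateTime now, TimeSpan timeout)
    {
        if (now - startTime > timeout)
        {
            return true; // give up waiting and take the current answer
        }
        return getRoomCount() > 0; // stop early once a room shows up
    }
}
```

The Update loop calls this every frame; the state machine only moves on when it returns true.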

That script ended up looking like this;

using HoloToolkit.Sharing;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using System;
using HoloToolkit.Sharing.Tests;

#if UNITY_UWP && !UNITY_EDITOR
using Windows.Networking.Connectivity;
#endif

public class Coordinator : MonoBehaviour
{
  public GameObject modelParent;

  enum CurrentStatus
  {
    WaitingToConnectToStage,
    WaitingForRoomApiToStabilise,
    WaitingForModelPositioning,
    WaitingForWorldAnchorExport,
    WaitingForWorldAnchorImport
  }
  void Start()
  {
    StatusTextDisplay.Instance.SetStatusText("connecting network");

    this.modelParent.SetActive(false);
  }
  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (this.currentStatus)
      {
        case CurrentStatus.WaitingToConnectToStage:

          Debug.Log("Coordinator: moving to connection stage");
          StatusTextDisplay.Instance.SetStatusText("network connected");

          this.GetWiFiNetworkName();
          this.roomApiStartTime = DateTime.Now;
          this.currentStatus = CurrentStatus.WaitingForRoomApiToStabilise;
          break;
        case CurrentStatus.WaitingForRoomApiToStabilise:

          // Note - with a room created, I find that the room API can return 0 rooms
          // and yet, calling it just one frame later, it changes its mind. Hence...
          // here we give it a little time.
          var doneWaitingForRoomApi = this.WaitForRoomCountToStabilise(ROOM_API_STABILISATION_TIME);

          if (doneWaitingForRoomApi)
          {
            this.CreateOrJoinRoomBasedonWifiNetworkName();
          }
          break;
        default:
          break;
      }
    }
  }
  bool WaitForRoomCountToStabilise(TimeSpan timeSpan)
  {
    bool expired = false;

    if (this.roomApiStartTime == null)
    {
      this.roomApiStartTime = DateTime.Now;
    }
    if (DateTime.Now - this.roomApiStartTime > timeSpan)
    {
      expired = true;
    }
    else
    {
      var roomManager = SharingStage.Instance.Manager.GetRoomManager();
      expired = (roomManager.GetRoomCount() > 0);
    }
    return (expired);
  }
  void CreateOrJoinRoomBasedonWifiNetworkName()
  {
    StatusTextDisplay.Instance.SetStatusText(
      string.Format("using wifi name of {0}", wifiName));

    Debug.Log(String.Format("Coordinator: identified WiFi as {0}", wifiName));

    var roomManager = SharingStage.Instance.Manager.GetRoomManager();
    this.currentRoom = null;
    var roomCount = roomManager.GetRoomCount();

    Debug.Log(String.Format("Coordinator: discovered {0} rooms", roomCount));

    for (int i = 0; i < roomCount; i++)
    {
      var room = roomManager.GetRoom(i);

      if (room.GetName() == wifiName)
      {
        this.currentRoom = room;
        break;
      }
    }

    if (this.currentRoom == null)
    {
      StatusTextDisplay.Instance.SetStatusText("setting up new room");

      this.currentRoom = roomManager.CreateRoom(new XString(wifiName), roomCount + 1, true);
      Debug.Log("Coordinator: created a new room for this WiFi network");

      this.modelParent.GetComponent<UserMoveable>().enabled = true;

      this.MoveToStatus(CurrentStatus.WaitingForModelPositioning);
      StatusTextDisplay.Instance.SetStatusText("waiting for user to position model");

      Debug.Log("Coordinator: waiting for user to position model");
      this.modelParent.GetComponent<UserMoveable>().Locked += OnPositionLocked;    
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("joining existing room");

      roomManager.JoinRoom(this.currentRoom);

      var manager = this.modelParent.AddComponent<ImportAnchorManager>() as ICompleted;
      manager.Completed += this.OnImportOrExportCompleted;

      this.MoveToStatus(CurrentStatus.WaitingForWorldAnchorImport);

      StatusTextDisplay.Instance.SetStatusText("waiting for room sync");

      Debug.Log("Coordinator: joined an existing room for this WiFi network");
    }
    this.modelParent.SetActive(true);
  }
  void OnPositionLocked(object sender, EventArgs e)
  {
    Debug.Log("Coordinator: position has been locked by user");
    this.modelParent.GetComponent<UserMoveable>().Locked -= OnPositionLocked;

    StatusTextDisplay.Instance.SetStatusText("creating room sync");

    var manager = this.modelParent.AddComponent<ExportAnchorManager>() as ICompleted;
    manager.Completed += OnImportOrExportCompleted;

    this.MoveToStatus(CurrentStatus.WaitingForWorldAnchorExport);
  }
  void OnImportOrExportCompleted(bool succeeded)
  {
    StatusTextDisplay.Instance.SetStatusText("room in sync");

    if (this.currentStatus == CurrentStatus.WaitingForWorldAnchorImport)
    {
      // TBD: we're done importing the world anchor.
    }
    else
    {
      // TBD: we're done exporting the world anchor.
    }
    // Switch on the remote head management.
    this.modelParent.GetComponent<RemoteHeadManager>().enabled = true;
  }
  void GetWiFiNetworkName()
  {
    if (this.wifiName == null)
    {
#if UNITY_UWP && !UNITY_EDITOR
      var interfaces = NetworkInformation.GetConnectionProfiles();

      var wifi = interfaces.Where(
        i => (i.GetNetworkConnectivityLevel() != NetworkConnectivityLevel.None) &&
             (i.IsWlanConnectionProfile)).FirstOrDefault();

      // Guard against not being on a WiFi connection at all.
      if (wifi != null)
      {
        this.wifiName = wifi.WlanConnectionProfileDetails.GetConnectedSsid();
      }
#endif
    }
  }
  void MoveToStatus(CurrentStatus newStatus)
  {
    // This is here to add logging etc. at a later point...
    this.currentStatus = newStatus;
  }
  string wifiName;
  Room currentRoom;
  CurrentStatus currentStatus;
  DateTime roomApiStartTime;
  static readonly TimeSpan ROOM_API_STABILISATION_TIME = TimeSpan.FromSeconds(3);
}

This script ends up adding either an ExportAnchorManager or an ImportAnchorManager component to the parent of the model, depending on whether the device has created the room or joined an existing one.
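Both components look the same from the Coordinator's point of view; they get added dynamically only when the room decision has been made, cast to a common ICompleted interface and subscribed to for a single callback. As a minimal sketch (condensed from the Coordinator code above, with `isCreatingRoom` being a hypothetical flag standing in for the room-lookup logic);

```csharp
// Sketch only: how the Coordinator attaches the right anchor manager at
// runtime. 'isCreatingRoom' is a hypothetical flag standing in for the
// room lookup in CreateOrJoinRoomBasedonWifiNetworkName above.
ICompleted manager = isCreatingRoom
    ? (ICompleted)this.modelParent.AddComponent<ExportAnchorManager>()
    : (ICompleted)this.modelParent.AddComponent<ImportAnchorManager>();

manager.Completed += this.OnImportOrExportCompleted;
```

Adding the component dynamically also means that its Update loop only starts running once it is actually needed.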

Step 6 – Importing/Exporting World Anchors

I took the Import Export Anchor Manager script from the HoloToolkit and split it out into two scripts with one for export and one for import.

I mostly rewrote the code that I found in that script along the way. I certainly lost some of the robustness of the original in doing so (in favour of simplicity), and I also took out some of the functionality (e.g. saving/loading anchors from the local anchor store rather than always hitting the network). Additionally, the scripts I ended up with assume that the connection to the Sharing Service has already been made and that the Room Manager has been used to set up a current room.
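In other words, both scripts rely on preconditions along these lines having already been established by the Coordinator before they are added (a sketch only, using the same SharingStage/RoomManager calls as the code in this post);

```csharp
// Sketch of the preconditions the split scripts rely on: the sharing
// connection is already up and a current room has been created/joined.
Debug.Assert(SharingStage.Instance.IsConnected);

var roomManager = SharingStage.Instance.Manager.GetRoomManager();
Debug.Assert(roomManager.GetCurrentRoom() != null);
```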

As part of this, I made this little base class and interface;

using System;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

public interface ICompleted
{
  event Action<bool> Completed;
}
public class AnchorManager<T> : Singleton<T>, ICompleted where T : Singleton<T>
{
  public event Action<bool> Completed;

  /// <summary>
  /// The room manager API for the sharing service.
  /// </summary>
  protected RoomManager roomManager;

  /// <summary>
  /// Keeps track of the current room we are connected to.  Anchors
  /// are kept in rooms.
  /// </summary>
  protected Room currentRoom;

  /// <summary>
  /// Provides updates when anchor data is uploaded/downloaded.
  /// </summary>
  protected RoomManagerAdapter roomManagerListener;

  protected void FireCompleted(bool succeeded)
  {
    if (this.Completed != null)
    {
      this.Completed(succeeded);
    }
  }
  protected virtual void AddRoomManagerHandlers()
  {

  }
  protected void ConnectToRoom()
  {
    if (SharingStage.Instance.ShowDetailedLogs)
    {
      Debug.Log("Anchor Manager: Starting...");
    }

    // Setup the room manager callbacks.
    roomManager = SharingStage.Instance.Manager.GetRoomManager();
    roomManagerListener = new RoomManagerAdapter();
    this.AddRoomManagerHandlers();

    roomManager.AddListener(roomManagerListener);

    currentRoom = roomManager.GetCurrentRoom();
  }
  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      if (roomManager != null)
      {
        roomManager.RemoveListener(roomManagerListener);
      }
      roomManagerListener.Dispose();
      roomManagerListener = null;
    }

    if (roomManager != null)
    {
      roomManager.Dispose();
      roomManager = null;
    }
    base.OnDestroy();
  }
}

My ExportAnchorManager looks like;

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

#if UNITY_WSA && !UNITY_EDITOR
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;
#endif

public class ExportAnchorManager : AnchorManager<ExportAnchorManager>
{
  /// <summary>
  /// The anchor this object is attached to.
  /// </summary>
#if UNITY_WSA && !UNITY_EDITOR
  WorldAnchor worldAnchor;
#endif // UNITY_WSA

  ExportState currentState = ExportState.Start;

  /// <summary>
  /// Keeps track of the name of the anchor we are exporting.
  /// </summary>
  string exportingAnchorName;

  /// <summary>
  /// The datablob of the anchor.
  /// </summary>
  List<byte> exportingAnchorBytes = new List<byte>();

  /// <summary>
  /// WorldAnchorTransferBatch is the primary object in serializing/deserializing anchors.
  /// </summary>
  /// <remarks>Only available on device.</remarks>

#if UNITY_WSA && !UNITY_EDITOR
  WorldAnchorTransferBatch worldAnchorTransferBatch;
#endif // UNITY_WSA

  /// <summary>
  /// Sometimes we'll see a really small anchor blob get generated.
  /// These tend to not work, so we have a minimum trustable size.
  /// </summary>
  const uint MinTrustworthySerializedAnchorDataSize = 100 * 1024;

  enum ExportState
  {
    // Overall states
    Start,
    WaitingForAnchorLocation,
    ExportingAnchor,
    Failed,
    AnchorUploaded
  }

  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (this.currentState)
      {
        case ExportState.Start:
          this.currentState = ExportState.WaitingForAnchorLocation;
          this.ConnectToRoom();

#if UNITY_WSA && !UNITY_EDITOR
          this.worldAnchor = GetComponent<WorldAnchor>() ?? gameObject.AddComponent<WorldAnchor>();
          StatusTextDisplay.Instance.SetStatusText("waiting for model lock");
#endif
          break;
        case ExportState.WaitingForAnchorLocation:
#if UNITY_WSA && !UNITY_EDITOR
          if (this.worldAnchor.isLocated)
          {
            this.currentState = ExportState.ExportingAnchor;
            StatusTextDisplay.Instance.SetStatusText("exporting lock from device");
            this.ExportWorldAnchor();
          }
#endif // UNITY_WSA
          break;
        default:
          break;
      }
    }
  }
  protected override void AddRoomManagerHandlers()
  {
    base.AddRoomManagerHandlers();
    roomManagerListener.AnchorUploadedEvent += this.OnAnchorUploadCompleted;
  }
  void ExportWorldAnchor()
  {
    string guidString = Guid.NewGuid().ToString();

    exportingAnchorName = guidString;

    // Save the anchor to our local anchor store.
    if (SharingStage.Instance.ShowDetailedLogs)
    {
      Debug.Log("Anchor Manager: Exporting anchor " + exportingAnchorName);
    }

#if UNITY_WSA && !UNITY_EDITOR

    worldAnchorTransferBatch = new WorldAnchorTransferBatch();
    worldAnchorTransferBatch.AddWorldAnchor(guidString, worldAnchor);
    WorldAnchorTransferBatch.ExportAsync(worldAnchorTransferBatch, WriteBuffer, ExportComplete);

#endif // UNITY_WSA
  }
  void WriteBuffer(byte[] data)
  {
    exportingAnchorBytes.AddRange(data);
  }
  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      roomManagerListener.AnchorUploadedEvent -= this.OnAnchorUploadCompleted;
    }
    base.OnDestroy();
  }
  void OnAnchorUploadCompleted(bool successful, XString failureReason)
  {
    if (successful)
    {
      StatusTextDisplay.Instance.SetStatusText("lock copied");

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.Log("Anchor Manager: Successfully uploaded anchor");
      }
      currentState = ExportState.AnchorUploaded;
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("lock copy failed");

      Debug.LogError("Anchor Manager: Upload failed " + failureReason);
      currentState = ExportState.Failed;
    }
    base.FireCompleted(currentState == ExportState.AnchorUploaded);
  }
#if UNITY_WSA && !UNITY_EDITOR

  void ExportComplete(SerializationCompletionReason status)
  {
    if ((status == SerializationCompletionReason.Succeeded)
      && (exportingAnchorBytes.Count > MinTrustworthySerializedAnchorDataSize))
    {
      StatusTextDisplay.Instance.SetStatusText(
        string.Format(
          "copying {0:N2}MB lock to server", (exportingAnchorBytes.Count / (1024.0 * 1024.0))));

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.Log("Anchor Manager: Uploading anchor: " + exportingAnchorName);
      }

      roomManager.UploadAnchor(
          currentRoom,
          new XString(exportingAnchorName),
          exportingAnchorBytes.ToArray(),
          exportingAnchorBytes.Count);
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("retrying export");

      Debug.LogWarning("Anchor Manager: Failed to upload anchor, trying again...");

      // Throw away any partially exported data before going around again.
      this.exportingAnchorBytes.Clear();
      currentState = ExportState.WaitingForAnchorLocation;
    }
  }
#endif // UNITY_WSA
}

and the ImportAnchorManager is as below;

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

#if UNITY_WSA && !UNITY_EDITOR
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;
#endif

public class ImportAnchorManager : AnchorManager<ImportAnchorManager>
{
  enum ImportState
  {
    Start,
    Failed,
    ReadyToImport,
    DataRequested,
    DataDownloadedReadyForImport,
    Importing,
    AnchorImportedAndLocked
  }

  ImportState currentState = ImportState.Start;

  byte[] rawAnchorData;

#if UNITY_WSA && !UNITY_EDITOR

  WorldAnchor worldAnchor;

#endif

  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (currentState)
      {
#if UNITY_WSA && !UNITY_EDITOR
        case ImportState.Start:
          ConnectToRoom();
          this.currentState = ImportState.ReadyToImport;
          break;
        case ImportState.ReadyToImport:
          MakeAnchorDataRequest();
          break;
        case ImportState.DataDownloadedReadyForImport:
          // DataReady is set when the anchor download completes.
          currentState = ImportState.Importing;
          StatusTextDisplay.Instance.SetStatusText("importing room lock data");

          WorldAnchorTransferBatch.ImportAsync(rawAnchorData, ImportComplete);
          break;
#endif
      }
    }
  }
  protected override void AddRoomManagerHandlers()
  {
    base.AddRoomManagerHandlers();
    this.roomManagerListener.AnchorsDownloadedEvent += this.OnAnchorDownloadCompleted;
  }

  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      roomManagerListener.AnchorsDownloadedEvent -= OnAnchorDownloadCompleted;
    }
    base.OnDestroy();
  }
  void OnAnchorDownloadCompleted(
    bool successful,
    AnchorDownloadRequest request,
    XString failureReason)
  {
    // If we downloaded anchor data successfully we should import the data.
    if (successful)
    {
      StatusTextDisplay.Instance.SetStatusText(
        "room sync data downloaded");

      int datasize = request.GetDataSize();

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.LogFormat("Anchor Manager: Anchor size: {0} bytes.", datasize.ToString());
      }

      rawAnchorData = new byte[datasize];

      request.GetData(rawAnchorData, datasize);

      currentState = ImportState.DataDownloadedReadyForImport;
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText(
        "retrying room lock request");

      // If we failed, we can ask for the data again.
      Debug.LogWarning("Anchor Manager: Anchor DL failed " + failureReason);

#if UNITY_WSA && !UNITY_EDITOR
      MakeAnchorDataRequest();
#endif
    }
  }

#if UNITY_WSA && !UNITY_EDITOR

  void MakeAnchorDataRequest()
  {
    StatusTextDisplay.Instance.SetStatusText("requesting sync data");

    if (roomManager.DownloadAnchor(currentRoom, currentRoom.GetAnchorName(0)))
    {
      currentState = ImportState.DataRequested;
    }
    else
    {
      Debug.LogError("Anchor Manager: Couldn't make the download request.");

      currentState = ImportState.Failed;
    }
  }
  void ImportComplete(SerializationCompletionReason status, WorldAnchorTransferBatch anchorBatch)
  {
    if (status == SerializationCompletionReason.Succeeded)
    {
      if (anchorBatch.GetAllIds().Length > 0)
      {
        string first = anchorBatch.GetAllIds()[0];

        if (SharingStage.Instance.ShowDetailedLogs)
        {
          Debug.Log("Anchor Manager: Successfully imported anchor " + first);
        }
        this.worldAnchor = anchorBatch.LockObject(first, gameObject);

        StatusTextDisplay.Instance.SetStatusText("room lock imported");
      }

      base.FireCompleted(true);
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("retrying room lock import");

      Debug.LogError("Anchor Manager: Import failed");

      currentState = ImportState.DataDownloadedReadyForImport;
    }
  }
#endif // UNITY_WSA
}

and so at this point, I should have an app that can run on multiple devices and work (in a basic way) as a shared experience.

The app will connect to the sharing service, create or join a room based on the WiFi name and will then allow a user creating a room to position the model where they want it and it will then progress through importing/exporting world anchors such that each user will see the model positioned in the same place in the real world.
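Pulling that flow together, the Coordinator's CurrentStatus values run roughly in this order. This is a reconstruction from the states used in the code above (the real enum definition isn't shown in this excerpt);

```csharp
// Reconstructed from the states referenced above - the creator of a room
// walks the export path, while a joiner walks the import path instead.
enum CurrentStatus
{
  WaitingToConnectToStage,       // connect to the sharing service
  WaitingForRoomApiToStabilise,  // give the room API time to settle
  WaitingForModelPositioning,    // (creator) user places the model
  WaitingForWorldAnchorExport,   // (creator) upload the world anchor
  WaitingForWorldAnchorImport    // (joiner) download the world anchor
}
```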

I made a video of this operating in a “single user” mode below. It doesn’t quite convey what the app is doing, but it’s the best I could manage with a single HoloLens device;

Step 7 – Head and Gaze Tracking

There’s already a script in the HoloToolkit-Unity which sends custom messages through the sharing service so that each device in a shared session can visualise the positions of the other devices.

I used this script in my previous post. It’s called Remote Head Manager and it relies on another Custom Messages script, so I added both of these to my modelParent object;

image

That Remote Head Manager test script from the toolkit draws a cube at each remote user’s head position but doesn’t draw their gaze, so I modified the script to make the prefab that it draws configurable and to draw each remote user’s gaze as a line. My modified version of the script is below, with MIKET_CHANGE wrapped around the places where I have altered the original;

#define MIKET_CHANGE
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Unity.InputModule;

namespace HoloToolkit.Sharing.Tests
{
  /// <summary>
  /// Broadcasts the head transform of the local user to other users in the session,
  /// and adds and updates the head transforms of remote users.
  /// Head transforms are sent and received in the local coordinate space of the GameObject this component is on.
  /// </summary>
  public class RemoteHeadManager : Singleton<RemoteHeadManager>
  {
    public class RemoteHeadInfo
    {
      public long UserID;
      public GameObject HeadObject;
    }

#if MIKET_CHANGE
    public GameObject remoteHeadPrefab;
#endif

    /// <summary>
    /// Keep a list of the remote heads, indexed by XTools userID
    /// </summary>
    private Dictionary<long, RemoteHeadInfo> remoteHeads = new Dictionary<long, RemoteHeadInfo>();

#if MIKET_CHANGE
    private void OnEnable()
    {
      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] = UpdateHeadTransform;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#else
    private void Start()
    {
      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] = UpdateHeadTransform;

      // SharingStage should be valid at this point, but we may not be connected.
      if (SharingStage.Instance.IsConnected)
      {
        Connected();
      }
      else
      {
        SharingStage.Instance.SharingManagerConnected += Connected;
      }
    }
    private void Connected(object sender = null, EventArgs e = null)
    {
      SharingStage.Instance.SharingManagerConnected -= Connected;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#endif

    private void Update()
    {
      // Grab the current head transform and broadcast it to all the other users in the session
      Transform headTransform = Camera.main.transform;

      // Transform the head position and rotation from world space into local space
      Vector3 headPosition = transform.InverseTransformPoint(headTransform.position);

      Quaternion headRotation = Quaternion.Inverse(transform.rotation) * headTransform.rotation;

      CustomMessages.Instance.SendHeadTransform(headPosition, headRotation);
    }

    protected override void OnDestroy()
    {
      if (SharingStage.Instance != null)
      {
        if (SharingStage.Instance.SessionUsersTracker != null)
        {
          SharingStage.Instance.SessionUsersTracker.UserJoined -= UserJoinedSession;
          SharingStage.Instance.SessionUsersTracker.UserLeft -= UserLeftSession;
        }
      }

      base.OnDestroy();
    }

    /// <summary>
    /// Called when a new user is leaving the current session.
    /// </summary>
    /// <param name="user">User that left the current session.</param>
    private void UserLeftSession(User user)
    {
      int userId = user.GetID();
      if (userId != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        RemoveRemoteHead(remoteHeads[userId].HeadObject);
        remoteHeads.Remove(userId);
      }
    }

    /// <summary>
    /// Called when a user is joining the current session.
    /// </summary>
    /// <param name="user">User that joined the current session.</param>
    private void UserJoinedSession(User user)
    {
      if (user.GetID() != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        GetRemoteHeadInfo(user.GetID());
      }
    }

    /// <summary>
    /// Gets the data structure for the remote users' head position.
    /// </summary>
    /// <param name="userId">User ID for which the remote head info should be obtained.</param>
    /// <returns>RemoteHeadInfo for the specified user.</returns>
    public RemoteHeadInfo GetRemoteHeadInfo(long userId)
    {
      RemoteHeadInfo headInfo;

      // Get the head info if its already in the list, otherwise add it
      if (!remoteHeads.TryGetValue(userId, out headInfo))
      {
        headInfo = new RemoteHeadInfo();
        headInfo.UserID = userId;
        headInfo.HeadObject = CreateRemoteHead();

        remoteHeads.Add(userId, headInfo);
      }

      return headInfo;
    }

    /// <summary>
    /// Called when a remote user sends a head transform.
    /// </summary>
    /// <param name="msg"></param>
    private void UpdateHeadTransform(NetworkInMessage msg)
    {
      // Parse the message
      long userID = msg.ReadInt64();

      Vector3 headPos = CustomMessages.Instance.ReadVector3(msg);

      Quaternion headRot = CustomMessages.Instance.ReadQuaternion(msg);

      RemoteHeadInfo headInfo = GetRemoteHeadInfo(userID);
      headInfo.HeadObject.transform.localPosition = headPos;
      headInfo.HeadObject.transform.localRotation = headRot;

#if MIKET_CHANGE
      var rayLength = maxRayDistance;

      RaycastHit hitInfo;

      if (Physics.Raycast(
        headInfo.HeadObject.transform.position,
        headInfo.HeadObject.transform.forward,
        out hitInfo))
      {
        rayLength = hitInfo.distance;
      }
      var lineRenderer = headInfo.HeadObject.GetComponent<LineRenderer>();
      lineRenderer.SetPosition(1, Vector3.forward * rayLength);
#endif
    }

    /// <summary>
    /// Creates a new game object to represent the user's head.
    /// </summary>
    /// <returns></returns>
    private GameObject CreateRemoteHead()
    {
      GameObject newHeadObj = Instantiate(this.remoteHeadPrefab);
      newHeadObj.transform.parent = gameObject.transform;

#if MIKET_CHANGE
      this.AddLineRenderer(newHeadObj);
#endif
      return newHeadObj;
    }
#if MIKET_CHANGE
    void AddLineRenderer(GameObject headObject)
    {
      var lineRenderer = headObject.AddComponent<LineRenderer>();
      lineRenderer.useWorldSpace = false;
      lineRenderer.startWidth = 0.01f;
      lineRenderer.endWidth = 0.05f;
      lineRenderer.numPositions = 2;
      lineRenderer.SetPosition(0, Vector3.forward * 0.1f);
      var material = new Material(Shader.Find("Diffuse"));
      material.color = colors[this.colorIndex++ % colors.Length];

      lineRenderer.material = material;
    }
#endif

    /// <summary>
    /// When a user has left the session this will cleanup their
    /// head data.
    /// </summary>
    /// <param name="remoteHeadObject"></param>
    private void RemoveRemoteHead(GameObject remoteHeadObject)
    {
      DestroyImmediate(remoteHeadObject);
    }
#if MIKET_CHANGE
    const float maxRayDistance = 5.0f;
    int colorIndex;
    static Color[] colors =
    {
      Color.red,
      Color.green,
      Color.blue,
      Color.cyan,
      Color.magenta,
      Color.yellow
    };
#endif
  }
}

and the prefab in question here is a small model of a HoloLens;

image

which will show up with a ‘ray’ emanating from it at the position of each of the other HoloLens devices present in the scene.

Wrapping Up & Next Steps

This post has got long, so I’ll break off here and follow up in another post if I can get an extension of this code working with users who are not in the same physical room and so need a remote experience.

As a teaser, here’s an early experiment of trying that with my colleague Pete connecting to a cloud-based server where Pete is remotely viewing this model located in my home office and I can remotely see what Pete is viewing;

file

and so the initial experiment is looking pretty good 🙂

If you’re interested in following along, the code for the post is here on Github – enjoy!