Using OCR to Read an IP Address in a Holographic UWP App

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Just a short post. I’ve been playing with a situation recently where I needed to get an IP address into a holographic app. There are lots of different ways that you might do this, depending on how frequently the IP address might change;

  • Hard-code it into the app.
  • Put it into a config file (perhaps loaded from some other server).
  • Type it into the app’s UI, although I think it’s fair to say that typing IP addresses on HoloLens isn’t so much fun.
  • Magically download it via some sort of Bluetooth service.
  • Speak it to the application through the microphone and UWP speech recognition.
  • Put the IP address into a QR code and have the app scan it through the camera.

and I’m sure there are many more, but today I thought I’d experiment with what seemed like a fairly natural idea – when I want to give another person an IP address, I usually write it onto a piece of paper and hand it to them or pin it up on the wall.

So, why not with the device?

Because I already have some code which runs on HoloLens and scans for QR codes (see this blog post and the accompanying GitHub repo), it felt like it would be very little effort to change the QR code scanning that happens in that blog post into some “IP address” OCR recognition and see how well or badly that works out.

Here’s an example of how well/badly it worked out – I think it works quite well, although it’s fair to say that I haven’t managed to get it to work with handwriting. That doesn’t surprise me, as I’ve never known the OCR engine in UWP to process handwriting and maybe it’s not meant to, so I’m not being critical of it when I say that.

Note – it’s “a bit tricky” to record an app on the HoloLens that is actively taking over the webcam, so I’m having to show this with static images;

20170328_144256_HoloLens

20170328_144320_HoloLens

but it feels like a scan of an IP address like that can be done in about 1-2s once the camera has a view of it. Naturally, it’s possible for the camera to get a partial view – such as 2.168.0.1 for the above address – so a re-scan might be necessary in some circumstances.

In terms of the code here, I made some minimal changes to what I already had around QR code processing. I removed the ZXing library from my project and then changed the public interface so that it looks as it does in the code below (clearly, I should do something about renaming the namespace here). This call says “Hey, find me an IP address from the first camera on the system or time out after 30s”;

MediaFrameQrProcessing.Wrappers.IPAddressScanner.ScanFirstCameraForIPAddress(
        result =>
        {
          UnityEngine.WSA.Application.InvokeOnAppThread(() =>
          {
            // result here is a System.Net.IPAddress...
            this.textMesh.text = result?.ToString() ?? "not found";
          }, 
          false);
        },
        TimeSpan.FromSeconds(30));

and it gives me back an IPAddress instance. In my underlying library, I replaced the QR code frame scanner with a (basic!) IP address frame scanner;

namespace MediaFrameQrProcessing.Processors
{
  using MediaFrameQrProcessing.VideoDeviceFinders;
  using System.Runtime.InteropServices.WindowsRuntime;
  using System.Threading.Tasks;
  using Windows.Devices.Enumeration;
  using Windows.Media.Capture;
  using Windows.Media.Capture.Frames;
  using Windows.Media.Ocr;
  using System;
  using System.Text.RegularExpressions;
  using System.Net;

  public class IPAddressFrameProcessor : MediaCaptureFrameProcessor
  {
    public IPAddress Result { get; private set; }

    public IPAddressFrameProcessor(
      MediaFrameSourceFinder mediaFrameSourceFinder, 
      DeviceInformation videoDeviceInformation, 
      string mediaEncodingSubtype, 
      MediaCaptureMemoryPreference memoryPreference = MediaCaptureMemoryPreference.Cpu) 

      : base(
          mediaFrameSourceFinder, 
          videoDeviceInformation, 
          mediaEncodingSubtype, 
          memoryPreference)
    {
    }
    protected override async Task<bool> ProcessFrameAsync(MediaFrameReference frameReference)
    {
      bool done = false;

      // doc here https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.media.capture.frames.videomediaframe.aspx
      // says to dispose this softwarebitmap if you access it.
      using (var bitmap = frameReference.VideoMediaFrame.SoftwareBitmap)
      {
        try
        {
          if (this.ocrEngine == null)
          {
            this.ocrEngine = OcrEngine.TryCreateFromUserProfileLanguages();
            this.regex = new Regex(IP_ADDRESS_PATTERN);
          }
          var results = await this.ocrEngine.RecognizeAsync(bitmap);

          if (results != null)
          {
            var matchingResults = this.regex.Matches(results.Text);

            for (int i = 0; !done && (i < matchingResults.Count); i++)
            {
              IPAddress parsedAddress;

              done = IPAddress.TryParse(matchingResults[i].Value, out parsedAddress);

              if (done)
              {
                this.Result = parsedAddress;
              }
            }
          }
        }
        catch
        {
          // Deliberately swallow any failure here and just move on to the next frame.
        }
      }
      return (done);
    }
    Regex regex;
    OcrEngine ocrEngine;

    // Taken from S.O. http://stackoverflow.com/questions/106179/regular-expression-to-match-dns-hostname-or-ip-address
    const string IP_ADDRESS_PATTERN =
      @"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}";
  }
}

and that all seems to work reasonably well.
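As an aside, it’s easy to see outside of the app why the partial-view case I mentioned above needs a re-scan – the octet pattern happily matches a truncated address and IP address parsing then accepts it. Here’s a quick sanity check of that behaviour (sketched in Python purely for illustration, using the same regex pattern as the frame processor):

```python
import ipaddress
import re

# The same octet pattern the frame processor uses (via Stack Overflow).
IP_ADDRESS_PATTERN = (
    r"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
    r"(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}"
)

def find_ip_addresses(text):
    """Return every dotted-quad substring that the pattern matches."""
    return [m.group(0) for m in re.finditer(IP_ADDRESS_PATTERN, text)]

print(find_ip_addresses("the server lives at 192.168.0.1 today"))  # ['192.168.0.1']

# A partial camera view of the same label still looks like a valid address,
# so a TryParse-style check can't reject it - hence the possible re-scan.
print(find_ip_addresses("2.168.0.1"))  # ['2.168.0.1']
print(ipaddress.ip_address("2.168.0.1"))  # parses without complaint
```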

The code for this (the library and the 2D XAML test app) is here if anyone wants to play with it.

Experimenting with Unity’s Asset Bundle Loading in Holographic UWP Projects


I wanted to experiment with Asset Bundles from Unity running on HoloLens, in the sense that I wanted to make sure that I had the right pieces to dynamically load something like some 3D models bundled on a web server and display them in a scene at runtime.

This is, no doubt, a very basic scenario for experienced Unity folks but I hadn’t tried it out on HoloLens so I thought I’d do that and share the write-up here.

Step 1 – Reading the Unity Asset Bundle Docs

I had a decent read of Unity’s docs around asset bundles, serialization graphs and that type of thing all from this set of resources;

A guide to AssetBundles and Resources

and that was pretty useful and led me through to the demo and companion scripts that Unity publishes on this repo;

Asset Bundle Demo Repo

and so I cloned that.

Step 2 – Building Asset Bundles

I then opened up that demo project in Unity and used the Assets menu to make the asset bundles;

image

which dropped the assets out into a folder called Windows;

image

and so I just need to drop these onto a web server somewhere and serve them up. I’ve heard that Azure thing is pretty good at this 😉

Step 3 – A Web Server for the Asset Bundles

I popped over to Azure and created a basic web server using the regular ‘Add App Service’ button;

image

and then it turned out that I didn’t actually have the Visual Studio web tools installed (whoops, I should fix that), so I just went with the “edit this in the browser” option;

image

and hacked the web.config so as to try and make sure that it would serve up the files in question;

image
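For reference, the kind of change I mean is along these lines (a sketch rather than my exact file, since the real one is in the screenshot) – asset bundle files have no file extension, so IIS needs telling to serve extensionless and unknown content rather than returning 404s:

```xml
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Asset bundle files have no extension; map them (and the
           companion .manifest files) to a binary MIME type so IIS
           will serve them rather than refusing the request. -->
      <remove fileExtension=".manifest" />
      <mimeMap fileExtension=".manifest" mimeType="application/octet-stream" />
      <mimeMap fileExtension="." mimeType="application/octet-stream" />
    </staticContent>
  </system.webServer>
</configuration>
```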

and then used the “Upload files” option to upload the files – note that this means that these files live at /AssetBundles/Windows and there’s another file in that folder also called Windows;

image

Step 4 – Trying the Desktop Demo

To see if this was “working” ok, I opened up the AssetLoader scene provided in the demo project, opened up the LoadAssets script associated with the Loader GameObject as below;

image

noting that it is loading the bundle named cube-bundle and the asset named MyCube;

image

and I hacked the accompanying script such that it loaded the asset from my new web site;

image

and, sure enough, I built and ran the desktop app and it displayed the cube as expected;

image

and Fiddler shows me where it’s coming from;

image

and so that’s all good, what about doing this on HoloLens?

Step 5 – Starting with a Blank HoloLens Project

I made a new Unity project much like I do in this video and imported the HoloToolkit-Unity.

Step 6 – Adding in the Asset Bundle Manager

I took some of the scripts from the demo asset bundle project and added them to my project, trying to be as ‘minimal’ as I could be so I brought in;

image

Bringing in the Asset Bundle Manager caused me a slight problem in that there are a couple of functions that don’t build in there on the HoloLens/UWP platform;

    //private static string GetStreamingAssetsPath()
    //{
    //  if (Application.isEditor)
    //    return "file://" + System.Environment.CurrentDirectory.Replace("\\", "/"); // Use the build output folder directly.
    //  else if (Application.isWebPlayer)
    //    return System.IO.Path.GetDirectoryName(Application.absoluteURL).Replace("\\", "/") + "/StreamingAssets";
    //  else if (Application.isMobilePlatform || Application.isConsolePlatform)
    //    return Application.streamingAssetsPath;
    //  else // For standalone player.
    //    return "file://" + Application.streamingAssetsPath;
    //}


    /// <summary>
    /// Sets base downloading URL to a directory relative to the streaming assets directory.
    /// Asset bundles are loaded from a local directory.
    /// </summary>
    public static void SetSourceAssetBundleDirectory(string relativePath)
    {
      throw new NotImplementedException();
      // BaseDownloadingURL = GetStreamingAssetsPath() + relativePath;
    }

One of these functions is public and calls the other, and since nothing seemed to call the public function, I figured that I was safe to comment them both out for the moment (as above).

I continued by making a small “UI” with a button on it which would respond to a click;

image

and added some code to respond to the click;

using AssetBundles;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Placeholder : MonoBehaviour
{
  public void OnClick()
  {
    if (!this.loaded)
    {
      StartCoroutine(this.LoadModelAsync());
      this.loaded = true;
    }
  }
  IEnumerator LoadModelAsync()
  {
    AssetBundleManager.SetSourceAssetBundleURL("http://mttestbundles.azurewebsites.net/AssetBundles/");

    var initializeOperation = AssetBundleManager.Initialize();

    if (initializeOperation == null)
    {
      yield break;
    }
    yield return StartCoroutine(initializeOperation);

    var loadOperation = AssetBundleManager.LoadAssetAsync(
      "cube-bundle", "MyCube", typeof(GameObject));

    if (loadOperation == null)
    {
      yield break;
    }
    yield return StartCoroutine(loadOperation);

    var prefab = loadOperation.GetAsset<GameObject>();

    if (prefab != null)
    {
      var cube = GameObject.Instantiate(prefab);

      // Put the cube somewhere obvious.
      cube.transform.position = new Vector3(0, 1.0f, 2.0f);

      // The cube in the asset bundle is quite big.
      cube.transform.localScale = new Vector3(0.2f, 0.2f, 0.2f);
    }
  }
  bool loaded;
}

This code is really just me taking what seemed like the “essential” code out of the Unity sample and seeing if I could get it running on its own here inside of my HoloLens project.

Not surprisingly, I couldn’t 🙂 – the code crashed.

Step 7 – Adding the Platform Name

Fixing the crash wasn’t too difficult – there are two functions in the supplied Utility.cs code file named GetPlatformForAssetBundles (one for the editor and one for the non-editor) and they are just a big switch statement that turns an enumeration for the build platform into a string. They needed me to add;

image

although I can’t be 100% certain that this is the right thing to do, it seemed like a reasonable place to start.
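To give a flavour of that change without the screenshot: GetPlatformForAssetBundles is essentially a lookup from a build-target enumeration to a folder name, and the missing piece was an entry for the WSA/Windows Store target. Here’s that mapping sketched as a table in Python (the real code is a C# switch in Utility.cs, and the exact enum names there may differ from my sketch):

```python
# Sketch of GetPlatformForAssetBundles as a lookup table rather than the
# C# switch in Utility.cs. The "WSAPlayer" entry stands in for the case
# I added so the UWP/HoloLens build target resolves to a folder name.
PLATFORM_FOLDER_NAMES = {
    "StandaloneWindows": "Windows",
    "StandaloneWindows64": "Windows",
    "Android": "Android",
    "iOS": "iOS",
    "WebGL": "WebGL",
    "WSAPlayer": "Windows",  # the added UWP/Windows Store case
}

def get_platform_for_asset_bundles(build_target):
    """Map a build-target enum name to the asset-bundle folder name."""
    return PLATFORM_FOLDER_NAMES.get(build_target)
```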

However, this didn’t work either 🙂

Step 8 – The Wrong Build Target

With that change in place, my code no longer crashed but, after some debugging and some watching of the Unity debug spew, I could ultimately see that Unity was taking a look at my asset bundle and rejecting it. That is, it was saying (and I quote 😉);

The file can not be loaded because it was created for another build target that is not compatible with this platform.
Please make sure to build AssetBundles using the build target platform that it is used by.
File’s Build target is: 5

Ok – so clearly, here, I’ve built the asset bundles using the demo project supplied and they emit some target identifier of “5” into the output and that “5” is not viewed as being compatible with my current build target in my HoloLens project.

I left the assets in the demo project and simply changed the build target to be Windows Store;

image

as the script which does the building (BuildScript) seemed to be making use of EditorUserBuildSettings.activeBuildTarget so that felt like it might work. Because the build platform was now UWP, I needed to make the same changes to the scripts that I’d made in my own project in order to add the “Windows” name for the WSA… runtime platform enumerations.

With that done, I could click “Build Asset Bundles…” and get some new bundles and I overwrote the ones on my web server with these new variants.

That worked out fine and on the HoloLens I can then click the button in order to dynamically pull the cube asset bundle from the web server and display it on the device as below;

20170328_120919_HoloLens

Wrapping Up

With a couple of minor script changes, this workflow around asset bundles seems to work pretty well for the HoloLens. I could perhaps experiment a little further and try out working with different variants of assets and so on but, for my quick experiment, I’m happy that it’s working reasonably well and without too much of a deviation from what the Unity docs say.

What I haven’t yet tried is using the local server that Unity provides and the simulation mode to see if I can get that working for a HoloLens project but it feels like it should work to me…maybe that’s one for another post?

Hitchhiking the HoloToolkit-Unity, Leg 12– More Experiments with Shared Holographic Experiences


In my previous post, I made some steps forward in terms of enabling a basic shared holographic experience using the Sharing Service from the HoloToolkit-Unity such that;

  1. Multiple devices could join a scene
  2. Those devices could create shared holograms that each could see in a consistent place within the real-world
  3. The head position of each device was visible to the other devices within the scene

I wanted to take this further and try to come up with a solution that enables a little more functionality when devices are physically co-located in a space, but also to lay the groundwork for devices located elsewhere, accepting that this might introduce latency issues.

The Scenario

What I wanted to enable runs something like this;

  1. There is a holographic model that multiple users want to look at, talk about etc.
  2. Those users might be physically in the same space, meaning that the model should be consistently placed in their real world in a position that the first user into the scene chooses.
  3. Users should be able to see the position of the HoloLens devices of other users and be aware of what those users are looking at.
    1. A stretch goal would be to allow users to annotate the model, scale, rotate it etc.

As another stretch goal, I’d also like this to work for users who are in a remote location but I’m probably not going to get to that in this blog post.

I wanted to try and make use of the ‘rooms’ facility of the Sharing Service within the HoloToolkit-Unity to see if it can ultimately support the idea of multiple rooms which might be local/remote, to implement something along the lines of;

  1. Join the sharing service.
  2. Identify the current WiFi network name and look for a room on the sharing service with that name.
    1. If that room exists, join it, download and import a spatial anchor and lock the model to that anchor.
    2. If that room doesn’t exist, allow the user to position the model and then create the room, add a spatial anchor to the model and export and upload it to that room on the sharing service.
  3. As users interact, broadcast their head positions (at some interval) over the network and remotely display those head positions and their gaze vector to the other users.
    1. For users in the same room, these head positions can be relative to some world-anchored object so as to make them translatable from one device to another.
    2. In a follow-on post, for remote users, these head positions need to be relative to the position of the models in the local/remote views.

That sounds achievable, so I wanted to give it a try and I thought I’d write up the steps here…although I’m setting off on this journey without knowing whether the remote rooms part is achievable or not; I suspect it will come down to finding out whether messages can be broadcast across rooms by the sharing service.

Here are the steps that I ran through;

Step 0 – Make a Project

I made a new Unity project much like I do in this video and imported the HoloToolkit-Unity;

and I made sure that the project had the capabilities to use the microphone, internet client, private networks and spatial perception and I imported these pieces of the HoloToolkit;

image

Note that I don’t anticipate using the Spatial Mapping functionality here as I’m not intending to have my model collide with the real world but I do need the spatial perception capability in order for world-anchor functionality to work.

However, there is a dependency in the WorldAnchorManager.cs script on the SpatialMapping component. It doesn’t seem like a huge dependency that will impact me, so I could either;

  1. Hack the script.
  2. Import the SpatialMapping section of the Toolkit knowing that I’m not really using it.

I went with (2) so had to also import;

image

and then my project will build.

Step 1 – Make a Model

I didn’t want to spend a long time on making a model, so I made a simple model which is just four cubes positioned at ‘cardinal’ points 1m away from each other, initially centred in front of the user at a distance of 3m;

image

I also added a box collider to that model such that I can treat it as a single object in terms of hit testing and so on and, as you can see, I parented this model off an empty object named ModelParent.

Step 2 – Make it Moveable

I want the user to be able to manipulate the model somewhat and so I’m adding a little bit of control. Note that I am not using the regular Tap To Place script because it has a dependency on Spatial Mapping so that it can collide models with the physical environment which isn’t a concern for me here.

I added a few extra pieces to what I already have, specifically this ‘SharedObjects’ holder which holds an Audio Source, Cursor, Input Manager and Text to Speech Manager;

image

and I reparented my Model into a new, empty parent named ModelParent and I added a Keyword Manager to handle a single “lock” keyword;

image

and then married this up with a script to handle the basic interactions that I want here;

using HoloToolkit.Unity;
using HoloToolkit.Unity.InputModule;
using System;
using UnityEngine;
using UnityEngine.Events;

public class UserMoveable : MonoBehaviour, IManipulationHandler, IInputClickHandler
{
  [SerializeField]
  TextToSpeechManager textToSpeechManager;

  public event EventHandler Locked;

  enum Rail
  {
    X,
    Y
  }
  private void OnEnable()
  {
    this.textToSpeechManager.SpeakText(
      "Tap to toggle the model following you and drag to raise or rotate. Say lock when done");
  }
  public void OnLock()
  {
    // We're done.
    this.gameObject.GetComponent<KeywordManager>().StopKeywordRecognizer();
    this.enabled = false;

    if (this.Locked != null)
    {
      this.Locked(this, EventArgs.Empty);
    }
  }
  public UserMoveable()
  {
    this.rail = Rail.X;
  }
  public void OnManipulationStarted(ManipulationEventData eventData)
  {
    this.lastDelta = eventData.CumulativeDelta;
  }
  public void OnManipulationUpdated(ManipulationEventData eventData)
  {
    if (this.rail == null)
    {
      if (Math.Abs(eventData.CumulativeDelta.x) > Math.Abs(eventData.CumulativeDelta.y))
      {
        this.rail = Rail.X;
      }
      else
      {
        this.rail = Rail.Y;
      }
    }
    // Strangely, this can happen
    if (this.lastDelta.HasValue)
    {
      var delta = eventData.CumulativeDelta - this.lastDelta.Value;

      var xDelta = (0 - delta.x) * HORIZONTAL_FACTOR;

      if (this.rail == Rail.X)
      {
        this.gameObject.transform.Rotate(0, xDelta, 0, Space.Self);
      }
      else
      {
        this.gameObject.transform.Translate(0, delta.y * VERTICAL_FACTOR, 0, Space.World);
      }
    }
    this.lastDelta = eventData.CumulativeDelta;
  }
  public void OnManipulationCompleted(ManipulationEventData eventData)
  {
    this.Done();
  }
  public void OnManipulationCanceled(ManipulationEventData eventData)
  {
    this.Done();
  }
  void Done()
  {
    this.lastDelta = null;
    this.rail = null;
  }
  public void OnInputClicked(InputClickedEventData eventData)
  {
    this.isLockedToGaze = !this.isLockedToGaze;

    if (this.isLockedToGaze)
    {
      this.gazeLockedDistance =
        Math.Max(
          Vector3.Distance(this.gameObject.transform.position, GazeManager.Instance.GazeOrigin),
          MIN_GAZE_LOCK_DISTANCE);

      this.centreOffset = GazeManager.Instance.GazeOrigin +
        (this.gazeLockedDistance * GazeManager.Instance.GazeNormal) - this.gameObject.transform.position;
    }
  }
  void Update()
  {
    if (this.isLockedToGaze)
    {
      var gazeOrigin = GazeManager.Instance.GazeOrigin;

      var gazePosition = gazeOrigin + (GazeManager.Instance.GazeNormal * this.gazeLockedDistance);

      this.gameObject.transform.Translate(
        gazePosition.x - this.gameObject.transform.position.x - this.centreOffset.x,
        0,
        gazePosition.z - this.gameObject.transform.position.z - this.centreOffset.z,
        Space.Self);
    }
  }
  bool isLockedToGaze;
  Rail? rail;
  Vector3? lastDelta;
  Vector3 centreOffset;
  float gazeLockedDistance;

  // These are all really just fudge factors based on a small set of observations.
  const float HORIZONTAL_FACTOR = 250.0f;
  const float VERTICAL_FACTOR = 2.5f;
  const float MIN_GAZE_LOCK_DISTANCE = 0.5f;
}

and this now gives me a model which I can move around via air-tap and drag and then, once done, I can say ‘lock’ and the script disables itself, fires an event and is done.

It’s worth noting that this User Moveable behavior is disabled when the app starts up and is turned on by a later script if it is needed.

Step 3 – Add a Status Display

Even at the most basic level, it’s important to be able to see some kind of status display and so I added a Text Mesh to display some text;

image

and a script to allow the text to be set/cleared from anywhere in my code;

using HoloToolkit.Unity;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class StatusTextDisplay : Singleton<StatusTextDisplay>
{
  public void SetStatusText(string text)
  {
    this.gameObject.GetComponent<TextMesh>().text = text;
  }
  public void ClearStatusText()
  {
    this.SetStatusText(string.Empty);
  }
}

and I parented it from an object that would position it and give it a basic tag along and billboarding behaviour (taken straight from the toolkit) and positioned it such that it is 2m in front of the camera;

image

Step 4 – Connecting to the Sharing Service

I added the Sharing prefab straight from the toolkit and configured it such that it would attempt to connect to my sharing service when it first wakes up and such that it would automatically join a session named ‘Default’;

image

I added a script called Coordinator and had it check for when the sharing stage was connected via its Update routine;

image

void Update()
{
  if (SharingStage.Instance.IsConnected)
  {
  }
}

although, equally, I could have used the Connected event that this component also offers. Either way, I now have a place to add logic for when the sharing service first connects.

Note that the script has access to the parent of the model so it can enable/disable it at the appropriate point in the logic.

Step 5 – Creating/Joining a Room, Positioning the Model and Exporting/Importing World Anchors

I expanded out that Coordinator script such that it contains a state machine that runs something like this;

  1. Waits for the network connection to be made
  2. Waits for the room API to stabilise
  3. Attempts to find a room on the sharing service with the same name as the local WiFi network and creates it if necessary
  4. If the room is found, joins the room, downloads the world anchor found in that room and imports it as the parent of the model
  5. If the room is not found, creates a world anchor for the parent of the model, exports it and uploads it to the room

A note on (2) above. I found that if I simply get hold of the RoomManager and call GetRoomCount() then it can give me a 0 result even when I know that there is a room on the sharing service. However, I’ve found that the result can change on the next frame, so I’ve added a little bit of code here which calls GetRoomCount() for up to N seconds to see whether the answer it gives back is consistent.

That script ended up looking like this;

using HoloToolkit.Sharing;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using System;
using HoloToolkit.Sharing.Tests;

#if UNITY_UWP && !UNITY_EDITOR
using Windows.Networking.Connectivity;
#endif

public class Coordinator : MonoBehaviour
{
  public GameObject modelParent;

  enum CurrentStatus
  {
    WaitingToConnectToStage,
    WaitingForRoomApiToStabilise,
    WaitingForModelPositioning,
    WaitingForWorldAnchorExport,
    WaitingForWorldAnchorImport
  }
  void Start()
  {
    StatusTextDisplay.Instance.SetStatusText("connecting network");

    this.modelParent.SetActive(false);
  }
  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (this.currentStatus)
      {
        case CurrentStatus.WaitingToConnectToStage:

          Debug.Log("Coordinator: moving to connection stage");
          StatusTextDisplay.Instance.SetStatusText("network connected");

          this.GetWiFiNetworkName();
          this.roomApiStartTime = DateTime.Now;
          this.currentStatus = CurrentStatus.WaitingForRoomApiToStabilise;
          break;
        case CurrentStatus.WaitingForRoomApiToStabilise:

          // Note - with a room created, I find that the room API can return 0 rooms
          // and yet call it just one frame later and it changes it mind. Hence...
          // here we give it a little time.
          var doneWaitingForRoomApi = this.WaitForRoomCountToStabilise(ROOM_API_STABILISATION_TIME);

          if (doneWaitingForRoomApi)
          {
            this.CreateOrJoinRoomBasedonWifiNetworkName();
          }
          break;
        default:
          break;
      }
    }
  }
  bool WaitForRoomCountToStabilise(TimeSpan timeSpan)
  {
    bool expired = false;

    if (this.roomApiStartTime == null)
    {
      this.roomApiStartTime = DateTime.Now;
    }
    if (DateTime.Now - this.roomApiStartTime > timeSpan)
    {
      expired = true;
    }
    else
    {
      var roomManager = SharingStage.Instance.Manager.GetRoomManager();
      expired = (roomManager.GetRoomCount() > 0);
    }
    return (expired);
  }
  void CreateOrJoinRoomBasedonWifiNetworkName()
  {
    StatusTextDisplay.Instance.SetStatusText(
      string.Format("using wifi name of {0}", wifiName));

    Debug.Log(String.Format("Coordinator: identified WiFi as {0}", wifiName));

    var roomManager = SharingStage.Instance.Manager.GetRoomManager();
    this.currentRoom = null;
    var roomCount = roomManager.GetRoomCount();

    Debug.Log(String.Format("Coordinator: discovered {0} rooms", roomCount));

    for (int i = 0; i < roomCount; i++)
    {
      var room = roomManager.GetRoom(i);

      if (room.GetName() == wifiName)
      {
        this.currentRoom = room;
        break;
      }
    }

    if (this.currentRoom == null)
    {
      StatusTextDisplay.Instance.SetStatusText("setting up new room");

      this.currentRoom = roomManager.CreateRoom(new XString(wifiName), roomCount + 1, true);
      Debug.Log("Coordinator: created a new room for this WiFi network");

      this.modelParent.GetComponent<UserMoveable>().enabled = true;

      this.MoveToStatus(CurrentStatus.WaitingForModelPositioning);
      StatusTextDisplay.Instance.SetStatusText("waiting for user to position model");

      Debug.Log("Coordinator: waiting for user to position model");
      this.modelParent.GetComponent<UserMoveable>().Locked += OnPositionLocked;    
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("joining existing room");

      roomManager.JoinRoom(this.currentRoom);

      var manager = this.modelParent.AddComponent<ImportAnchorManager>() as ICompleted;
      manager.Completed += this.OnImportOrExportCompleted;

      this.MoveToStatus(CurrentStatus.WaitingForWorldAnchorImport);

      StatusTextDisplay.Instance.SetStatusText("waiting for room sync");

      Debug.Log("Coordinator: joined an existing room for this WiFi network");
    }
    this.modelParent.SetActive(true);
  }
  void OnPositionLocked(object sender, EventArgs e)
  {
    Debug.Log("Coordinator: position has been locked by user");
    this.modelParent.GetComponent<UserMoveable>().Locked -= OnPositionLocked;

    StatusTextDisplay.Instance.SetStatusText("creating room sync");

    var manager = this.modelParent.AddComponent<ExportAnchorManager>() as ICompleted;
    manager.Completed += OnImportOrExportCompleted;

    this.MoveToStatus(CurrentStatus.WaitingForWorldAnchorExport);
  }
  void OnImportOrExportCompleted(bool succeeded)
  {
    StatusTextDisplay.Instance.SetStatusText("room in sync");

    if (this.currentStatus == CurrentStatus.WaitingForWorldAnchorImport)
    {
      // TBD: we're done importing the world anchor.
    }
    else
    {
      // TBD: we're done exporting the world anchor.
    }
    // Switch on the remote head management.
    this.modelParent.GetComponent<RemoteHeadManager>().enabled = true;
  }
  void GetWiFiNetworkName()
  {
    if (this.wifiName == null)
    {
#if UNITY_UWP && !UNITY_EDITOR
      var interfaces = NetworkInformation.GetConnectionProfiles();

      // FirstOrDefault can return null (e.g. no WiFi connection) so guard it.
      var wifi = interfaces.FirstOrDefault(
        i => (i.GetNetworkConnectivityLevel() != NetworkConnectivityLevel.None) &&
             (i.IsWlanConnectionProfile));

      if (wifi != null)
      {
        this.wifiName = wifi.WlanConnectionProfileDetails.GetConnectedSsid();
      }
#endif
    }
  }
  void MoveToStatus(CurrentStatus newStatus)
  {
    // This is here to add logging etc. at a later point...
    this.currentStatus = newStatus;
  }
  string wifiName;
  Room currentRoom;
  CurrentStatus currentStatus;
  DateTime roomApiStartTime;
  static readonly TimeSpan ROOM_API_STABILISATION_TIME = TimeSpan.FromSeconds(3);
}

This script ends up attaching either an ExportAnchorManager or an ImportAnchorManager component to the parent of the model, depending on whether the device created the room or joined an existing one.

Step 6 – Importing/Exporting World Anchors

I took the Import Export Anchor Manager script from the HoloToolkit and split it out into two scripts with one for export and one for import.

I mostly rewrote the code that I found in that script along the way. In doing so, I certainly lost some of the robustness of the original (in favour of simplicity) and I also took out some of its functionality – e.g. saving/loading anchors from the local anchor store rather than always hitting the network. Additionally, the scripts I ended up with assume that the connection to the Sharing Service has already been made and that the Room Manager has been used to set up a current room.

As part of this, I made this little base class and interface;

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

public interface ICompleted
{
  event Action<bool> Completed;
}
public class AnchorManager<T> : Singleton<T>, ICompleted where T : Singleton<T>
{
  public event Action<bool> Completed;

  /// <summary>
  /// The room manager API for the sharing service.
  /// </summary>
  protected RoomManager roomManager;

  /// <summary>
  /// Keeps track of the current room we are connected to.  Anchors
  /// are kept in rooms.
  /// </summary>
  protected Room currentRoom;

  /// <summary>
  /// Provides updates when anchor data is uploaded/downloaded.
  /// </summary>
  protected RoomManagerAdapter roomManagerListener;

  protected void FireCompleted(bool succeeded)
  {
    if (this.Completed != null)
    {
      this.Completed(succeeded);
    }
  }
  protected virtual void AddRoomManagerHandlers()
  {

  }
  protected void ConnectToRoom()
  {
    if (SharingStage.Instance.ShowDetailedLogs)
    {
      Debug.Log("Anchor Manager: Starting...");
    }

    // Setup the room manager callbacks.
    roomManager = SharingStage.Instance.Manager.GetRoomManager();
    roomManagerListener = new RoomManagerAdapter();
    this.AddRoomManagerHandlers();

    roomManager.AddListener(roomManagerListener);

    currentRoom = roomManager.GetCurrentRoom();
  }
  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      if (roomManager != null)
      {
        roomManager.RemoveListener(roomManagerListener);
      }
      roomManagerListener.Dispose();
      roomManagerListener = null;
    }

    if (roomManager != null)
    {
      roomManager.Dispose();
      roomManager = null;
    }
    base.OnDestroy();
  }
}
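
One small thing to note before the two derived classes – consuming code only needs the ICompleted interface, which is how the coordinator script earlier in the post uses them. Something like the below (the lambda here is just illustrative – the coordinator uses a named handler);

```csharp
// Either manager can be driven purely via ICompleted;
var manager = this.modelParent.AddComponent<ImportAnchorManager>() as ICompleted;

// 'succeeded' tells us whether the anchor upload/download worked.
manager.Completed += succeeded =>
{
  Debug.Log("anchor sync " + (succeeded ? "completed" : "failed"));
};
```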

My ExportAnchorManager looks like;

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

#if UNITY_WSA && !UNITY_EDITOR
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;
#endif

public class ExportAnchorManager : AnchorManager<ExportAnchorManager>
{
  /// <summary>
  /// The anchor this object is attached to.
  /// </summary>
#if UNITY_WSA && !UNITY_EDITOR
  WorldAnchor worldAnchor;
#endif // UNITY_WSA

  ExportState currentState = ExportState.Start;

  /// <summary>
  /// Keeps track of the name of the anchor we are exporting.
  /// </summary>
  string exportingAnchorName;

  /// <summary>
  /// The datablob of the anchor.
  /// </summary>
  List<byte> exportingAnchorBytes = new List<byte>();

  /// <summary>
  /// WorldAnchorTransferBatch is the primary object in serializing/deserializing anchors.
  /// </summary>
  /// <remarks>Only available on device.</remarks>

#if UNITY_WSA && !UNITY_EDITOR
  WorldAnchorTransferBatch worldAnchorTransferBatch;
#endif // UNITY_WSA

  /// <summary>
  /// Sometimes we'll see a really small anchor blob get generated.
  /// These tend to not work, so we have a minimum trustable size.
  /// </summary>
  const uint MinTrustworthySerializedAnchorDataSize = 100 * 1024;

  enum ExportState
  {
    // Overall states
    Start,
    WaitingForAnchorLocation,
    ExportingAnchor,
    Failed,
    AnchorUploaded
  }

  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (this.currentState)
      {
        case ExportState.Start:
          this.currentState = ExportState.WaitingForAnchorLocation;
          this.ConnectToRoom();

#if UNITY_WSA && !UNITY_EDITOR
          this.worldAnchor = GetComponent<WorldAnchor>() ?? gameObject.AddComponent<WorldAnchor>();
          StatusTextDisplay.Instance.SetStatusText("waiting for model lock");
#endif
          break;
        case ExportState.WaitingForAnchorLocation:
#if UNITY_WSA && !UNITY_EDITOR
          if (this.worldAnchor.isLocated)
          {
            this.currentState = ExportState.ExportingAnchor;
            StatusTextDisplay.Instance.SetStatusText("exporting lock from device");
            this.ExportWorldAnchor();
          }
#endif // UNITY_WSA
          break;
        default:
          break;
      }
    }
  }
  protected override void AddRoomManagerHandlers()
  {
    base.AddRoomManagerHandlers();
    roomManagerListener.AnchorUploadedEvent += this.OnAnchorUploadCompleted;
  }
  void ExportWorldAnchor()
  {
    string guidString = Guid.NewGuid().ToString();

    exportingAnchorName = guidString;

    // Save the anchor to our local anchor store.
    if (SharingStage.Instance.ShowDetailedLogs)
    {
      Debug.Log("Anchor Manager: Exporting anchor " + exportingAnchorName);
    }

#if UNITY_WSA && !UNITY_EDITOR

    worldAnchorTransferBatch = new WorldAnchorTransferBatch();
    worldAnchorTransferBatch.AddWorldAnchor(guidString, worldAnchor);
    WorldAnchorTransferBatch.ExportAsync(worldAnchorTransferBatch, WriteBuffer, ExportComplete);

#endif // UNITY_WSA
  }
  void WriteBuffer(byte[] data)
  {
    exportingAnchorBytes.AddRange(data);
  }
  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      roomManagerListener.AnchorUploadedEvent -= this.OnAnchorUploadCompleted;
    }
    base.OnDestroy();
  }
  void OnAnchorUploadCompleted(bool successful, XString failureReason)
  {
    if (successful)
    {
      StatusTextDisplay.Instance.SetStatusText("lock copied");

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.Log("Anchor Manager: Successfully uploaded anchor");
      }
      currentState = ExportState.AnchorUploaded;
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("lock copy failed");

      Debug.LogError("Anchor Manager: Upload failed " + failureReason);
      currentState = ExportState.Failed;
    }
    base.FireCompleted(currentState == ExportState.AnchorUploaded);
  }
#if UNITY_WSA && !UNITY_EDITOR

  void ExportComplete(SerializationCompletionReason status)
  {
    if ((status == SerializationCompletionReason.Succeeded)
      && (exportingAnchorBytes.Count > MinTrustworthySerializedAnchorDataSize))
    {
      StatusTextDisplay.Instance.SetStatusText(
        string.Format(
          "copying {0:N2}MB lock to server",
          exportingAnchorBytes.Count / (1024.0 * 1024.0)));

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.Log("Anchor Manager: Uploading anchor: " + exportingAnchorName);
      }

      roomManager.UploadAnchor(
          currentRoom,
          new XString(exportingAnchorName),
          exportingAnchorBytes.ToArray(),
          exportingAnchorBytes.Count);
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("retrying export");

      Debug.LogWarning("Anchor Manager: Failed to upload anchor, trying again...");

      currentState = ExportState.WaitingForAnchorLocation;
    }
  }
#endif // UNITY_WSA
}

and the ImportAnchorManager is as below;

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Sharing;

#if UNITY_WSA && !UNITY_EDITOR
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;
#endif

public class ImportAnchorManager : AnchorManager<ImportAnchorManager>
{
  enum ImportState
  {
    Start,
    Failed,
    ReadyToImport,
    DataRequested,
    DataDownloadedReadyForImport,
    Importing,
    AnchorImportedAndLocked
  }

  ImportState currentState = ImportState.Start;

  byte[] rawAnchorData;

#if UNITY_WSA && !UNITY_EDITOR

  WorldAnchor worldAnchor;

#endif

  void Update()
  {
    if (SharingStage.Instance.IsConnected)
    {
      switch (currentState)
      {
#if UNITY_WSA && !UNITY_EDITOR
        case ImportState.Start:
          ConnectToRoom();
          this.currentState = ImportState.ReadyToImport;
          break;
        case ImportState.ReadyToImport:
          MakeAnchorDataRequest();
          break;
        case ImportState.DataDownloadedReadyForImport:
          // DataReady is set when the anchor download completes.
          currentState = ImportState.Importing;
          StatusTextDisplay.Instance.SetStatusText("importing room lock data");

          WorldAnchorTransferBatch.ImportAsync(rawAnchorData, ImportComplete);
          break;
#endif
      }
    }
  }
  protected override void AddRoomManagerHandlers()
  {
    base.AddRoomManagerHandlers();
    this.roomManagerListener.AnchorsDownloadedEvent += this.OnAnchorDownloadCompleted;
  }

  protected override void OnDestroy()
  {
    if (roomManagerListener != null)
    {
      roomManagerListener.AnchorsDownloadedEvent -= OnAnchorDownloadCompleted;
    }
    base.OnDestroy();
  }
  void OnAnchorDownloadCompleted(
    bool successful,
    AnchorDownloadRequest request,
    XString failureReason)
  {
    // If we downloaded anchor data successfully we should import the data.
    if (successful)
    {
      StatusTextDisplay.Instance.SetStatusText(
        "room sync data downloaded");

      int datasize = request.GetDataSize();

      if (SharingStage.Instance.ShowDetailedLogs)
      {
        Debug.LogFormat("Anchor Manager: Anchor size: {0} bytes.", datasize.ToString());
      }

      rawAnchorData = new byte[datasize];

      request.GetData(rawAnchorData, datasize);

      currentState = ImportState.DataDownloadedReadyForImport;
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText(
        "retrying room lock request");

      // If we failed, we can ask for the data again.
      Debug.LogWarning("Anchor Manager: Anchor DL failed " + failureReason);

#if UNITY_WSA && !UNITY_EDITOR
      MakeAnchorDataRequest();
#endif
    }
  }

#if UNITY_WSA && !UNITY_EDITOR

  void MakeAnchorDataRequest()
  {
    StatusTextDisplay.Instance.SetStatusText("requesting sync data");

    if (roomManager.DownloadAnchor(currentRoom, currentRoom.GetAnchorName(0)))
    {
      currentState = ImportState.DataRequested;
    }
    else
    {
      Debug.LogError("Anchor Manager: Couldn't make the download request.");

      currentState = ImportState.Failed;
    }
  }
  void ImportComplete(SerializationCompletionReason status, WorldAnchorTransferBatch anchorBatch)
  {
    if (status == SerializationCompletionReason.Succeeded)
    {
      if (anchorBatch.GetAllIds().Length > 0)
      {
        string first = anchorBatch.GetAllIds()[0];

        if (SharingStage.Instance.ShowDetailedLogs)
        {
          Debug.Log("Anchor Manager: Successfully imported anchor " + first);
        }
        this.worldAnchor = anchorBatch.LockObject(first, gameObject);

        StatusTextDisplay.Instance.SetStatusText("room lock imported");
      }

      base.FireCompleted(true);
    }
    else
    {
      StatusTextDisplay.Instance.SetStatusText("retrying room lock import");

      Debug.LogError("Anchor Manager: Import failed");

      currentState = ImportState.DataDownloadedReadyForImport;
    }
  }
#endif // UNITY_WSA
}

and so at this point, I should have an app that can run on multiple devices and (in a basic way) share holograms between them.

The app connects to the sharing service and creates or joins a room based on the WiFi network name. A user who creates a room gets to position the model where they want it, and the app then progresses through exporting/importing world anchors such that each user sees the model positioned in the same place in the real world.
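
To summarise that flow, here's a sketch of what happens on each device – note that these helper names are made up purely for illustration; the real work is done by the scripts above;

```csharp
// Illustrative sketch of the per-device flow; the helper methods
// here are hypothetical stand-ins for the scripts in this post.
void OnConnectedToSharingService()
{
  // Rooms are keyed off the WiFi network name.
  var roomName = GetWiFiNetworkName();

  if (!RoomExists(roomName))
  {
    // First device in: create the room, let the user position
    // the model, then export + upload the world anchor.
    CreateRoom(roomName);
    WaitForUserToPositionModel();
    ExportAndUploadWorldAnchor();
  }
  else
  {
    // Later devices: join the room, download the anchor blob
    // and import it to lock the model to the same spot.
    JoinRoom(roomName);
    DownloadAndImportWorldAnchor();
  }

  // Finally, switch on the remote head visualisation.
  EnableRemoteHeadTracking();
}
```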

I made a video of this operating in a “single user” mode below. It doesn’t quite convey what the app is doing, but it’s the best I could manage with a single HoloLens device;

Step 7 – Head and Gaze Tracking

There’s already a script in the HoloToolkit-Unity which sends custom messages through the sharing service so that each device in a shared session can visualise the position of the other devices.

I used this script in my previous post. It’s called Remote Head Manager and it relies on another Custom Messages script, so I added both of these to my modelParent object;

image

That Remote Head Manager test script from the toolkit draws a cube for the head position and doesn’t draw the remote user’s gaze, so I modified the script to make the prefab that it draws configurable and to draw the remote user’s gaze with a line. My modified version of the script is below, with MIKET_CHANGE wrapped around the places where I’ve altered the original;

#define MIKET_CHANGE
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See LICENSE in the project root for license information.

using System;
using System.Collections.Generic;
using UnityEngine;
using HoloToolkit.Unity;
using HoloToolkit.Unity.InputModule;

namespace HoloToolkit.Sharing.Tests
{
  /// <summary>
  /// Broadcasts the head transform of the local user to other users in the session,
  /// and adds and updates the head transforms of remote users.
  /// Head transforms are sent and received in the local coordinate space of the GameObject this component is on.
  /// </summary>
  public class RemoteHeadManager : Singleton<RemoteHeadManager>
  {
    public class RemoteHeadInfo
    {
      public long UserID;
      public GameObject HeadObject;
    }

#if MIKET_CHANGE
    public GameObject remoteHeadPrefab;
#endif

    /// <summary>
    /// Keep a list of the remote heads, indexed by XTools userID
    /// </summary>
    private Dictionary<long, RemoteHeadInfo> remoteHeads = new Dictionary<long, RemoteHeadInfo>();

#if MIKET_CHANGE
    private void OnEnable()
    {
      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] = UpdateHeadTransform;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#else
    private void Start()
    {
      CustomMessages.Instance.MessageHandlers[CustomMessages.TestMessageID.HeadTransform] = UpdateHeadTransform;

      // SharingStage should be valid at this point, but we may not be connected.
      if (SharingStage.Instance.IsConnected)
      {
        Connected();
      }
      else
      {
        SharingStage.Instance.SharingManagerConnected += Connected;
      }
    }
    private void Connected(object sender = null, EventArgs e = null)
    {
      SharingStage.Instance.SharingManagerConnected -= Connected;

      SharingStage.Instance.SessionUsersTracker.UserJoined += UserJoinedSession;
      SharingStage.Instance.SessionUsersTracker.UserLeft += UserLeftSession;
    }
#endif

    private void Update()
    {
      // Grab the current head transform and broadcast it to all the other users in the session
      Transform headTransform = Camera.main.transform;

      // Transform the head position and rotation from world space into local space
      Vector3 headPosition = transform.InverseTransformPoint(headTransform.position);

      Quaternion headRotation = Quaternion.Inverse(transform.rotation) * headTransform.rotation;

      CustomMessages.Instance.SendHeadTransform(headPosition, headRotation);
    }

    protected override void OnDestroy()
    {
      if (SharingStage.Instance != null)
      {
        if (SharingStage.Instance.SessionUsersTracker != null)
        {
          SharingStage.Instance.SessionUsersTracker.UserJoined -= UserJoinedSession;
          SharingStage.Instance.SessionUsersTracker.UserLeft -= UserLeftSession;
        }
      }

      base.OnDestroy();
    }

    /// <summary>
    /// Called when a new user is leaving the current session.
    /// </summary>
    /// <param name="user">User that left the current session.</param>
    private void UserLeftSession(User user)
    {
      int userId = user.GetID();
      if (userId != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        RemoveRemoteHead(remoteHeads[userId].HeadObject);
        remoteHeads.Remove(userId);
      }
    }

    /// <summary>
    /// Called when a user is joining the current session.
    /// </summary>
    /// <param name="user">User that joined the current session.</param>
    private void UserJoinedSession(User user)
    {
      if (user.GetID() != SharingStage.Instance.Manager.GetLocalUser().GetID())
      {
        GetRemoteHeadInfo(user.GetID());
      }
    }

    /// <summary>
    /// Gets the data structure for the remote users' head position.
    /// </summary>
    /// <param name="userId">User ID for which the remote head info should be obtained.</param>
    /// <returns>RemoteHeadInfo for the specified user.</returns>
    public RemoteHeadInfo GetRemoteHeadInfo(long userId)
    {
      RemoteHeadInfo headInfo;

      // Get the head info if its already in the list, otherwise add it
      if (!remoteHeads.TryGetValue(userId, out headInfo))
      {
        headInfo = new RemoteHeadInfo();
        headInfo.UserID = userId;
        headInfo.HeadObject = CreateRemoteHead();

        remoteHeads.Add(userId, headInfo);
      }

      return headInfo;
    }

    /// <summary>
    /// Called when a remote user sends a head transform.
    /// </summary>
    /// <param name="msg"></param>
    private void UpdateHeadTransform(NetworkInMessage msg)
    {
      // Parse the message
      long userID = msg.ReadInt64();

      Vector3 headPos = CustomMessages.Instance.ReadVector3(msg);

      Quaternion headRot = CustomMessages.Instance.ReadQuaternion(msg);

      RemoteHeadInfo headInfo = GetRemoteHeadInfo(userID);
      headInfo.HeadObject.transform.localPosition = headPos;
      headInfo.HeadObject.transform.localRotation = headRot;

#if MIKET_CHANGE
      var rayLength = maxRayDistance;

      RaycastHit hitInfo;

      if (Physics.Raycast(
        headInfo.HeadObject.transform.position,
        headInfo.HeadObject.transform.forward,
        out hitInfo))
      {
        rayLength = hitInfo.distance;
      }
      var lineRenderer = headInfo.HeadObject.GetComponent<LineRenderer>();
      lineRenderer.SetPosition(1, Vector3.forward * rayLength);
#endif
    }

    /// <summary>
    /// Creates a new game object to represent the user's head.
    /// </summary>
    /// <returns></returns>
    private GameObject CreateRemoteHead()
    {
      GameObject newHeadObj = Instantiate(this.remoteHeadPrefab);
      newHeadObj.transform.parent = gameObject.transform;

#if MIKET_CHANGE
      this.AddLineRenderer(newHeadObj);
#endif
      return newHeadObj;
    }
#if MIKET_CHANGE
    void AddLineRenderer(GameObject headObject)
    {
      var lineRenderer = headObject.AddComponent<LineRenderer>();
      lineRenderer.useWorldSpace = false;
      lineRenderer.startWidth = 0.01f;
      lineRenderer.endWidth = 0.05f;
      lineRenderer.numPositions = 2;
      lineRenderer.SetPosition(0, Vector3.forward * 0.1f);
      var material = new Material(Shader.Find("Diffuse"));
      material.color = colors[this.colorIndex++ % colors.Length];

      lineRenderer.material = material;
    }
#endif

    /// <summary>
    /// When a user has left the session this will cleanup their
    /// head data.
    /// </summary>
    /// <param name="remoteHeadObject"></param>
    private void RemoveRemoteHead(GameObject remoteHeadObject)
    {
      DestroyImmediate(remoteHeadObject);
    }
#if MIKET_CHANGE
    const float maxRayDistance = 5.0f;
    int colorIndex;
    static Color[] colors =
    {
      Color.red,
      Color.green,
      Color.blue,
      Color.cyan,
      Color.magenta,
      Color.yellow
    };
#endif
  }
}

and the prefab in question here is a small model of a HoloLens;

image

which will show up with a ‘ray’ emanating from it at the position where each of the other HoloLens devices is present in the scene.
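
It’s maybe worth calling out why the head transforms travel in the local space of the shared parent rather than world space – each device has its own world coordinate system, but the world anchor pins the shared parent to the same physical spot on every device, so parent-local coordinates mean the same thing everywhere. The round trip from the script above (sharedParent, head and remoteHead here are stand-ins for the transforms used in the script);

```csharp
// Sender: convert the camera (head) transform from this device's
// world space into the shared parent's local space before sending.
Vector3 localPos = sharedParent.InverseTransformPoint(head.position);
Quaternion localRot = Quaternion.Inverse(sharedParent.rotation) * head.rotation;

// Receiver: apply the values as local position/rotation under its
// own copy of the shared parent; because the world anchor has pinned
// that parent to the same physical place, the remote head shows up
// where that user really is.
remoteHead.transform.localPosition = localPos;
remoteHead.transform.localRotation = localRot;
```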

Wrapping Up & Next Steps

This post has got long, so I’ll break off here and follow up in another post if I can get an extension of this code working with users who are not in the same physical room and so need a remote experience enabling.

As a teaser, here’s an early experiment at trying that with my colleague Pete connecting to a cloud-based server. Pete is remotely viewing this model located in my home office, and I can remotely see what Pete is viewing;

file

and so the initial experiment is looking pretty good!

If you’re interested in following along, the code for the post is here on Github – enjoy!