Hitchhiking the HoloToolkit-Unity, Leg 11–More Steps with Sharing Holographic Experiences

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

In my previous post I experimented with the basics of sharing holographic experiences by writing my own network pieces based on TCP. I went through the basics of the APIs that let an application create an anchor for a hologram which can then be shared with another device, such that multiple devices can have a shared co-ordinate system and holograms (and interactions with them) can span multiple devices.

That was mostly about my own experimentation and I ended up building quite a bit of code but it was mostly for learning.

In this post, I wanted to follow up on that by solely using pieces from the HoloToolkit to achieve a much more flexible and capable solution and with much less work on my part. My aim is to enable a scenario whereby;

  1. Two or more HoloLens devices sit on a common network.
  2. Those devices share enough information to enable a common co-ordinate system.
  3. Those devices can then use that common co-ordinate system so that some holograms created on one device are visible on a second and third device and vice versa.

Clearly, there’s a lot more that you can do with shared holographic experiences but I just wanted to get this up and running to see how it looks if implemented purely with the HoloToolkit.

Here are the steps that I went through. As always with these posts, please refer back to the official documentation that I linked to extensively from my previous post and specifically this page on GitHub;


Step 1 – Running the SharingService

I’m going to share data between HoloLens devices by using the SharingService piece of the HoloToolkit-Unity which you’ll find in this folder;


I think the sharing componentry in the HoloToolkit also has the capability to share data across devices without having a separate service acting as a server but I haven’t experimented with that to date and so I’m going to focus on using the sharing service in this post.

This service can be run as either a simple command line exe (via the -local argument) or it can be run as a Windows Service via the -install/-remove arguments. To date, I’ve run it using the -local argument where it displays output something like;


You can see here that the service listens on port 20602 on the various IP addresses that it finds on my system. This sharing service is quite a complex piece of kit and, as a minimum, it offers features including;

  1. Managing a set of ‘sessions’ that multiple clients can join/leave.
  2. Managing a set of ‘rooms’ within ‘sessions’ that clients can join/leave.
  3. Managing a set of ‘anchors’ within ‘rooms’ that clients can upload/download.

The service can do more than this but that’s the core functionality that I’m going to make use of within this post.

Before using the service, I’d strongly recommend making sure that you can connect to it from another machine to ensure that your firewall settings are ok. To help with that, in this folder;


You’ll find a ‘Session Manager’ GUI tool which connects up to that server and offers quite a bit of functionality but, to start with, it’s a useful, visual way of checking that things are working.

Here’s an example screenshot below although note that I wasn’t running this from one PC to another – both apps here were running on the same PC;


In the screenshot above, you can see that this tool has connected to a session on the localhost, port 20602 and the command line emits debug information to that effect;


With this server up and running, how does an app connect to it?

Step 2 – Connecting to the Sharing Service

With a new project in Unity, I can bring in the HoloToolkit and then make sure that I have set up the project and scene settings as in this video;

and then I can make sure that I have the right capabilities in my project which for my example here means;

  • Spatial perception (to create spatial anchors)
  • Private networks (for connectivity to my SharedService server)
  • Internet client

as below (plus the spatial perception capability which is just off screen here);
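Those capabilities end up as entries in the generated Package.appxmanifest. As a rough sketch (the exact namespace prefixes come from the manifest that Unity generates for you), the relevant section looks something like;

```xml
<Capabilities>
  <!-- Internet client connectivity -->
  <Capability Name="internetClient" />
  <!-- Private networks, for connecting to the SharingService on the LAN -->
  <Capability Name="privateNetworkClientServer" />
  <!-- Spatial perception, needed to create/export world anchors -->
  <uap2:Capability Name="spatialPerception" />
</Capabilities>
```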


From there, I can add some kind of blank 3D object (called ‘Placeholder’ below) into my scene;


and then I can add a couple of components from the toolkit;


The big component here is the SharingStage which brokers/owns the connection to the sharing service and has a tonne of functionality. You’ll notice in the above screenshot that I have set up;

  1. The IP address and the port of the sharing service to be connected to.
  2. That the component should connect as soon as it awakes rather than waiting for some manual intervention.

but there are a lot of options that you can set here.

I’ve also added the Auto Join Session script which (as the name suggests) works on top of the Sharing Stage so as to attempt to automatically join a session named ‘Default’ here and to create it if it’s not already found on the sharing service.

With that in place, I can run this code on my HoloLens and I see this sort of output from the Sharing Service;


and so very quickly I’ve enabled multiple HoloLens devices to connect to this service and also to join a session on the service.

The sharing service will then manage devices joining and leaving sessions.
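If you want to react in code to the connection being established (rather than just relying on Auto Join Session), a minimal sketch might look like the below. Note that I’m assuming the SharingStage.Instance singleton and its SharingManagerConnected event plus ServerAddress/ServerPort properties here, so check the toolkit source for the exact names in the version that you’re using;

```csharp
using HoloToolkit.Sharing;
using UnityEngine;

public class ConnectionMonitor : MonoBehaviour
{
  void Start()
  {
    // Fires once the SharingStage has connected to the sharing service.
    SharingStage.Instance.SharingManagerConnected += OnConnected;
  }
  void OnConnected(object sender, System.EventArgs e)
  {
    Debug.Log("Connected to sharing service at " +
      SharingStage.Instance.ServerAddress + ":" + SharingStage.Instance.ServerPort);
  }
}
```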

Step 3 – Establish a Shared World Anchor

Now that I potentially have multiple HoloLens devices all talking to the same Sharing Service, I need to establish some reference point in the environment which all those devices can use to establish a shared coordinate system such that each user sees holograms in the right place.

That’s what spatial anchors are for and I worked through the basics of sharing these across devices in my previous post and it runs something like this;

  1. Add a world anchor to an object, making sure that it is located in the world.
  2. Export that world anchor.
  3. Send it over the network.
  4. Import it onto another device.
  5. Lock some object to that imported world anchor.
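Those steps can be sketched in code with the Unity APIs of the time (UnityEngine.VR.WSA). This is a simplified sketch – real code would watch for the anchor becoming located rather than testing once, and the SendBytesOverNetwork method is a hypothetical stand-in for whatever transport you’re using;

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VR.WSA;
using UnityEngine.VR.WSA.Sharing;

public class AnchorExporter : MonoBehaviour
{
  List<byte> exportedData = new List<byte>();

  public void ExportAnchor(GameObject anchoredObject)
  {
    // Step 1: anchor the object into the world.
    var anchor = anchoredObject.AddComponent<WorldAnchor>();

    // Step 2: only export once the anchor is located in the world.
    if (anchor.isLocated)
    {
      var batch = new WorldAnchorTransferBatch();
      batch.AddWorldAnchor("myAnchorId", anchor);

      // The export delivers data in segments before completion.
      WorldAnchorTransferBatch.ExportAsync(batch, OnExportDataAvailable, OnExportComplete);
    }
  }
  void OnExportDataAvailable(byte[] data)
  {
    this.exportedData.AddRange(data);
  }
  void OnExportComplete(SerializationCompletionReason reason)
  {
    if (reason == SerializationCompletionReason.Succeeded)
    {
      // Step 3: hypothetical - ship the bytes to the other device over
      // whatever network transport is in use.
      SendBytesOverNetwork(this.exportedData.ToArray());
    }
  }
  void SendBytesOverNetwork(byte[] bytes) { /* transport-specific */ }
}
```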

In my example here, I want to be able to dynamically create some hologram (e.g. a cube) on one device and have it be visible on other devices. I can achieve this by setting up some parent object whose position and orientation is synchronised across the devices and which all other holograms can then take as their common reference point to position themselves relative to.

If I make my synchronised parent object (an empty 3D object) as below along with an AnchorText prefab from the toolkit;


then I can add a script called Import and Export Anchor Manager from the Toolkit to that SynchronizedParent;


then this script does quite a lot of work for us including;

  1. Creating the named room for us if it finds the need and, for that scenario;
    1. Creating a world anchor for the gameObject that it is attached to.
    2. Exporting that world anchor and uploading it into the room on the sharing service.
  2. Watching for uploaded world anchors in the named room on the sharing service;
    1. Downloading them from the sharing service.
    2. Importing them into the local environment.
    3. Locking the attached gameObject to that world anchor.
  3. Downloading world anchors from the room when a device newly arrives into it.
  4. Deciding whether to keep the room alive should all the devices ‘leave’ the room.

So, this script (which lives in the Sharing\Tests\Scripts folder) is pretty much a one-stop shop in the sense that I can attach it to this SynchronizedParent game object and it will ‘automatically’ try and make sure that the object is anchored across all the devices that are present in the named room. Note that the Anchor Debug Text here is very useful for monitoring what’s going on. Note also that there’s a reasonable state machine inside of this script to go through all the stages of importing/exporting world anchors, like making sure they are ‘located’ and so on. It does quite a few things for you.

If I then run this code on my device then I see;


and so on a second (and third) device, this anchor would be downloaded, imported and the associated object (SynchronizedParent) would be locked to be in the same position in the room on those devices as it was on the originating device.

That means that it is now possible to create holograms relative to this parent object which will show up in the same position on all devices.
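As a tiny illustration of that idea (a sketch, with ‘synchronizedParent’ standing in for the anchored object above) – a hologram created as a child of the anchored parent resolves to the same physical spot on every device, because the parent itself is world anchored on each of them;

```csharp
using UnityEngine;

public class ChildHologramExample : MonoBehaviour
{
  public GameObject synchronizedParent; // the world-anchored parent

  public void CreateSharedCube()
  {
    var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);

    // Parent the cube to the anchored object and position it relative to
    // that parent - the same local position then means the same physical
    // position on every device that shares the anchor.
    cube.transform.SetParent(this.synchronizedParent.transform, false);
    cube.transform.localPosition = new Vector3(0.0f, 0.0f, 2.0f);
  }
}
```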

Step 4 – Creating Holograms Relative to the Shared Anchor

It’s pretty easy to put a script onto the SynchronizedParent to handle the Tapped event. There are other ways, but here’s one;


and then in that script, I can handle the Tapped event;

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class TappedHandler : MonoBehaviour
{
  void Start()
  {
    this.recognizer = new GestureRecognizer();
    this.recognizer.TappedEvent += OnTapped;
    this.recognizer.StartCapturingGestures();
  }
  void OnTapped(InteractionSourceKind source, int tapCount, Ray headRay)
  {
    // TODO: create a hologram here.
  }
  GestureRecognizer recognizer;
}

but how to create a hologram here such that the hologram will be replicated to the other devices that are connected to the Sharing Service? There are manual ways but it turns out that the sharing part of the toolkit has already thought of this and there’s a base class SpawnManager&lt;T&gt; and a derivation PrefabSpawnManager which helps with this. This is part of a larger set of functionality around having shared data models across devices but I’m only going to use this piece of it in this post.

I can then tailor that previous script to use this support;

using HoloToolkit.Sharing;
using HoloToolkit.Sharing.Spawning;
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class TappedHandler : MonoBehaviour
{
  public PrefabSpawnManager spawnManager;

  void Start()
  {
    this.recognizer = new GestureRecognizer();
    this.recognizer.TappedEvent += OnTapped;
    this.recognizer.StartCapturingGestures();
  }
  void OnTapped(InteractionSourceKind source, int tapCount, Ray headRay)
  {
    // If we're networking...
    if (SharingStage.Instance.IsConnected)
    {
      // Make a new cube that is 2m away in direction of gaze but then get that position
      // relative to the object that we are attached to (which is world anchor'd across
      // our devices).
      var newCubePosition = this.transform.InverseTransformPoint(
        GazeManager.Instance.GazeOrigin + GazeManager.Instance.GazeNormal * 2.0f);

      // Use the spawn manager to spawn a 'SyncSpawnedObject' at that position with
      // some random rotation, parent it off our gameObject, give it a base name (MyCube)
      // and do not claim ownership of it so it stays behind in the scene even if our
      // device leaves the session.
      this.spawnManager.Spawn(
        new SyncSpawnedObject(),
        newCubePosition,
        Random.rotation,
        this.gameObject,
        "MyCube",
        false);
    }
  }
  GestureRecognizer recognizer;
}

and so rather than using the regular Instantiate method here to create an instance of a prefab, we call into the PrefabSpawnManager and ask it to do the work for us and it takes on the heavy lifting to do that in a way that is synchronized across the network. In order to make use of the GazeManager, I added in the InputManager prefab as it brings with it all the necessary dependencies;


I added a PrefabSpawnManager to my Placeholder;


and the essential configuration here tells the script that when asked to create a SyncSpawnedObject it should use the test cube prefab set up in the Prefab property and so we have a mapping that says “SyncSpawnedObject” <-> “SpawnTestCube”. That test cube prefab itself comes from the toolkit so I just borrowed it rather than making my own cube.

With that in place, I can configure it as the Spawn Manager property of my Tapped Handler;


and that’s pretty much all that’s needed to enable the scenario where cubes created by one device will show up in the ‘same position’ on the other devices that are part of the sharing session.

Step 5 – Remote Head Tracking

When multiple users are in a shared environment, it’s useful to be able to see the position and orientation of their heads (and sometimes also their gaze). There’s a script in the HoloToolkit which provides a starting point for this called Remote Head Manager and I added it to my SynchronizedParent object as below. It makes use of a custom message type containing the head position and rotation and so there’s a need to also bring in the Custom Messages script which goes with it;


The script here is hard-wired to automatically create a cube to represent the user’s head position. That could easily be changed but I mostly left it alone, although I did make a minor change to move the cube vertically upwards by 30cm so that the cube wasn’t covering the user’s face.
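That change amounts to offsetting the created head object upwards, something like the below (a sketch – the exact place to apply this inside the Remote Head Manager’s head-creation code will depend on the toolkit version that you have);

```csharp
using UnityEngine;

// Sketch: raise a remote user's head proxy so it sits above the face.
public static class HeadOffsetExample
{
  public static void ApplyOffset(GameObject headObject)
  {
    // Move the cube up by 30cm relative to the tracked head position.
    headObject.transform.localPosition += Vector3.up * 0.3f;
  }
}
```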


Step 6 – Trying Things Out on Multiple Devices

I happened to be in the same place as my colleague Pete and so we tried this code out to see how it came together and it works out quite nicely;

There’s a lot more to the sharing support in the HoloToolkit but, for now, this was a good experiment to add to my previous post as there’s almost no code for me to write here and the functionality is increased. I’ll come back to this in follow on posts.

Hitchhiking the HoloToolkit-Unity, Leg 10–Baby Steps with Sharing Holographic Experiences



This post builds directly on these two posts;

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library 

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library (Part 2)

in that I am going to make use of the socket library that I built in that first post and re-packaged a little in the second post in order to attempt to share a simple holographic experience across devices.

It also more than likely builds on a number of these posts in that I’ll be leaning on some other things that I have previously written about.

Before I get going, I’ll flag that there’s an official lab on this topic;

Holograms 240

and that there’s a really good write up here around the whole concept;

Shared holographic experiences

and some more specifics here around how this works in Unity;

Shared holographic experiences in Unity

and so I hope that has you covered for official documentation links. What I’ll follow with below are just my own experiments in learning how this works, without me just picking up someone else’s library and running with it, which is largely what I’ve done to date when it comes to ‘shared holographic experiences’.

I chose the title for the blog post reasonably carefully in that I called the post ‘Shared Holographic Experiences’ rather than ‘Shared Holograms’.

It’s perhaps a subtle distinction but to me ‘Shared Holographic Experiences’ implies that it’s the data underpinning the experience that is being shared across devices rather than some literal copies of the holograms themselves. This allows for the scenarios outlined in ‘shared holographic experiences’ under the titles of presentation, collaboration and guidance, including scenarios where one user may have different visibility of (or interactions with) holograms than another, as you might find in a teacher/student scenario.

For this post, I’m going to think about a simple shared holographic experience between two HoloLens devices and I’m going to attempt to give the users of those two devices the same visual representation of the scene but, naturally, each user is likely to be standing in a different place in the scene and interacting in a different way to the other user.

In order to do that, I’m going to explore an approach along the lines of;

  1. Connect two HoloLens devices that are present on the same network.
  2. Provide a mechanism via which a user can create some basic content (a cube, a sphere or some other model).
  3. Ask the device where the content is created to give me some token (a spatial anchor) that contains the details of the location of the content in space.
  4. Send that token over the network to the second device.
  5. Attempt to redeem that token on that second device in order to come to a common understanding across the two devices of the content’s location.
  6. Create the same content on the second device in the same position so as to replicate the experience from the first device.

As a stretch goal, I’d also like to do this in a relatively efficient manner and I’d like the user to be able to manipulate (e.g. move or maybe rotate) the content they have created.

Clearly, the magic here is in being able to do steps (3) and (4) – that is, for a hologram, being able to ask the device to give us some “token” that captures positional information about that hologram which can be understood by another device.

This is handled for us by spatial anchors which I experimented with in this post from a different point of view – that of being able to persist the positions of holograms across instances of an app. That’s quite some feat and, naturally, even more of a leap is to have those anchors be capable of export from one device and import to another but it’s a capability of the device and the SDK.

Speaking of which, the APIs involved in this are detailed on that doc page that I referenced earlier and the process looks to run something like;

  1. Ensure that the app has the spatial perception capability.
  2. Ensure that the hologram in question is anchored.
  3. Use the WorldAnchorTransferBatch.ExportAsync method to export the anchor – this produces a byte[] array, delivered in segments to an ‘on progress’ handler before an ‘on completed’ handler is called.
  4. Use the WorldAnchorTransferBatch.ImportAsync method to import the anchor on the secondary device.
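The import side (step 4) can be sketched like this – again simplified, with the byte[] assumed to have already arrived over the network and the anchor id (“myAnchorId”) being purely illustrative;

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Sharing;

public class AnchorImporter : MonoBehaviour
{
  public GameObject objectToAnchor;

  public void OnAnchorDataReceived(byte[] anchorData)
  {
    // Import the serialized anchor on the second device.
    WorldAnchorTransferBatch.ImportAsync(anchorData, OnImportComplete);
  }
  void OnImportComplete(SerializationCompletionReason reason, WorldAnchorTransferBatch batch)
  {
    if (reason == SerializationCompletionReason.Succeeded)
    {
      // Lock our local object to the imported anchor; the id must match the
      // one that the exporting device used when it added the anchor.
      batch.LockObject("myAnchorId", this.objectToAnchor);
    }
    // Import can fail (e.g. not enough environment data) so a real app
    // would retry or fall back gracefully here.
  }
}
```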

Naturally, there are going to be scenarios where the first device might not have enough information to do the export and that’s also going to be true for the importing device and so handling failures gracefully would be important in a real world scenario.

With all of that said, it’s time to get going and build up something simple that shares an experience across two HoloLens devices.

Note that I’m using a PC as my ‘second device’ here as I don’t have two HoloLens devices sitting side by side at the time of writing. I’m fairly confident that what I’m doing here will work across two HoloLens devices (I’ve tried similar code in that set up) and I’ll try it again on two devices and update the post with the results when I have the opportunity.

I broke this process down into the following steps…

Steps 1 – 4

The initial steps that I mapped out;

  • Step 1 – Making A Blank App with Settings and Capabilities

  • Step 2 – Importing the HoloToolkit-Unity

  • Step 3 – Adding the Networking Pieces from the Previous Blog Post

  • Step 4 – Establishing a networking connection

all turned out to be pretty much identical to what I did in the video from my previous blog post and so I’m embedding that video again below rather than repeating it;

and I’ll continue from where that video left off below.

Step 4 – Create a Cube when the User Taps

With those pieces set up, my next step is to create some object (e.g. a cube) when the user does an air-tap and I work through that in the screen capture below;

Step 5 – Adding and Exporting an Anchor

Once I have an object in place, I want to make sure that it is anchored into the world as per this article and also as per my own blog post here. Once anchored, I want to export the data behind the anchor from one of the two networked devices to the other one and I work through that in the screen capture here;

Step 6 – Importing the Anchor, Creating and Locking a Cube

Once I’ve got an anchor exported from my app running on one device, I need to make sure that the same app running on the second device knows how to import that anchor and recreate the object it represents. Let’s work through that;

Step 7 – Anchoring Every Cube?

At this point, it’s becoming apparent that exporting and importing an anchor takes a little bit of time. In my experiments here, anchors seem to serialize to around 1MB of data but I’ve seen them serialize to over 10MB and, even in my case here, it’s clear that exporting the anchor from the system (before any network transfer) can take a while.

Do I really need to create an anchor for every cube that I create or can I be more efficient and create a few anchors and then position subsequent cubes relative to those anchors? The docs suggest that a spatial anchor can be used to anchor content within about 3 metres of it and so perhaps I can be more efficient here and only anchor the first cube (for demo purposes) and hang any subsequent cubes off that.

With that in mind, I want to change my code such that;

  1. The first cube created and exported/imported becomes an anchor for both devices.
  2. All subsequent cubes will not have world anchors applied to them. Instead, their relative position to the main anchor will be calculated, shipped over the wire and used to recreate that cube on the other side of the wire.

I work through those additions in the screen capture below;
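The relative-position calculation in step (2) above can be sketched with Unity’s transform helpers – ‘anchorCube’ here is whatever object carries the world anchor;

```csharp
using UnityEngine;

public static class RelativePositionExample
{
  // Sending side: express the new cube's position relative to the anchor
  // cube, which is the value that gets shipped over the wire.
  public static Vector3 GetRelativePosition(Transform anchorCube, Transform newCube)
  {
    return anchorCube.InverseTransformPoint(newCube.position);
  }

  // Receiving side: turn the received relative position back into a world
  // position using the local copy of that same anchor.
  public static Vector3 GetWorldPosition(Transform anchorCube, Vector3 relativePosition)
  {
    return anchorCube.TransformPoint(relativePosition);
  }
}
```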

Step 8 – Rotating the Cube

Finally, I wanted to include some manipulation of the cubes and so I decided that I’d add some manipulations to everything but the anchor cubes (i.e. all cubes which are positioned relative to the anchor cube) so that they can be rotated. I work through those changes in the screen capture below;

Wrapping Up

This post got long but I feel that I learned quite a lot about the process of anchoring holograms, exporting those anchors and then bringing them in to another device.

Next steps for me would be to try this on two HoloLens devices at the same time to see how things work there – I’ll do that and will update the post once that’s done and share the code.

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library (Part 2)


Following on from my previous blog-post;

Windows 10, UWP, HoloLens & A Simple Two-Way Socket Library

I took some of the pieces and re-configured them slightly such that they were easier for me to build upon, and I added a Packages folder to the GitHub repository with some pieces packaged up which make it easier for me to;

  1. Start a blank project
  2. Add two pieces of UI for the case where the device has connected to a remote socket and the case where it hasn’t.

and I tested that out in this little screen capture here so that I can build upon it and refer back to it in the future;

I’m going to try and then build on this towards ‘shared holographic experiences’ in a follow-on post…