Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 7)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

A follow-up to my previous post around experiments with shared holograms using Azure blob storage and UDP multicasting techniques.

At the end of the previous post, I said that I might return and make a slightly better ‘test scene’ for the Unity project; this post is my write-up of my attempt to do that.

What’s in the New Test Scene?

I found a model of a house on Remix3D.com and made the test scene about visualising that model in a consistent place across multiple devices, with the ability to rotate, scale and move it such that those devices keep a consistent view.

What I built is pretty simple and the essential steps involved in the scene are;

  • The app runs and waits for the underlying library to tell it whether there are already other devices on the same network. While it waits for that notification (for up to 5 seconds), it displays a ‘waiting screen’.


  • If the app determines that no other devices are on the network then it pops up a model of a house gaze-locked to the device so that the user can move it around and say ‘done’ to place it.


  • Once positioned, the app replaces the displayed model by using the APIs detailed in the previous posts to create a shared hologram which is exactly the same house in exactly the same position. At this point, its creation will be multicast around the network and the blob representing its world anchor will be uploaded to Azure.
  • If the app determines that there are other devices on the network at start-up time then it will inform the user of this;


  • and it will stop the user from positioning the model while it waits to bring down the position data (the world anchor) from Azure. The same thing should happen in the race condition where multiple users start the app at the same time and one of them becomes the first to actually position the model.


  • Once the model has been positioned on the local device (in whichever way), the app enters a mode where the voice commands ‘rotate’, ‘scale’ and ‘move’ can be used to manipulate it;


  • those transformations are then multicast to other devices on the network such that they all display the same model of a house in the same place.

and that’s pretty much it 🙂

How’s the Test Scene Structured?

I already had a test scene within the Unity project that I’d published to github and so I just altered it rather than starting from scratch.

It’s very simple – the scene starts with the main camera parenting both a text object (to give a very poor heads-up display) and the model of the house (to give a very poor gaze-locked positioning system).


There is then one object called ScriptHolder which has an instance of the Shared Hologram Controller component (and its dependency) that I discussed in the previous posts.


I’ve omitted the details of my own Azure configuration, so that would need to be filled in to specify the storage details, and I’ve also told the script to synchronise transforms at a fairly high frequency which, realistically, I could probably drop down a little.

Beyond that, I also have a script here called Main Script which contains the logic for the scene, and the positive part is that there’s not too much of it;

using SharedHolograms;
using System;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class MainScript : MonoBehaviour, ICreateGameObjects
{
    // Text to display output messages on
    public TextMesh StatusDisplayTextMesh;

    // GameObject to use as a marker to position the model (i.e. the house)
    public GameObject PositionalModel;

    // Implementation of ICreateGameObject - because we are not creating a Unity primitive
    // I've implemented this here and 'plugged it in' but our creation is very simple in
    // that we duplicate the object that we're using as the PositionalModel (i.e. the
    // house in my version).
    public void CreateGameObject(string gameObjectSpecifier, Action<GameObject> callback)
    {
        // Right now, we know how to create one type of thing and we do it in the most
        // obvious way but we could do it any which way we like and even get some other
        // componentry to do it for us.
        if (gameObjectSpecifier == "house")
        {
            var gameObject = GameObject.Instantiate(this.PositionalModel);
            gameObject.SetActive(true);
            callback(gameObject);
        }
        else
        {
            // Sorry, only know about "house" right now.
            callback(null);
        }
    }
    void Start()
    {
        // Set up our keyword handling. Originally, I imagined more than one keyword but
        // we ended up just with "Done" here.
        var keywords = new[]
        {
            new { Keyword = "done", Handler = (Action)this.OnDoneKeyword }
        };
        this.keywordRecognizer = new KeywordRecognizer(keywords.Select(k => k.Keyword).ToArray());

        this.keywordRecognizer.OnPhraseRecognized += (e) =>
        {
            var understood = false;

            if ((e.confidence == ConfidenceLevel.High) ||
                (e.confidence == ConfidenceLevel.Medium))
            {
                var handler = keywords.FirstOrDefault(k => k.Keyword == e.text.ToLower());

                if (handler != null)
                {
                    handler.Handler();
                    understood = true;
                }
            }
            if (!understood)
            {
                this.SetStatusDisplayText("I might have missed what you said...");
            }
        };
        // We need to know when various things happen with the shared holograms controller.
        SharedHologramsController.Instance.SceneReady += OnSceneReady;
        SharedHologramsController.Instance.Creator.BusyStatusChanged += OnBusyStatusChanged;
        SharedHologramsController.Instance.Creator.HologramCreatedRemotely += OnRemoteHologramCreated;
        SharedHologramsController.Instance.Creator.GameObjectCreator = this;

        // Wait to see whether we should make the positional model active or not.
        this.PositionalModel.SetActive(false);
        this.SetStatusDisplayText("waiting...");
    }
    void OnDoneKeyword()
    {
        if (!this.busy)
        {
            this.keywordRecognizer.Stop();

            this.SetStatusDisplayText("working, please wait...");

            if (this.PositionalModel.activeInHierarchy)
            {
                // Get rid of the placeholder.
                this.PositionalModel.SetActive(false);

                // Create the shared hologram in the same place as the placeholder.
                SharedHologramsController.Instance.Creator.Create(
                    "house",
                    this.PositionalModel.transform.position,
                    this.PositionalModel.transform.forward,
                    Vector3.one,
                    gameObject =>
                    {
                        this.SetStatusDisplayText("object created and shared");
                        this.houseGameObject = gameObject;
                        this.AddManipulations();
                    }
                );
            }
        }
    }
    void OnBusyStatusChanged(object sender, BusyStatusChangedEventArgs e)
    {
        this.busy = e.Busy;

        if (e.Busy)
        {
            this.SetStatusDisplayText("working, please wait...");
        }
    }
    void OnSceneReady(object sender, SceneReadyEventArgs e)
    {
        // Are there other devices around or are we starting alone?
        if (e.Status == SceneReadyStatus.OtherDevicesInScene)
        {
            this.SetStatusDisplayText("detected other devices, requesting sync...");
        }
        else
        {
            this.SetStatusDisplayText("detected no other devices...");

            // We need this user to position the model so switch it on
            this.PositionalModel.SetActive(true);
            this.SetStatusDisplayText("walk to position the house then say 'done'");

            // Wait for the 'done' keyword.
            this.keywordRecognizer.Start();
        }
    }
    void OnRemoteHologramCreated(object sender, HologramEventArgs e)
    {
        // Someone has beaten this user to positioning the model
        // turn off the model.
        this.PositionalModel.SetActive(false);

        this.SetStatusDisplayText("sync'd...");

        // Stop waiting for the 'done' keyword (if we are)
        this.keywordRecognizer.Stop();

        this.houseGameObject = GameObject.Find(e.ObjectId.ToString());

        // Make sure we can manipulate what the other user has placed.
        this.AddManipulations();
    }
    void AddManipulations()
    {
        this.SetStatusDisplayText("say 'move', 'rotate' or 'scale'");

        // The Manipulations script contains a keyword recognizer for 'move', 'rotate', 'scale'
        // and some basic logic to wire those to hand manipulations
        this.houseGameObject.AddComponent<Manipulations>();
    }
    void SetStatusDisplayText(string text)
    {
        if (this.StatusDisplayTextMesh != null)
        {
            this.StatusDisplayTextMesh.text = text;
        }
    }
    KeywordRecognizer keywordRecognizer;
    GameObject houseGameObject;
    bool busy;
}

If someone (anyone! please! please! 😉) had been following the previous set of blog posts closely then they might have noticed that, in order to write that code, I had to change my existing code to at least;

  • Fire an event when the device joins the network such that code can be notified of whether the messaging layer has seen other devices on the network or not.
  • Fire events when other devices on the network create/delete holograms causing them to be imported and created by the local device.
  • Fire an event as/when the underlying code is ‘busy’ doing some downloading or uploading or similar.

Having tried to implement this scene, it was immediately obvious that these pieces were needed, but it hadn’t been obvious enough beforehand for me to implement them, so that was a useful output of writing the test scene.

The other thing that’s used in the scene is a MonoBehaviour named Manipulations. This is a version of a script that I’ve used in a few places in the past and it’s a very cheap and cheerful way to provide rotate/scale/move behaviour on a focused object in response to voice commands and hand manipulations.

I placed this script and the other script that is specific to the test scene in the ‘Scene Specific’ folder,


and the Manipulations script has a dependency on the 3 materials in the Resources folder that it uses for drawing different coloured boxes around an object while it is being rotated/scaled/moved;


and that’s pretty much it.

One thing that I’d note is that when I’d used this Manipulations script before it was always in projects that were making use of the Mixed Reality Toolkit for Unity and, consequently, I had written the code to depend on some items of the toolkit – specifically the IManipulationHandler and IInputClickHandler interfaces.

I don’t currently use the toolkit in this test project and it felt like massive overkill to add it just to enable this one script, so I reworked the script to remove its dependency on the toolkit. I was pleased to find that this was only a small piece of work – the toolkit had mostly done a little wrapping of the raw Unity APIs and so it wasn’t difficult to unpick that dependency here.
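To give an idea of the shape of that reworked script, here’s a cut-down, freshly written sketch (it’s not the actual Manipulations script from the repo) which drives a ‘move’/‘rotate’/‘scale’ mode from voice keywords and then applies hand manipulation deltas via Unity’s raw GestureRecognizer rather than the toolkit’s IManipulationHandler – note that the exact gesture event signatures vary a little between Unity versions;

using UnityEngine;
using UnityEngine.Windows.Speech;
using UnityEngine.XR.WSA.Input;

// Illustrative stand-in for the real Manipulations script - voice picks a
// mode, hand manipulation deltas then drive that mode on this game object.
public class SimpleManipulations : MonoBehaviour
{
    enum Mode { None, Move, Rotate, Scale }

    Mode mode;
    Vector3 lastCumulativeDelta;
    KeywordRecognizer keywordRecognizer;
    GestureRecognizer gestureRecognizer;

    void Start()
    {
        var words = new[] { "move", "rotate", "scale" };

        this.keywordRecognizer = new KeywordRecognizer(words);
        this.keywordRecognizer.OnPhraseRecognized += e =>
        {
            // Switch manipulation mode based on the recognised word.
            this.mode = (Mode)(System.Array.IndexOf(words, e.text.ToLower()) + 1);
        };
        this.keywordRecognizer.Start();

        this.gestureRecognizer = new GestureRecognizer();
        this.gestureRecognizer.SetRecognizableGestures(GestureSettings.ManipulationTranslate);
        this.gestureRecognizer.ManipulationStartedEvent += this.OnManipulationStarted;
        this.gestureRecognizer.ManipulationUpdatedEvent += this.OnManipulationUpdated;
        this.gestureRecognizer.StartCapturingGestures();
    }
    void OnManipulationStarted(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        this.lastCumulativeDelta = Vector3.zero;
    }
    void OnManipulationUpdated(InteractionSourceKind source, Vector3 cumulativeDelta, Ray headRay)
    {
        // Work out how far the hand has moved since the last update and
        // feed that into whichever mode the user has asked for by voice.
        var delta = cumulativeDelta - this.lastCumulativeDelta;
        this.lastCumulativeDelta = cumulativeDelta;

        switch (this.mode)
        {
            case Mode.Move:
                this.transform.position += delta;
                break;
            case Mode.Rotate:
                this.transform.Rotate(Vector3.up, delta.x * 360.0f);
                break;
            case Mode.Scale:
                this.transform.localScale *= (1.0f + delta.y);
                break;
        }
    }
    void OnDestroy()
    {
        this.keywordRecognizer.Stop();
        this.gestureRecognizer.StopCapturingGestures();
    }
}

In the scene itself, the real script is added to the house with AddComponent<Manipulations>() once the shared hologram exists, as in the MainScript above.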

Wrapping Up

I don’t intend to write any more posts in this mini-series around using Azure blob storage and UDP multicasting to enable shared holograms – I think I’ve perhaps gone far enough 🙂

The code is all up on github should anyone want to explore it, try it, or take some pieces for their own use.

I’m always open to feedback, so feel free to drop me a line. Be aware that I’ve only tested this code in a limited way – I wrote it all on a single HoloLens device using the (supplied) test programs to simulate responses from a second device – but I’m ‘reasonably’ happy that it’s doing sensible things.

Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 6)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

A follow-up to my previous post around experiments with shared holograms using Azure blob storage and UDP multicasting techniques.

I doubt that anyone’s following along in great detail 🙂 but at the end of “Part 2” in this little series of posts I ended with a bit of a “to do” list for my experiments, which was as below;

  • Create objects other than primitives – I added something around this in Part 3.
  • Transform objects after they are created – I added something around this in Part 4 and Part 5.
  • Have some ‘memory’ of messages that a client has missed such that not all clients have to join a scene at the same time.

I wanted to return and make some notes on that last point around a ‘memory’.

Prior to this post, I’ve set up some scripts and a library such that code based on my scripts running on one HoloLens device can be used to dynamically instantiate holograms in various places around the physical world and the scripts make it relatively easy to;

  • Create a shared hologram
    • A simple Create() API which takes the type of the hologram along with its position and scale (there’s a usage sketch just after this list) and which…
      • Creates the hologram
      • Automatically parents it to a world-anchored object such that no hologram is more than 3m from its world anchor, dynamically creating and anchoring the parent if necessary.
      • Exports the details of any newly created anchor to Azure blob storage.
      • Multicasts a message around the network to let other devices respond and create their own replica of the hologram using the world anchor downloaded from Azure etc.
      • Optionally attaches a ‘behaviour’ which will multicast changes to the local position, rotation, scale of the hologram around the network on some frequency so that changes made to those values will reflect across all the devices.
  • Delete a shared hologram
    • A simple Delete() API which…
      • Removes the object from the scene
      • Multicasts a message around the network to let other devices remove the object locally.

and that all seems to work reasonably well.
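To make that a bit more concrete, the call below is the same shape as the one in the new test scene’s MainScript – a specifier, a position, a forward vector, a scale and a callback that fires once the hologram has been created and shared (I’ve not shown the Delete() side here as I’d only be guessing at its exact signature);

using SharedHolograms;
using UnityEngine;

public class CreateExample : MonoBehaviour
{
    void CreateSharedHouse()
    {
        // Create a shared 'house' hologram 2m in front of the world origin.
        // The library anchors it, exports the anchor blob to Azure and
        // multicasts the creation to other devices on the network.
        SharedHologramsController.Instance.Creator.Create(
            "house",
            new Vector3(0, 0, 2),
            Vector3.forward,
            Vector3.one,
            createdObject =>
            {
                // The returned GameObject can now be manipulated locally and
                // (optionally) have its transform changes synchronised.
            });
    }
}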

However, there’s a lack of ‘memory’ in the sense that if an app based on this code runs on one device and takes actions such as creating, transforming and deleting holograms before the app runs on a second device, then there’s no mechanism via which that second device can join the scene later and catch up with what’s been happening on the first device.

There’s no way to sync beyond having all the apps running at the same time which isn’t very realistic.

I wanted to try and address this – there are no doubt lots of different ways of doing it but I considered;

  • Adding some centralised state such that some blob/table in the cloud records the current state of play and any device can update/query it at any time
  • Adding some centralised state such that one ‘master’ device maintains a list that can be queried by other devices
  • Making minimal changes such that the de-centralised state already present on each device can be used to reconstruct the scene on a newly-arrived device

I went with the 3rd option as it felt like a relatively small change to what I already had in my code.

With that in mind, I didn’t make any changes to my MessagingLibrary project but I added new messages into the Unity project.


The essential scheme is something along the lines of the following (the message shapes are sketched a little further below);

  • When a device first starts up, it creates a GUID to identify itself and multicasts a NewDeviceAnnouncementMessage.
  • Other devices respond to that message by replying with an ExistingDeviceMessage which contains their own ID.
  • A new device that receives such responses within the first few seconds of start-up can choose one of the replies and construct a SceneRequestMessage and multicast it (it contains both the destination device ID and the source device ID)
  • The device that receives the SceneRequestMessage multicasts back a sequence of SceneResponseObjectMessage messages, one for each shared hologram in the scene. These messages also contain the intended recipient device ID so that other devices can ignore them.

The SceneResponseObjectMessage is essentially the same as the initial CreatedObjectMessage which is multicast when the Create() API creates a shared hologram, and so handling these messages doesn’t require lots of new code – it’s the same code that would have handled the creation messages if the receiving app had been alive at the time that the holograms were created.
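For illustration only, the payloads don’t need to carry much – something like the classes below would do it, although the field names here are mine rather than being lifted from the project’s actual message classes;

using System;

// Illustrative shapes only - the real message classes live in the Unity
// project and may name and serialise things differently.
class NewDeviceAnnouncementMessage
{
    // Multicast by a device when it first starts up.
    public Guid DeviceId;
}
class ExistingDeviceMessage
{
    // Multicast in reply by each device that is already on the network.
    public Guid DeviceId;
}
class SceneRequestMessage
{
    // Multicast by the new device to ask one existing device for the scene.
    public Guid SourceDeviceId;
    public Guid DestinationDeviceId;
}
class SceneResponseObjectMessage
{
    // Multicast once per shared hologram by the device answering the
    // request - other than the intended recipient, it carries the same
    // data as the CreatedObjectMessage sent when a hologram is created.
    public Guid DestinationDeviceId;
    // ...object id, specifier, transform, world anchor blob details, etc.
}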

The changes to send/receive/process these messages then become relatively minor and the code’s up to date on github.

I also updated the console-based test application that I’ve been using to test out the code when running with only one HoloLens, although I must admit that the code in that application is perhaps only really usable by me – it would need some detailed explanation for someone else to pick it up and figure out what I had in mind, but it has helped a lot along the way.

I’m not planning to add more code to this series of posts. The only addition that I’d like to make (beyond testing properly on multiple devices 🙂) is a better test scene.

The one that I have in the Unity project really is only there for me to test out my code; I’d like to replace it with one that someone coming new to this code could easily run, understand and use to get a basic shared hologram app up and running on multiple devices in a short time. If I get a chance to look into this then I’ll add one more post to this series when I’ve got that new test scene put in place…

Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 5)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

A quick follow-up to my previous post where I’d done a little testing around changing the rotation of a shared hologram.

I modified the script in the solution named Rotate.cs to become a script named MoveScaleRotateTest.cs and changed the behaviour so as to do some scaling and translation alongside the rotation.
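As a rough idea of what that behaviour does (this is a freshly written sketch rather than the actual MoveScaleRotateTest.cs from the repo), attaching something like the script below to a shared hologram slowly rotates, scales and translates it every frame and leaves the library’s transform synchronisation to multicast those changes;

using UnityEngine;

// Illustrative only - a stand-in for MoveScaleRotateTest.cs that
// continuously rotates, scales and translates whatever it is attached to.
public class MoveScaleRotateSketch : MonoBehaviour
{
    public float DegreesPerSecond = 20.0f;
    public float MetresPerSecond = 0.05f;
    public float ScalePerSecond = 0.02f;

    void Update()
    {
        // Slow rotation about the vertical axis...
        this.transform.Rotate(Vector3.up, this.DegreesPerSecond * Time.deltaTime);

        // ...a gentle drift along the X axis...
        this.transform.localPosition += Vector3.right * this.MetresPerSecond * Time.deltaTime;

        // ...and a gradual, uniform scale-up.
        this.transform.localScale *= (1.0f + this.ScalePerSecond * Time.deltaTime);
    }
}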

I then adopted the same ‘strategy’ of trying to test this out using one HoloLens plus my console-based recording application (there’s a rough sketch of that kind of record/replay loop after the steps below);

  • Create some shared holograms on the HoloLens which sends an ‘object created’ message to the console application and a world anchor blob to the cloud.
  • Tap on a shared hologram which starts it slowly rotating, scaling and translating and sending ‘object transformed’ messages to the console application.
  • Tap again on a shared hologram which sends an ‘object deleted’ message to the console application.
  • Restart the HoloLens application.
  • Replay all the recorded messages from the console application to see if the HoloLens application behaves correctly when it receives them – creating the hologram in the right place, transforming it in the right way and ultimately deleting it again.

and things seemed to work out reasonably well 🙂 so I pushed those changes back to github.
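The console application in the repo does rather more than this (and, as I say, it’s really only usable by me), but the basic record/replay idea is simple – something along the lines of the sketch below, where the multicast group and port are placeholders rather than the values the library actually uses;

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class RecordReplay
{
    // Placeholder group/port - the real values live in the library's config.
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.1");
    const int Port = 49152;

    static void Main()
    {
        var recorded = new ConcurrentQueue<byte[]>();
        var receiver = new UdpClient(Port);
        receiver.JoinMulticastGroup(Group);

        // Record everything multicast to the group on a background task.
        Task.Run(() =>
        {
            var from = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                recorded.Enqueue(receiver.Receive(ref from));
            }
        });

        Console.WriteLine("Recording... press ENTER to replay what was captured.");
        Console.ReadLine();

        // Replay the captured datagrams back onto the same multicast group
        // (e.g. after restarting the app on the HoloLens).
        var snapshot = recorded.ToArray();
        var groupEndpoint = new IPEndPoint(Group, Port);

        using (var sender = new UdpClient())
        {
            foreach (var datagram in snapshot)
            {
                sender.Send(datagram, datagram.Length, groupEndpoint);
            }
        }
        Console.WriteLine($"Replayed {snapshot.Length} messages.");
    }
}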

With that, I’m reasonably happy that this seems to work although I’d like to test on multiple devices.

However, there’s still a big item missing from the ‘TODO’ list that I had at the end of this post and that’s the notion of having some kind of ‘memory’ of the shared scene that’s being built up.

So far, the code that I’ve written can cope with a scenario where N devices are on a network and holograms are created, deleted and manipulated (rotate, scale, translate) while all the devices are alive and receiving multicast messages.

What it doesn’t handle at all is a scenario where one device comes along, sets up a ‘scene’ with some shared holograms, world anchors, transformations etc. and then another device joins into that scene at a later point in time having missed all the detailed messages that have been multicast while it wasn’t “present” on the network.

How does a joining device get an up-to-date picture of the shared holograms that other devices have already created? Right now, that’s a “TBD” for this particular set of blog posts but I’ll come back to it if I can find some time to put something in place…