Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 6)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.

A follow-up to my previous post around experiments with shared holograms using Azure blob storage and UDP multicasting techniques.

I doubt that anyone’s following along in great detail, but I ended “Part 2” in this little series of posts with a bit of a “to do” list for my experiments, which was as below;

  • Create objects other than primitives – I added something around this in Part 3.
  • Transform objects after they are created – I added something around this in Part 4 and Part 5.
  • Have some ‘memory’ of messages that a client has missed such that not all clients have to join a scene at the same time.

I wanted to return and make some notes on that last point around a ‘memory’.

Prior to this post, I’d set up some scripts and a library such that code running on one HoloLens device can dynamically instantiate holograms in various places around the physical world, and the scripts make it relatively easy to;

  • Create a shared hologram
    • A simple Create() API takes the type of the hologram along with its position and scale, and then…
      • Creates the hologram
      • Automatically parents it to a world-anchored object such that no hologram is more than 3m from its world anchor, dynamically creating and anchoring the parent if necessary.
      • Exports the details of any newly created anchor to Azure blob storage.
      • Multicasts a message around the network to let other devices respond and create their own replica of the hologram using the world anchor downloaded from Azure etc.
      • Optionally attaches a ‘behaviour’ which will multicast changes to the local position, rotation, scale of the hologram around the network on some frequency so that changes made to those values will reflect across all the devices.
  • Delete a shared hologram
    • A simple Delete() API which…
      • Removes the object from the scene
      • Multicasts a message around the network to let other devices remove the object locally.

and that all seems to work reasonably well.
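As a rough illustration of how calling code might drive this, here’s a hypothetical sketch; Create() and Delete() are named above, but the exact signatures, the singleton Instance access and the stubbed controller are my assumptions rather than the library’s actual surface;

```csharp
using UnityEngine;

// Stub of the assumed API shape so this sketch stands alone - the real
// SharedHologramsController lives in the Unity project on GitHub and its
// actual signatures may well differ.
public class SharedHologramsController : MonoBehaviour
{
    public static SharedHologramsController Instance { get; set; }

    public GameObject Create(string type, Vector3 position, Vector3 scale,
        bool synchronizeTransform = false) => null;

    public void Delete(GameObject sharedObject) { }
}

public class UsageSketch : MonoBehaviour
{
    GameObject sharedCube;

    void CreateSharedCube()
    {
        // The library parents the new hologram to a world anchor within 3m
        // (creating one if needed), uploads any new anchor to Azure blob
        // storage and multicasts a 'created' message to other devices.
        sharedCube = SharedHologramsController.Instance.Create(
            "Cube",
            transform.position + transform.forward * 2.0f,  // 2m ahead
            Vector3.one * 0.2f,                             // 20cm cube
            synchronizeTransform: true);                    // multicast transform changes
    }

    void DeleteSharedCube()
    {
        // Removes the object locally and multicasts a 'deleted' message so
        // that other devices remove their replicas too.
        SharedHologramsController.Instance.Delete(sharedCube);
    }
}
```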

However, there’s a lack of ‘memory’ in the sense that if an app based on this code were to run on one device and take actions such as creating, transforming and deleting holograms before the app runs on a second device, then there’s no mechanism via which that second device can join the scene and catch up with what’s been happening on the first device.

There’s no way to sync beyond having all the apps running at the same time, which isn’t very realistic.

I wanted to try and address this – there are no doubt lots of different ways of doing it, but I considered;

  • Adding some centralised state such that some blob/table in the cloud records the current state of play and any device can update/query it at any time
  • Adding some centralised state such that one ‘master’ device maintains a list that can be queried by other devices
  • Making minimal changes such that the de-centralised state already present on each device can be used to reconstruct the scene on a newly-arrived device

I went with the 3rd option as it felt like a relatively small change to what I already had in my code.

With that in mind, I didn’t make any changes to my MessagingLibrary project but I added new messages into the Unity project;

[image: the new message classes in the Unity project]

With the essential scheme being something along the lines of;

  • When a device first starts up, it creates a GUID to identify itself and multicasts a NewDeviceAnnouncementMessage
  • Other devices respond to those messages by replying with an ExistingDeviceMessage which contains their own ID
  • A new device that receives such responses within the first few seconds of start-up can choose one of the replies and construct and multicast a SceneRequestMessage (which contains both the destination device ID and the source device ID)
  • The device that receives the SceneRequestMessage multicasts back a sequence of SceneResponseObjectMessage messages, one for each shared hologram in the scene. These messages also contain the intended recipient device ID so that other devices can ignore them.
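To make that scheme concrete, here’s a sketch of roughly how those messages might be shaped; the message names are the real ones from the project but the base classes and the field names are my assumptions;

```csharp
using System;

// Assumed base types - the real ones live in the Unity project and the
// serialisation is handled by the MessagingLibrary.
public abstract class Message { }

public class CreatedObjectMessage : Message
{
    public Guid ObjectId;          // identifies the shared hologram
    public string ObjectType;      // e.g. "Cube"
    // ...position, scale, world anchor ID, etc.
}

// Multicast by a device when it first starts up.
public class NewDeviceAnnouncementMessage : Message
{
    public Guid DeviceId;          // the GUID the new device created for itself
}

// Multicast in reply by devices that are already running.
public class ExistingDeviceMessage : Message
{
    public Guid DeviceId;          // the existing device's own ID
}

// Multicast by the new device to ask one existing device for the scene.
public class SceneRequestMessage : Message
{
    public Guid SourceDeviceId;        // the new device making the request
    public Guid DestinationDeviceId;   // the chosen existing device
}

// One of these is multicast per shared hologram in the scene; devices that
// aren't the intended recipient simply ignore it.
public class SceneResponseObjectMessage : CreatedObjectMessage
{
    public Guid DestinationDeviceId;
}
```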

The SceneResponseObjectMessage is essentially the same as the initial CreatedObjectMessage which is multicast when the Create() API creates a shared hologram, so handling those messages doesn’t require lots of new code – it’s the same code that would have handled the creation messages if the receiving app had been alive at the time that the holograms were created.

The changes to send/receive/process these messages then become relatively minor and the code’s up to date on GitHub.

I also updated the console-based test application that I’ve been using to test out the code when only running with one HoloLens, although I must admit that the code in that application is perhaps only really usable by me – it’d need some detailed explanation for someone else to pick it up and figure out what the heck I had in mind for it, but it has helped a lot along the way.

I’m not planning to add more code into this series of posts. The only addition that I’d like to make (beyond testing properly on multiple devices) is a better test scene.

The one that I have in the Unity project really is only there for me to test out my code. I’d like to replace it with one that someone coming new to this code could easily run, understand and use to get a basic shared hologram app up and running on multiple devices in a short time. If I get a chance to look into this then I’ll add one more post to this series when I’ve got that new test scene put in place…

Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 5)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.

A quick follow-up to my previous post where I’d done a little testing around changing the rotation of a shared hologram.

I modified the script in the solution named Rotate.cs to become a script named MoveScaleRotateTest.cs and changed the behaviour to do some scaling and translation alongside the rotation.
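As a very rough sketch of the sort of thing that script does (the real MoveScaleRotateTest.cs is in the repo; the speeds, the axes and the isActive flag here are invented for illustration);

```csharp
using UnityEngine;

public class MoveScaleRotateTest : MonoBehaviour
{
    public bool isActive;      // toggled by the tap handling elsewhere
    float elapsed;

    void Update()
    {
        if (!isActive)
        {
            return;
        }
        elapsed += Time.deltaTime;

        // Small, slow changes to the *local* transform values so that the
        // synchronisation behaviour spots them and multicasts the changes.
        transform.localRotation *= Quaternion.Euler(0.0f, 30.0f * Time.deltaTime, 0.0f);
        transform.localScale = Vector3.one * (1.0f + 0.25f * Mathf.Sin(elapsed));
        transform.localPosition += new Vector3(Mathf.Cos(elapsed), 0.0f, 0.0f) * 0.05f * Time.deltaTime;
    }
}
```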

I then adopted the same ‘strategy’ of trying to test this out using one HoloLens and my console-based recording application;

  • Create some shared holograms on the HoloLens, which sends an ‘object created’ message to the console application and a world anchor blob to the cloud.
  • Tap on a shared hologram, which starts it slowly rotating, scaling and translating, and sends ‘object transformed’ messages to the console application.
  • Tap again on the hologram, which deletes it and sends an ‘object deleted’ message to the console application.
  • Restart the HoloLens application.
  • Replay all the recorded messages from the console application to see if the HoloLens application behaves correctly when it receives those messages by creating the hologram in the right place, transforming it in the right way and ultimately deleting it again.

and things seem to work out reasonably well, so I pushed those changes back to GitHub.
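For context, the core record and replay idea in that console application can be illustrated with something bare-bones like the sketch below; the real test application does rather more, and the multicast group, port and timings here are arbitrary assumptions;

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class RecordReplaySketch
{
    static void Main()
    {
        var group = IPAddress.Parse("239.0.0.1");     // assumed multicast group
        const int port = 49152;                       // assumed port

        using (var client = new UdpClient(port))
        {
            client.JoinMulticastGroup(group);
            client.Client.ReceiveTimeout = 1000;

            var recorded = new List<(TimeSpan offset, byte[] bytes)>();
            var start = DateTime.UtcNow;
            var sender = new IPEndPoint(IPAddress.Any, 0);

            Console.WriteLine("Recording multicast messages for 60s...");
            while (DateTime.UtcNow - start < TimeSpan.FromSeconds(60))
            {
                try
                {
                    // Keep each datagram alongside its offset from the start
                    // of the recording so that replay can preserve the timing.
                    recorded.Add((DateTime.UtcNow - start, client.Receive(ref sender)));
                }
                catch (SocketException)
                {
                    // Receive timed out - keep waiting for more messages.
                }
            }

            Console.WriteLine($"Recorded {recorded.Count} messages. ENTER to replay...");
            Console.ReadLine();

            var previous = TimeSpan.Zero;
            foreach (var (offset, bytes) in recorded)
            {
                Thread.Sleep(offset - previous);      // preserve original spacing
                previous = offset;
                client.Send(bytes, bytes.Length, new IPEndPoint(group, port));
            }
        }
    }
}
```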

With that, I’m reasonably happy that this seems to work although I’d like to test on multiple devices.

However, there’s still a big item missing from the ‘TODO’ list that I had at the end of “Part 2” and that’s the notion of having some kind of ‘memory’ of the shared scene that’s being built up.

So far, the code that I’ve written can cope with a scenario where N devices are on a network and holograms are created, deleted and manipulated (rotated, scaled, translated) while all the devices are alive and receiving multicast messages.

What it doesn’t handle at all is a scenario where one device comes along, sets up a ‘scene’ with some shared holograms, world anchors, transformations etc. and then another device joins into that scene at a later point in time having missed all the detailed messages that have been multicast while it wasn’t “present” on the network.

How does a joining device get an up-to-date picture of the shared holograms that other devices have already created? Right now, that’s a “TBD” for this particular set of blog posts but I’ll come back to it if I can find some time to put something in place…

Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 4)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.

Following up on my previous post, one of the “to do” items was to allow a hologram to move, rotate or scale and have that change in its transform carried across the network to other devices.

For this post, I’m going to assume that the hologram isn’t going to move so much as to change its parent and need attaching to another world-anchored parent, as that would require more work right now. Instead, I’m thinking of small movements relative to the parent object that are reflected by changes in the local position, rotation and scale properties of a transform.

I added a settable property to my SharedHologramsController to flag whether the code should attempt to synchronise transforms on the shared objects that it has created;

[image: the new synchronisation property on SharedHologramsController]

and an interval at which to attempt synchronisation.

Naturally, a lot of GameObjects being synchronised over the network at a short interval is going to equate to a lot of network messages, so there’s a trade-off to be made there.
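To put some rough numbers on that: synchronising 20 objects on a 0.1 second interval means up to 200 messages per second on the network, while the same 20 objects on a 1 second interval mean at most 20 per second.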

I then added a MonoBehaviour-derived script to poll a GameObject and watch for changes to its transform’s localPosition, localRotation and localScale properties before dispatching them over the network using the TransformMessage message that already existed in the project.

That script is called TransformSynchronizer and it’s fairly simple (and perhaps too simple!).
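The shape of it is something like the sketch below; the real TransformSynchronizer is in the repo, and the interval handling, the change detection and the dispatch call here are simplified stand-ins;

```csharp
using UnityEngine;

public class TransformSynchronizerSketch : MonoBehaviour
{
    public float intervalSeconds = 0.5f;   // how often to poll for changes

    Vector3 lastPosition, lastScale;
    Quaternion lastRotation;

    void Start()
    {
        Snapshot();
        InvokeRepeating(nameof(Poll), intervalSeconds, intervalSeconds);
    }

    void Poll()
    {
        // Only dispatch when something has actually changed since last time.
        if (transform.localPosition != lastPosition ||
            transform.localRotation != lastRotation ||
            transform.localScale != lastScale)
        {
            Snapshot();
            SendTransformMessage();
        }
    }

    void Snapshot()
    {
        lastPosition = transform.localPosition;
        lastRotation = transform.localRotation;
        lastScale = transform.localScale;
    }

    void SendTransformMessage()
    {
        // Placeholder - the real code builds a TransformMessage with the
        // object's ID and local values and hands it to the MessagingLibrary
        // for multicasting.
    }
}
```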

Those are all the changes needed to enable that sort of behaviour, but I’d really like to test it properly with multiple devices as, so far, I’ve only been able to use the approach that I mentioned in an earlier post of having the HoloLens multicast messages to a console application which then sends them back again as a way of simulating a second device.

To start to test this out, I modified the test scene in the Unity project to add a behaviour such that the first tap on a cube starts it slowly rotating whereas the second tap on a cube now deletes it. That let me test changes to the cube’s rotation, and I need to add some more code to test out changes to position and scale, but rotation seems to work “reasonably”.

All of that code is contained in the TestScript in the Unity project.
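Purely as an illustration of that tap logic (the real code is in the TestScript; the way the tapped GameObject arrives here and the minimal Rotate stand-in are assumptions);

```csharp
using System.Collections.Generic;
using UnityEngine;

// Trivial stand-in for the Rotate script mentioned in the post.
public class Rotate : MonoBehaviour
{
    void Update()
    {
        transform.localRotation *= Quaternion.Euler(0.0f, 20.0f * Time.deltaTime, 0.0f);
    }
}

public class TapHandlingSketch : MonoBehaviour
{
    readonly HashSet<GameObject> rotating = new HashSet<GameObject>();

    // Assumed to be called by the gaze/gesture handling with the tapped object.
    public void OnObjectTapped(GameObject tapped)
    {
        if (rotating.Add(tapped))
        {
            // First tap - start the cube slowly rotating so that the
            // transform changes get multicast to the other devices.
            tapped.AddComponent<Rotate>();
        }
        else
        {
            // Second tap - delete the cube. The real code would go via the
            // shared Delete() so other devices remove their replicas too.
            rotating.Remove(tapped);
            Destroy(tapped);
        }
    }
}
```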

I’ll make subsequent updates if I find that changes to local position and local scale don’t behave in a suitable way.