NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here comes purely from my own experience of experimenting with the pieces that are publicly available. You should always check the official developer site for the product documentation.
A quick follow-up to my previous post where I’d done a little testing around changing the rotation of a shared hologram.
I modified the script in the solution named Rotate.cs to become a script named MoveScaleRotateTest.cs and changed the behaviour so that it does some scaling and translation alongside the rotation.
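For context, the shape of that behaviour is a simple Unity MonoBehaviour. The sketch below is just an illustration of the idea rather than the actual MoveScaleRotateTest.cs from the repo; the speed values and the SendTransformMessage call are placeholders for whatever the real script does to publish its 'object transformed' messages:

```csharp
using UnityEngine;

// Sketch only - not the actual MoveScaleRotateTest.cs from the repo.
// Each frame, apply a small rotation, scale and translation to the
// hologram and (in the real code) multicast an 'object transformed'
// message describing the new transform.
public class MoveScaleRotateTest : MonoBehaviour
{
    // Placeholder speeds - tweak to taste.
    public float rotateDegreesPerSecond = 10.0f;
    public float scalePerSecond = 0.01f;
    public float translateMetresPerSecond = 0.01f;

    public bool isActive; // toggled on/off by the tap handler

    void Update()
    {
        if (!this.isActive)
        {
            return;
        }
        var t = Time.deltaTime;

        this.transform.Rotate(0, rotateDegreesPerSecond * t, 0);
        this.transform.localScale += Vector3.one * scalePerSecond * t;
        this.transform.Translate(Vector3.forward * translateMetresPerSecond * t);

        // Hypothetical call - the real code multicasts the hologram's
        // position/rotation/scale so other devices (and the recorder) see it.
        // SendTransformMessage(this.transform);
    }
}
```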
I then adopted the same ‘strategy’ of trying to test this out using one HoloLens and my console-based recording application:
- Create some shared holograms on the HoloLens which sends an ‘object created’ message to the console application and a world anchor blob to the cloud.
- Tap on a shared hologram, which starts it slowly rotating, scaling and translating while sending ‘object transformed’ messages to the console application.
- Tap again on the shared hologram, which sends an ‘object deleted’ message to the console application.
- Restart the HoloLens application.
- Replay all the recorded messages from the console application to see whether the HoloLens application behaves correctly when it receives those messages: creating the hologram in the right place, transforming it in the right way and ultimately deleting it again.
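To make the record/replay idea above concrete, here's a minimal sketch of what the console side might look like. This is not the actual recording application from the repo; the multicast group address and port are assumptions, and the receive loop is elided:

```csharp
// Sketch of the record/replay idea - the multicast group/port below are
// assumed values, not necessarily what the real console application uses.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Recorder
{
    static readonly IPEndPoint GroupEndpoint =
        new IPEndPoint(IPAddress.Parse("239.0.0.1"), 49152); // assumed

    static void Main()
    {
        var recorded = new List<(TimeSpan offset, byte[] datagram)>();

        // Record: capture every multicast datagram along with the time
        // at which it arrived, relative to the start of the recording.
        using (var client = new UdpClient(GroupEndpoint.Port))
        {
            client.JoinMulticastGroup(GroupEndpoint.Address);
            var start = DateTime.UtcNow;

            Console.WriteLine("Recording - press ENTER to stop...");
            // ...receive loop omitted: each datagram would be stored via
            // recorded.Add((DateTime.UtcNow - start, buffer));
        }

        // Replay: send the datagrams back to the group with the same
        // relative timing, so a freshly restarted HoloLens app sees the
        // original create/transform/delete sequence play out again.
        using (var sender = new UdpClient())
        {
            var start = DateTime.UtcNow;

            foreach (var (offset, datagram) in recorded)
            {
                var wait = offset - (DateTime.UtcNow - start);

                if (wait > TimeSpan.Zero)
                {
                    Thread.Sleep(wait);
                }
                sender.Send(datagram, datagram.Length, GroupEndpoint);
            }
        }
    }
}
```

Preserving the relative timing matters here because the ‘object transformed’ messages arrive as a stream; replaying them as fast as possible would collapse the slow rotate/scale/translate animation into an instant jump.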
and things seemed to work out reasonably well, so I pushed those changes back to GitHub.
With that, I’m reasonably happy that this works, although I’d still like to test it on multiple devices.
However, there’s still a big item missing from the ‘TODO’ list that I had at the end of this post and that’s the notion of having some kind of ‘memory’ of the shared scene that’s being built up.
So far, the code that I’ve written can cope with a scenario where N devices are on a network and holograms are created, deleted and manipulated (rotate, scale, translate) while all the devices are alive and receiving multicast messages.
What it doesn’t handle at all is the scenario where one device comes along, sets up a ‘scene’ with some shared holograms, world anchors, transformations and so on, and then another device joins that scene at a later point in time, having missed all the detailed messages that were multicast while it wasn’t “present” on the network.
How does a joining device get an up to date picture of the shared holograms that other devices have already created? Right now, that’s a “TBD” for this particular set of blog posts but I’ll come back to it if I can find some time to put something in place…