“Hello World” Mixed Reality Demo from the UK TechKnowDay Event 2018

I had the privilege to be invited to speak at the UK TechKnowDay Event today as part of International Women’s Day;

and I went along with my colleague, Pete, and talked to the attendees about Windows Mixed Reality.

As part of that, I’d put together a very simple “Hello World” demo: a 3D model of an avatar who appears when the user air-taps on a HoloLens and then falls with a parachute to the floor. This is really just a way of showing the basics of using Unity, the Mixed Reality Toolkit and Visual Studio to make something that runs on HoloLens and blends the digital with the physical.

At the event, we shortened the demo because we were running a little low on time and so I promised to include the materials on the web somewhere and that’s what this post is about.

First, I made 3 models using Paint 3D and I wanted to include that little video here – it’s intended to be spoken over, so there’s no audio on it;

and then there’s a little video showing me working in Unity to bring in the assets from Paint 3D and add some very, very limited interactivity to them using Unity and the Mixed Reality Toolkit.

The way the app is supposed to work is that an air tap creates an instance of the avatar. She then falls under (reduced) gravity, landing on a surface, at which point her parachute should disappear; she might then sort of ‘snowboard’ to a stop, at which point her snowboard should also disappear.
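The air-tap part of that behaviour can be sketched with the HoloToolkit’s input model along the lines below. This is a minimal sketch, not the project’s actual code – the class and field names are illustrative, and the prefab is assumed to be assigned in the Unity editor;

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Illustrative sketch: spawn an avatar prefab when the user air-taps.
public class SpawnOnAirTap : MonoBehaviour, IInputClickHandler
{
    public GameObject ParachutePrefab; // hypothetical field, assigned in the editor

    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Place the new avatar a couple of metres along the user's gaze.
        var position = Camera.main.transform.position +
                       Camera.main.transform.forward * 2.0f;

        Instantiate(ParachutePrefab, position, Quaternion.identity);
    }
}
```

For the handler to fire regardless of what’s being gazed at, the script’s GameObject would typically be registered as a global listener via the toolkit’s InputManager rather than relying on a collider being hit.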

I’m not sure that anyone would want this coding masterpiece, but if they did then it’s on GitHub over here;

https://github.com/mtaulty/parachutes

Feel very free to re-use, share or whatever you like with this if it’s of use to you.

Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 1)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

I’ve written a number of posts about the idea of shared holographic experiences including the posts below;

Experiments with Shared Holographic Experiences and Photon Unity Networking

Experiments with Shared Holographic Experiences and AllJoyn (Spoiler Alert- this one does not end well!)

Hitchhiking the HoloToolkit-Unity, Leg 13–Continuing with Shared Experiences

Hitchhiking the HoloToolkit-Unity, Leg 11–More Steps with Sharing Holographic Experiences

Hitchhiking the HoloToolkit-Unity, Leg 10–Baby Steps with Sharing Holographic Experiences

Hitchhiking the HoloToolkit-Unity, Leg 5–Baby Steps with World Anchors and Persisting Holograms

It sometimes feels like I can’t leave this topic alone, but that’s perhaps to be expected as I’ve seen a lot of developer interest in applications that offer a shared holographic experience and so maybe it’s right that I keep coming back to it.

Additionally, when I’ve worked with developers to set up shared holograms in their applications I’ve found that they sometimes get bogged down in the enabling technologies that underpin the Mixed Reality Toolkit (for Unity) such as UNET or the networked sharing service.

With that in mind, I wanted to return to the shared holograms topic again with some experiments around enabling shared holograms without using the Mixed Reality Toolkit/Unity and without using some external enabling library such as UNET or Photon.

This is the first in a small series of posts taking a look at that type of experiment and it starts with thinking about what needs to be passed over the network.

Enabling Shared Holograms

In order to enable shared holograms, we need some kind of networking that enables two things;

  1. The ability to quickly send messages between HoloLens devices on a common network – examples might be “Hologram has been created” or “Hologram has moved”.
    • I’m assuming that a ‘common network’ is ok here and I’m not considering the case where the HoloLens devices are separated by network infrastructure, the internet, etc.
  2. The ability to send serialized blobs representing HoloLens ‘spatial anchors’ between HoloLens devices on a common network.
    • This is necessary to establish a common co-ordinate system between devices.
    • Based on my past experiments, I think it’s safe to assume that these serialized blobs can easily grow to 1–10MB in size, which means that transfers can take some time and that some networking technologies might not really be appropriate.

There are many, many ways in which this can be achieved and some of them appear in the previous posts that I’ve referred to above.

For this small set of posts I’m going to use two technologies to try and implement 1 and 2 above;

  1. I’m going to use UDP multicasting to distribute small network messages to a set of HoloLens devices.
  2. I’m going to use Azure Blob Storage to store serialized spatial anchor blobs.
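For the second of those, the shape of the blob interaction is fairly simple – something like the sketch below, written against the WindowsAzure.Storage client library. This is illustrative only: the container name, method names and overall structure are my assumptions here, not code from the posts that follow;

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Illustrative sketch of storing/retrieving a serialized spatial anchor blob.
static class AnchorBlobStore
{
    public static async Task UploadAnchorAsync(
        string connectionString, string anchorId, byte[] anchorBits)
    {
        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("anchors"); // container name is illustrative

        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(anchorId);
        await blob.UploadFromByteArrayAsync(anchorBits, 0, anchorBits.Length);
    }

    public static async Task<byte[]> DownloadAnchorAsync(
        string connectionString, string anchorId)
    {
        var blob = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("anchors")
            .GetBlockBlobReference(anchorId);

        using (var stream = new MemoryStream())
        {
            await blob.DownloadToStreamAsync(stream);
            return stream.ToArray();
        }
    }
}
```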

In many ways, this is similar to what I did in the blog post;

Experiments with Shared Holographic Experiences and AllJoyn (Spoiler Alert- this one does not end well!)

except that I’m taking AllJoyn out of the equation and using UDP multicasting instead.

Against that backdrop, I needed a library to enable UDP multicasting of messages between devices and so I made one…

A Quick UDP Multicasting Library

I wrote a quick library for UDP Multicasting of messages and dropped it onto github over here.

There’s nothing radical or clever in there; the intention is to multicast single, UDP-packet-sized messages around the network, making it easy to send them and pick them up again on different machines (the library doesn’t currently support multiple processes on the same machine).
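The underlying mechanics of UDP multicasting in .NET are fairly small; a minimal sketch (not the library’s actual code – the group address and port just mirror the defaults mentioned later in this post) looks something like;

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Minimal sketch of multicast send/receive with UdpClient.
class MulticastSketch
{
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.0");
    const int Port = 49152;

    static void Main()
    {
        // Receiver: bind to the port and join the multicast group.
        var receiver = new UdpClient();
        receiver.Client.SetSocketOption(
            SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        receiver.Client.Bind(new IPEndPoint(IPAddress.Any, Port));
        receiver.JoinMulticastGroup(Group);

        // Sender: any UdpClient can send to the group endpoint.
        var sender = new UdpClient();
        var payload = Encoding.UTF8.GetBytes("hello");
        sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));

        // Blocks until a packet arrives on the group/port.
        var remote = new IPEndPoint(IPAddress.Any, 0);
        var received = receiver.Receive(ref remote);
        Console.WriteLine(Encoding.UTF8.GetString(received));
    }
}
```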

I wanted the library to target both UWP (specifically SDK level 14393 for HoloLens) and .NET Framework (specifically I’ve targeted the “Unity 3.5 .net Subset BCL”) and so inside the solution are a few projects;

[Screenshot: the projects inside the solution]

and so the code is 100% in the SharedCode folder referenced from the other two projects, and at the end of the build process out should pop one library targeting .NET 3.5 for Unity and another targeting UWP.

Despite the (bad!) naming of the solutions and projects in the screenshot above, both projects should build a library called BroadcastMessaging.dll and that’s intentional because it will hopefully allow me to use the .NET 3.5 version as a placeholder library for Unity’s editor while the UWP library will be the library actually used at runtime. Unity likes placeholder and replacement libraries to have the same name.

The way that the library is intended to work is by requiring both message senders and receivers to have common types derived from the base class Message in the library, with any state serialized by overriding the Load/Save methods, as in this example message class which is intended to carry around a GUID;

namespace SharedTestAppCode
{
    using BroadcastMessaging;
    using System;
    using System.IO;

    public class TestGuidMessage : Message
    {
        public override void Load(BinaryReader reader)
        {
            this.Id = Guid.Parse(reader.ReadString());
        }
        public override void Save(BinaryWriter writer)
        {
            writer.Write(this.Id.ToString());
        }
        public Guid Id { get; set; }
    }
}
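A quick way to sanity-check a Load/Save pair like that is to round-trip it through a MemoryStream. The sketch below does that with a standalone copy of the class (minus the library’s Message base, so it can run anywhere);

```csharp
using System;
using System.IO;

// Standalone copy of the GUID message (without the library's Message base)
// used to round-trip an Id through Save and then Load.
public class StandaloneGuidMessage
{
    public Guid Id { get; set; }

    public void Save(BinaryWriter writer)
    {
        writer.Write(this.Id.ToString());
    }
    public void Load(BinaryReader reader)
    {
        this.Id = Guid.Parse(reader.ReadString());
    }
}

public static class RoundTrip
{
    public static void Main()
    {
        var original = new StandaloneGuidMessage { Id = Guid.NewGuid() };

        var stream = new MemoryStream();
        original.Save(new BinaryWriter(stream));

        stream.Position = 0;
        var copy = new StandaloneGuidMessage();
        copy.Load(new BinaryReader(stream));

        Console.WriteLine(copy.Id == original.Id); // expect True
    }
}
```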

In fact, identification of a message type is done purely by its short, unqualified .NET type name, so MyNamespaceOne.MyMessage and MyNamespaceTwo.MyMessage would be confused by my library; clearly, it could be a lot more robust than it currently is.

Once a message type is defined, there’s a type called MessageRegistrar which needs to know how to create these messages, and so there’s a call to register them;

            MessageRegistrar messageRegistrar = new MessageRegistrar();

            messageRegistrar.RegisterMessageFactory<TestGuidMessage>(
                () => new TestGuidMessage());

and that means that the system can now create one of these message types should it need to. The registrar also deals with adding callbacks if code wants to be notified when one of these message types arrives on the network;

            messageRegistrar.RegisterMessageHandler<TestGuidMessage>(
                msg =>
                {
                    var testMessage = msg as TestGuidMessage;

                    if (testMessage != null)
                    {
                        Console.WriteLine(
                            $"\tReceived a message from {testMessage.Id}");
                    }
                }
            );

and once that is set up there’s a class named MessageService which does the work of sending/receiving messages and it’s instantiated as below;

            var messageService = new MessageService(messageRegistrar, ipAddress);

and there are extra parameters here to control the multicast address being used (it defaults to 239.0.0.0) and the port being used (it defaults to 49152), and there’s also a need to pass the local IP address to which the UdpClient that this library uses should bind itself.

I find that on my machines getting hold of that local IP address isn’t always “so easy” across .NET Framework and UWP, so I added another type to my library to encode a process for trying to get hold of it;

            var ipAddress =
                NetworkUtility.GetConnectedIpAddresses(false, true, AddressFamilyType.IP4)
                .FirstOrDefault()
                .ToString();

and the library routine NetworkUtility.GetConnectedIpAddresses takes parameters controlling whether to consider only WiFi networks, whether to rule out networks that appear to be ‘virtual’, and which address family (IP4/6) to consider. I’m sure that this code will have problems on some systems and could be improved, but it seems to work reasonably well on the PCs that I’ve tried it on to date.
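The kind of logic such a routine might use on .NET Framework can be sketched with the standard NetworkInterface APIs – the real NetworkUtility in the library may well differ from this;

```csharp
using System.Linq;
using System.Net.NetworkInformation;
using System.Net.Sockets;

// Sketch: find the first connected IPv4 address, optionally WiFi-only.
static class LocalAddressSketch
{
    public static string FirstIPv4(bool wifiOnly)
    {
        return NetworkInterface.GetAllNetworkInterfaces()
            .Where(n => n.OperationalStatus == OperationalStatus.Up)
            .Where(n => n.NetworkInterfaceType != NetworkInterfaceType.Loopback)
            .Where(n => !wifiOnly ||
                        n.NetworkInterfaceType == NetworkInterfaceType.Wireless80211)
            .SelectMany(n => n.GetIPProperties().UnicastAddresses)
            .Select(u => u.Address)
            .Where(a => a.AddressFamily == AddressFamily.InterNetwork)
            .Select(a => a.ToString())
            .FirstOrDefault(); // null if nothing suitable is connected
    }
}
```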

With that all set up, the only remaining thing to do is to send messages via the MessageService and that code looks something like;

messageService.Open();
Console.WriteLine("Hit X to exit, S to send a message");

var msg = new TestGuidMessage()
{
    Id = guid
};
messageService.Send(msg,
    sent =>
    {
        Console.WriteLine($"\tMessage sent? {sent}"); 
    }
);

with the callback being an optional part of sending a message. I’ve used callbacks here rather than exposing Task-based asynchronous code because I want the API surface to be the same across .NET Framework 3.5 and UWP clients. For similar reasons, I ended up with some methods taking object-based parameters as I struggled with some co/contravariance pieces across the two different .NETs.

The other projects in the solution here are test apps, with one being a .NET console application targeting 3.5 and the other a UWP application targeting 14393.

Both applications create a GUID to identify themselves and then multicast it over the network, logging when they receive a message from somewhere else on the network.

You can see those “in action” below – here’s the console app receiving/sending messages;

[Screenshot: the console app sending/receiving messages]

and here’s the UWP app sending/receiving messages;

[Screenshot: the UWP app sending/receiving messages]

And that’s pretty much it for this post. In the next post, I’ll start to add some code inside of Unity that makes use of this layer and ties it in with Azure blob storage to build up the basics of some shared holograms.

HoloLens<->Immersive Headset Tracking Experiment Part 2

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Just a small update to a recent post where I used the sharing service from the Mixed Reality Toolkit in order to experiment with the idea of creating a common co-ordinate system between an app running on an immersive Mixed Reality headset and the same app running on the holographic HoloLens headset.

As part of that, I had the immersive headset broadcast its position to the HoloLens such that the HoloLens could track it as it moved around.

It seemed obvious that this should also work in the other direction – i.e. the HoloLens could also broadcast its position such that the immersive headset could track it as it moved around and so I’ve modified the code to make that change.

The code is where it was previously; it’s quite rough but quite good fun.

The screenshot below shows a capture from the Unity editor where the app is running on an immersive headset and the glasses represent where the immersive headset thinks the HoloLens is in the ‘real’ world around it – it seems to work reasonably well.

[Screenshot: Unity editor view of the glasses model tracking the HoloLens position]