Exploring the Mixed Reality Design Labs–Experiment #2

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this previous post;

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

I wanted to continue exploring some of the things present in the Mixed Reality Design Labs work. Since my last post, I’d revisited the github repo and found this doc page which I hadn’t read on my previous visit and it’s a great read – without it I’d felt a little like I was wandering without a map. I’m not quite sure how I missed it the first time around;

MRDL – Examples Write Up Including Interactable Objects, Object Collection, Progress, App Bar and Bounding Box

That’s definitely a good read and I’d also missed this document about including the MRDL as a submodule;

https://github.com/Microsoft/MRDesignLabs_Unity_Tools

and yet another thing that I’d missed was that the MRDL inserts a custom menu into Unity;

image

which can be used to insert the HoloLens prefab I mentioned in the previous post (from the Interface> menu) and to create the other areas of functionality listed on the menu, including quite a few buttons, receivers and cursors.

Exploring

The rest of this post is just the rough notes that I wrote down while exploring one area of the MRDL. I chose to experiment with buttons, as UIs often seem to end up with one type of button or another, and I figured that I would poke around in the code and start with the Button type;

Button on github

and that told me that there’s an abstract base class here which has (at least);

  • ButtonState (pressed, targeted, disabled, etc)
  • Whether the button requires the gaze to be on it or not
  • Events for when the state changes, when it is pressed, held, released, cancelled

along with a few private/implementation pieces. It all feels fairly ‘expected’ but there’s a relationship here with an InteractionManager;

InteractionManager on github

which looks to be a singleton handling things like tapping, manipulation and navigation events and routing them (via Unity’s SendMessage) on through an AFocuser object.

AFocuser on github

This looks to be a perhaps more developed form of what’s done in the HoloToolkit-Unity by types like the GazeManager, so it’s "interesting" that this framework looks to be reworking these particular wheels rather than picking up those bits from the HoloToolkit.
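Backing up to that SendMessage-based routing for a second – for anyone not familiar with Unity’s SendMessage, my own minimal illustration (the names here are mine, not the MRDL’s) of forwarding an event to whichever object currently has focus would be something like;

using UnityEngine;

public class RoutingSketch : MonoBehaviour
{
  // whatever the focuser currently says is the targeted object
  public GameObject focusedObject;

  void ForwardTap()
  {
    // Unity invokes any method named 'OnTapped' on any component of focusedObject;
    // DontRequireReceiver means it's fine if nothing implements it
    if (focusedObject != null)
    {
      focusedObject.SendMessage("OnTapped", SendMessageOptions.DontRequireReceiver);
    }
  }
}

SendMessage dispatches by method name at runtime, which is worth keeping in mind for later on when I hit some oddities around which OnTapped signature actually gets called.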

There would be quite a lot to explore here and I didn’t dig into all of it, that’ll have to be for another day. For today, I went back to exploring buttons and the derived types look to be;

  • KeyButton
  • AnimButton
  • SpriteButton
  • MeshButton
  • CompoundButton
  • AnimControllerButton
  • BoundingBoxHandle
  • ObjectButton

and I went back to reading the document on these and also had a good poke around the Interactable Object sample;

image

and I think I started to get a bit of a grip on what was going on but I daresay I’ve got a bit more to learn here!

I tentatively added an empty parent object and a cube to my scene;

image

and then added the Compound Button script to my GameObject and it moaned at me (in a good way);

image

So I took away the box collider that comes by default with my cube and it said;

image

and so I added a box collider to the empty parent game object and the button became ‘happy’ Smile

image

I then got a bit adventurous, having noticed the notion of ‘receivers’ which look to be a form of event relay and I added a sphere to my scene and set up a “Color Receiver” on my empty game object;

image

and, sure enough, when I click on my cube my sphere toggles red/white;

image

but, equally, I think I could just handle this event myself by writing code – e.g.

  private void Start()
  {
    var button = this.GetComponent<CompoundButton>();
    button.OnButtonPressed += this.OnPressed;
  }
  private void OnPressed(GameObject obj)
  {
    // my assumption is that the event hands over the button's GameObject here
  }

and that seems to work just fine. I did then wonder whether I could create some hierarchy like this in my scene;

image

and then could I handle the button press by adding a script to the GrandParent object? I tried adding something like this;

using HUX.Interaction;

public class Startup : InteractibleObject
{
  private void Start()
  {
  }
  protected void FocusEnter()
  {
  }
  protected void FocusExit()
  {
  }
  protected void OnTapped(InteractionManager.InteractionEventArgs eventArgs)
  {
  }
}

but the debugger didn’t suggest that my OnTapped method was called. However, the FocusEnter and FocusExit calls do happen at this ‘grand parent’ level and this seems to be in line with the comments inside of the source code;

InteractibleObject on github

which says;

/// FocusEnter() & FocusExit() will bubble up through the hierarchy, starting from the Prime Focus collider.
///
/// All other messages will only be sent to the Prime Focus collider

and this notion of the ‘Prime Focus collider’ led me to go and take a look at the source for;

AFocuser on github

where the UpdateFocus method actually walks the hierarchy to build up the list of parent objects that will need to be notified of focus loss/gain while it updates its notion of the PrimeFocus and so (from a quick look) that all seems to tie up.
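My mental model of that walk – and this is just my own sketch rather than the AFocuser source – is something along these lines;

using System.Collections.Generic;
using UnityEngine;

public static class FocusChainSketch
{
  // starting from the prime focus object, gather it and all of its parents so that
  // FocusEnter/FocusExit can be 'bubbled' to each level of the hierarchy
  public static List<GameObject> CollectFocusChain(GameObject primeFocus)
  {
    var chain = new List<GameObject>();
    var current = (primeFocus != null) ? primeFocus.transform : null;

    while (current != null)
    {
      chain.Add(current.gameObject);
      current = current.parent;
    }
    return (chain);
  }
}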

I think I could achieve what I wanted, though, by making my grand parent script an InteractionReceiver (as the sample does) and then I can pick up the button press that way – i.e.

using HUX.Interaction;
using UnityEngine;

public class Startup : InteractionReceiver
{
  private void Start()
  {
  }
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // obj is the interactible object that was tapped
    base.OnTapped(obj, eventArgs);
  }
}

and if I marry this up with the right settings in the UI to tell that script which interactible objects I want it to receive from;

image

then that seems to work out fine.

Quite commonly in a Mixed Reality app, I’d like to use speech in addition to moving my gaze and air-tapping and so it looks like the MRDL makes that easy in that I can add;

image

although I found that when I did this, I hit a snag in that the ColorReceiver that I’d previously added seemed to work fine when invoked by an air-tap but didn’t work when invoked by the speech command ‘click’ and that seemed to come down to this runtime error;

Failed to call function OnTapped of class ColorReceiver
Calling function OnTapped with no parameters but the function requires 2.

so maybe that’s a bug or maybe I’m misunderstanding how it’s meant to work. If I take the ColorReceiver away and handle the button’s OnButtonPressed event myself then I still see something similar – i.e. my code runs when I tap on the button but, when I say "click", it doesn’t run and, instead, I see the debug output saying;

Keyword handler called in GameObject for keyword click with confidence level High

and I saw the same thing if I went back to having my code be an InteractionReceiver in that the air-tap seems to result in one call whereas the voice command “click” seems to result in another as below;

public class Startup : InteractionReceiver
{
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // This is called when I air-tap
    base.OnTapped(obj, eventArgs);
  }
  void OnTapped()
  {
    // This is called when I say 'click'
  }
}

and, again, I’m unsure whether that’s my understanding or whether it’s not quite working right but I figured I’d move on as I’d noticed that the “Compound Button Speech” script took two keyword sources – one was the local override I’ve used above where I can simply set the text but the other looks for a Compound Button Text;

image

and so I added one of those in, chose the provided profile and fed it a 3DTextMesh and then I selected that I wanted to override the Offset property and just dragged my text mesh around a little in Unity to try and position it ‘sensibly’;

image

and that all seemed to work fine. It’d be great to have my button give audible cues when the user interacted with it and so I also added in a Compound Button Sounds script which then wants a ButtonSoundProfile and I played with creating my own versus using the one that ships in the library;

image

and that worked fine once I’d managed to figure out how to get the sounds to come out properly over the holographic remoting app from Unity.

At this point, I’d added quite a lot of scripts to my original cube and so I reset things and went and grabbed a 3D object from Remix3D, this firefighter;

image

and dropped it into my scene as a child of my GameObject;

image

and then added back the Compound Button script and a Box Collider and then went and added the Compound Button Mesh script and tried to set up some scale and colour changes based on the states within;

image

and that seemed to work out fine – i.e. when I pressed on the button, the fireman got scaled up and the mesh got rendered in red;

image

so, that’s all really useful.

I then threw away my scene again, went back to just having a cube and set up a couple of animations – one which rotated the cube by 45 degrees and another which put it back to 0 and I built an animator around those with the transitions triggered by a change in the Targeted boolean parameter;

image

and then dragged an Animator and a Compound Button Anim component onto my GameObject;

image

and that seemed to give me the basics of having my cube animate into rotation when I focus on it and animate back when the focus moves away – that seemed like a very useful tool to have in the toolbox Smile I noticed that Object Button seems to do something similar except that it looks to model the various states via a set of different prefabs – i.e.

image
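Coming back to the Animator route for a second, my guess – and it is only a guess, I haven’t dug through the Compound Button Anim source in any detail – is that the component is doing something conceptually like the sketch below, flipping that ‘Targeted’ parameter as focus comes and goes;

using UnityEngine;

public class TargetedAnimSketch : MonoBehaviour
{
  Animator animator;

  void Start()
  {
    // assumes an Animator with a boolean 'Targeted' parameter sits on the same object
    this.animator = this.GetComponent<Animator>();
  }
  // FocusEnter/FocusExit being the messages that (as earlier) get routed to the focused object
  void FocusEnter()
  {
    this.animator.SetBool("Targeted", true);
  }
  void FocusExit()
  {
    this.animator.SetBool("Targeted", false);
  }
}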

The last one of these Compound Button X types that I wanted to get my head around for this post was the Compound Button Icon type. This feels a little bit like the Text variant in that I can create an empty GameObject and then make it into a Compound Button (+Icon) as below;

image

and this seems to be driven off a ButtonIconProfile which can either be font based or texture based so I set up one that was font based;

image

and then there’s a need here for something to render the icon and I found it “interesting” to add a Cube as a child of my button object and then toggle the dropdown here to select my Cube as the rendering object. The component made a few changes on my behalf!

Here’s the before picture of my cube;

image

and this is what happens to it when I choose it as the renderer for my icon;

image

so – the mesh filter has gone and the material/shader has been changed for me and I can then go back to the Compound Button Icon component and choose the icon;

image

Very cool.

Wrapping Up

Having done a bit of exploring, I can now start to get some idea of what the tooling is doing if I use an option like;

image

and create myself a “Rectangle Button” which gives me all the glory of;

image

and so I’ve got compound, mesh, sounds, icon, text and speech all in one go and ready to be used and it takes me only a second or two in order to get buttons created;

image

and there’s a lot of flexibility in there.

As I said at the start of the post, I’m just experimenting here and I may well be getting things wrong so feel free to let me know and I’ll carry on my exploring in later posts…

Experiments with Shared Holographic Experiences and AllJoyn (Spoiler Alert: this one does not end well!)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

This post is an interesting one in that it represents two sides of a day or so of technical failure! I tried to get something up and running and it didn’t quite work out, but I thought it was worth sharing regardless, though you should apply a large pinch of salt as the result isn’t anything that works too well Smile

Background – Catching up with AllJoyn

It’s been a little while since I looked at AllJoyn and the AllSeen Alliance.

In fact, long enough that the landscape has changed with the merger between the OCF (IoTivity) and the AllSeen Alliance (AllJoyn) with the result that (from the website);

Both projects will collaborate to support future versions of the OCF specification in a single IoTivity implementation that combines the best of both technologies into a unified solution. Current devices running on either AllJoyn or IoTivity solutions will be interoperable and backward-compatible. Companies already developing IoT solutions based on either technology can proceed with the confidence that their products will be compatible with the unified IoT standard that the industry has been asking for.

and so I guess that any use of AllJoyn at this point has to be seen against that backdrop.

Scenario – Using AllJoyn to Share Holograms Across Devices

With that said, I still have AllJoyn APIs in Windows 10 and I wanted to see whether I could use those as a basis for the sharing of Holograms across devices.

The idea would be that a HoloLens app becomes both an AllJoyn consumer and producer so that each device on a network could find all the other devices and then share data such that a common co-ordinate system could be established via world anchors, and hologram creation/removal and positioning could also be shared.

If you’re not so familiar with this idea of shared holographic experiences then the official documentation lives here;

https://developer.microsoft.com/en-us/windows/mixed-reality/development 

and I’ve written a number of posts around this area of which this one would be the most recent;

Hitchhiking the HoloToolkit-Unity, Leg 13–Continuing with Shared Experiences

My idea for AllJoyn was to avoid any central server and let it do the work of having devices discover each other and connect so that they could form a network where they consume each other’s services as below;

image

Now, I don’t think that I’d tried this previously but there was something nagging at the back of my mind about the AllJoyn APIs not being something that I could use in the way that I wanted to on HoloLens and I should really have taken heed.

Instead, I ploughed ahead…

Making a “Hello World” AllJoyn Interface

I figured that the “Hello World” here might involve a HoloLens app offering an interface whereby a remote caller could create some object (e.g. a cube) at a particular location in world space.

That seemed like a simple enough thing to figure out and so I sketched out a quick interface for AllJoyn to do something like that;

<interface name="com.taulty.RemoteHolograms">
  <method name="CreateCube">
    <arg name="x" type="d" direction="in" />
    <arg name="y" type="d" direction="in" />
    <arg name="z" type="d" direction="in" />
  </method>
</interface>

I then struggled a little with respect to knowing where to go next in that the tool alljoyncodegen.exe which I used to use to generate UWP code from an AllJoyn interface seemed to have disappeared from the SDKs.

I found this doc page which suggested that the tool was deprecated and, for a while, I landed on this page which sounded similar but turned out to have only been tested on Linux and not to generate UWP code, so it was actually very different Smile

I gave up on the command line tool and went off to try and find AllJoyn Studio which does seem to still exist but only for Visual Studio 2015 which was ‘kind of ok’ because I still have VS2015 on my system alongside VS2017.

Whether all these changes are because of the AllJoyn/IoTivity merging, I’ve no idea but it certainly foxed me for a little while.

Regardless, I fed AllJoyn Studio in Visual Studio 2015 my little XML interface definition;

image

and it spat out some C++/CX code providing me with a bunch of boilerplate AllJoyn implementation code which I retargeted to platform version 14393 as the tool seemed to favour 10586;

image

Writing Some Code for Unity to Consume

At this point, I was a little over-zealous in that I thought that the next step would be to try and write a library which would make it easy for Unity to make use of the code that I’d just had generated.

So, I went off and made a C# library project that referenced the newly generated code and I wrote this code to sit on top of it although I never really got around to figuring out whether this code worked or not as we’ll see in a moment;

namespace AllJoynHoloServer
{
  using com.taulty.RemoteHolograms;
  using System;
  using System.Threading.Tasks;
  using Windows.Devices.AllJoyn;
  using Windows.Foundation;

  public class AllJoynCreateCubeEventArgs : EventArgs
  {
    internal AllJoynCreateCubeEventArgs()
    {
        
    }
    public double X { get; internal set; }
    public double Y { get; internal set; }
    public double Z { get; internal set; }
  }

  public class ServiceDispatcher : IRemoteHologramsService
  {
    public ServiceDispatcher(Action<double, double, double> handler)
    {
      this.handler = handler;
    }
    public IAsyncOperation<RemoteHologramsCreateCubeResult> CreateCubeAsync(
      AllJoynMessageInfo info,
      double x,
      double y,
      double z)
    {
      return (this.CreateCubeAsyncInternal(x, y, z).AsAsyncOperation());
    }
    async Task<RemoteHologramsCreateCubeResult> CreateCubeAsyncInternal(
      double x, double y, double z)
    {
      // Likelihood that the thread we're calling from here is going to
      // break Unity's threading model.
      this.handler?.Invoke(x, y, z);

      return (RemoteHologramsCreateCubeResult.CreateSuccessResult());
    }
    Action<double, double, double> handler;
  }
  public static class AllJoynEventAdvertiser
  {
    public static event EventHandler<AllJoynCreateCubeEventArgs> CubeCreated;

    public static void Start()
    {
      if (busAttachment == null)
      {
        busAttachment = new AllJoynBusAttachment();
        busAttachment.AboutData.DateOfManufacture = DateTime.Now;
        busAttachment.AboutData.DefaultAppName = "Remote Holograms";
        busAttachment.AboutData.DefaultDescription = "Creation and Manipulation of Holograms";
        busAttachment.AboutData.DefaultManufacturer = "Mike Taulty";
        busAttachment.AboutData.ModelNumber = "Number One";
        busAttachment.AboutData.SoftwareVersion = "1.0";
        busAttachment.AboutData.SupportUrl = new Uri("http://www.mtaulty.com");

        producer = new RemoteHologramsProducer(busAttachment);
        producer.Service = new ServiceDispatcher(OnCreateCube);
        producer.Start();
      }
    }
    public static void Stop()
    {
      producer?.Stop();
      producer?.Dispose();
      busAttachment?.Disconnect();
      producer = null;
      busAttachment = null;
    }
    static void OnCreateCube(double x, double y, double z)
    {
      CubeCreated?.Invoke(
        EventArgs.Empty,
        new AllJoynCreateCubeEventArgs()
        {
          X = x,
          Y = y,
          Z = z
        });
    }
    static AllJoynBusAttachment busAttachment;
    static RemoteHologramsProducer producer;
  }
}

There’s nothing particularly exciting or new here, it’s just a static class which hides the underlying AllJoyn code from anything above it and it offers the IRemoteHologramsService over the network and fires a static event whenever some caller remotely invokes the one method on that interface.

I thought that this would be pretty easy for Unity to consume and so I dragged the DLLs into Unity (as per this post) and then added the script below to a blank GameObject to run when the app started up;

#if UNITY_UWP && !UNITY_EDITOR
using AllJoynHoloServer;
#endif
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Startup : MonoBehaviour
{
  // Use this for initialization
  void Start()
  {
#if UNITY_UWP && !UNITY_EDITOR
    AllJoynEventAdvertiser.CubeCreated += this.OnCubeCreated;
    AllJoynEventAdvertiser.Start();
#endif
  }
  #if UNITY_UWP && !UNITY_EDITOR
  void OnCubeCreated(object sender, AllJoynCreateCubeEventArgs args)
  {

  }
  
#endif

}

Clearly, this isn’t fully formed or entirely thought through but I wanted to just see if I could get something up and running and so I tried to debug this code having, first, made sure that my Unity app was asking for the AllJoyn security capability.

Debugging the AllJoyn Experience

I used the IoT AllJoyn Explorer to try and see if my Unity app from the HoloLens was advertising itself correctly on the local network.

That app still comes from the Windows Store and I’ve used it before and it’s always been pretty good for me.

It took me a little while to remember that I need to think about loopback exemption when it comes to this app so that’s worth flagging here.

I found that when I ran my Unity code on the HoloLens, I didn’t seem to see the service being advertised on the AllJoyn network as displayed by the IoT Explorer. I only ended up seeing a blank screen;

image

In order to sanity check this, I ended up lifting the code that I had built into Unity out of that environment and into a 2D XAML experience to run on my local PC where things lit up as I’d expect – i.e. IoT Explorer then shows;

image

and so the code seemed to be ok and, at this point, I realised that I could have tested out the whole concept in 2D XAML and never dropped into Unity at all – there’s a lesson in there somewhere! Smile

Having proved the point on my PC, I also ran the same code on my phone and saw a similar result – the service showed up on the network.

However, no matter how I went about it I couldn’t get HoloLens to advertise this AllJoyn service and so I have to think that perhaps that part of AllJoyn isn’t present on HoloLens today.

That doesn’t surprise me too much and I’ve tried to confirm it and will update this post if I get a confirmation either way.

If this is the case, though, what might be done to achieve my original aim, which was to use AllJoyn as the basis of a shared holographic experience across devices?

I decided that there was more than one way to achieve this…

HoloLens as AllJoyn Consumer, not Producer

It wasn’t what I’d originally had in mind but I figured I could change my scenario such that the HoloLens did not offer an AllJoyn service to be consumed but, instead, consumed an AllJoyn service offered onto the network by another device (like a PC or Phone). The diagram then becomes…

image

and so there’s a need for an extra player here (the PC) and also a need for using events or signals in AllJoyn to inform devices when something has happened on ‘the server’ that they need to know about such as a world anchor being created and uploaded or a new hologram being created.

A New Interface with Signals

I figured that I was onto a winner and so set about implementing this. Firstly, I conjured up a basic interface to try and model the core scenarios of;

  1. Devices join/leave the network and other devices might want to know how many devices they are in a shared experience with.
  2. World anchors get uploaded to the network and other devices might want to import them in order to add a common co-ordinate system.
  3. Holograms get added by a user (I’m assuming that they would be added as a child of a world anchor and so there would be an association there).
  4. Holograms get removed by a user.

In addition, I’d also want to add the ability for the transforms (scale, rotate, translate) of a hologram to be changed but I left that to one side while trying to see if I could get these bits to work.
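Just to make those scenarios a little more concrete before writing any XML, the kind of notifications that I’d expect a client to want to handle might look something like the (entirely hypothetical) C# below – this isn’t the interface that I actually ended up with, it’s just me sketching the shape of it;

using System;

public interface ISharedHologramEvents
{
  // 1. devices joining/leaving the shared experience
  void DeviceCountChanged(int deviceCount);

  // 2. a world anchor has been uploaded and is available for import
  void WorldAnchorAdded(Guid anchorId, string anchorName);

  // 3. a hologram has been created as a child of some world anchor
  void HologramAdded(Guid hologramId, Guid anchorId, string hologramType,
    double x, double y, double z);

  // 4. a hologram has been removed
  void HologramRemoved(Guid hologramId);
}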

With this in mind, I created a new AllJoyn interface as below;

AllJoyn Interface on Github

and once again I should perhaps have realised that things weren’t going too well here.

Everything about that interface is probably self-explanatory except a few small things;

  1. I used GUIDs to identify world anchors and holograms.
  2. I’m assuming that types of holograms can be represented by simple strings – e.g. device 1 may create a hologram of an Xmas tree and I’m assuming that device 2 will get a string "Xmas tree" and know what to do with it.
  3. I wanted to have a single method which returns the [Id/Name/Type/Position] of a hologram but I found that this broke the code generation tool, hence my methods GetHologramIdsAndNames and GetHologramTransforms – these should really be one method.

and a larger thing;

  • My original method for AddWorldAnchor simply took a byte array but I later discovered that AllJoyn seems to have a maximum message size of 128K whereas world anchors can be megabytes and so I added a capability here to “chunk” the world anchor into pieces and that affected this method and also the GetWorldAnchor method.
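To illustrate what I mean by 'chunking' – and this is a simplified sketch with made-up names rather than the code that I actually wrote – the sending side ends up doing something like this;

using System;
using System.Threading.Tasks;

public static class AnchorChunkingSketch
{
  // keep each AllJoyn message comfortably under the ~128K limit
  const int ChunkSize = 64 * 1024;

  public static async Task UploadAnchorAsync(
    string anchorId,
    byte[] anchorBytes,
    Func<string, uint, byte[], Task> sendChunkAsync)
  {
    for (int offset = 0; offset < anchorBytes.Length; offset += ChunkSize)
    {
      int length = Math.Min(ChunkSize, anchorBytes.Length - offset);
      var chunk = new byte[length];
      Array.Copy(anchorBytes, offset, chunk, 0, length);

      // sendChunkAsync stands in for whatever call actually pushes the piece
      // over the AllJoyn interface (an AddWorldAnchor-style method)
      await sendChunkAsync(anchorId, (uint)offset, chunk);
    }
  }
}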

I should have stopped at this point as this limitation of 128K was flashing a warning light but I ignored it and pressed ahead.

A ‘Server’ App to Implement the Interface

Having generated the code from that new interface (highlighted blue below) I went ahead and generated a UWP app which could act as the ‘server’ (or producer) on the PC (highlighted orange below);

image

That involved implementing the generated IAJHoloServerService interface which I did as below;

AJHoloService implementation on Github

and then I can build this up (with the UI XAML which isn’t really worth listing) and have it run on my desktop;

image

waiting for connections to come in.

A Client Library to make it ‘Easier’ from Unity

I also wanted a small client library to make my life easier in the Unity environment in terms of consuming this service and so I added a 3rd UWP project to my solution;

image

and added a static class which gathered up the various bits of code needed to get the AllJoyn pieces working;

AJHoloServerConnection Implementation on Github

and that relied on this simple callback interface in order to call back into any user of the class which would be my Unity code;

Callback interface on Github

Consuming from Unity

At this point, I should have really tested what it was like to consume this code from a 2D app as I’d have learned a lot while expending little effort but I didn’t do that. Instead, I went and built a largely empty scene in Unity;

image

Where the Parent object is an empty GameObject and the UITextPrefab is straight from the HoloToolkit-Unity with a couple of voice keywords attached to it via the toolkit’s Keyword Manager;

image

I made sure my 2 DLLs (the AllJoyn generated DLL plus my client library) were present in a Plugins folder;

image

with appropriate settings on them;

image

and made sure that my project was requesting the AllJoyn capability. At this point, I realised that I needed some way to relatively easily import/export world anchors in Unity without getting into a lot of code with Update() loops and state machines.

Towards that end, I wrote a couple of simple classes which I think function ok outside of the Unity editor in the UWP world but which wouldn’t work in the editor. I wrote a class to help with exporting world anchors;

WorldAnchorExporter implementation on Github

and a class to help with importing world anchors;

WorldAnchorImporter implementation on Github

and they rely on a class which tries to provide a simplistic bridge between the Update() oriented loop world of Unity and the async world of UWP, as I wanted to be able to write async/await code which ‘waited’ for the isLocated flag on a WorldAnchor to be set true;

GameObjectUpdateWatcher implementation on Github
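The gist of that bridging class is captured in the simplified sketch below (again, my paraphrase rather than the code that’s in the repo) – the idea is just to poll a condition once per frame and complete a Task when it becomes true;

using System;
using System.Threading.Tasks;
using UnityEngine;

public class UpdateWatcherSketch : MonoBehaviour
{
  Func<bool> condition;
  TaskCompletionSource<bool> completionSource;

  // hands back a Task which completes when the supplied predicate first returns true
  public Task WaitUntilAsync(Func<bool> predicate)
  {
    this.condition = predicate;
    this.completionSource = new TaskCompletionSource<bool>();
    return (this.completionSource.Task);
  }
  void Update()
  {
    if ((this.condition != null) && this.condition())
    {
      this.condition = null;
      this.completionSource.SetResult(true);
    }
  }
}

which means that the calling code can write something like await watcher.WaitUntilAsync(() => worldAnchor.isLocated); and carry on in a straight line rather than managing state across Update() calls.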

With that all done, I could attach a script to my empty GameObject to form the basis of my logic stringing together the handling of the voice keywords with the import/export of the world anchors, creation of a single, simple type of hologram (a cube) and calls to the AllJoyn service on the PC;

Startup script on Github

That took quite a bit of effort, but was it worth it? Possibly not…

Testing – AllJoyn and Large Buffers

In testing, things initially looked good. I’d run the server app on the PC and the client app on the HoloLens and I’d see it connect;

image

and disconnect when I shut it down and my voice command of “lock” seemed to go through the motions of creating a world anchor and it was quickly exported from the device and then it started to transfer over the network.

And it transferred…

And it transferred…

And it took a long time…minutes…

I spent quite some time debugging this. My first investigation was to look at the network performance graph from the HoloLens and it looked like this while transferring a 2-3MB world anchor across the network in chunks (64KB) using the AllJoyn interface that I’d written;

image

It’s the classic sawtooth and I moved my same client code onto both a Phone and a PC to see if it showed the same behaviour there and, to some extent, it did although it wasn’t as pronounced.

I played quite a bit with the size of the chunks that were being sent over the network and I could see that the gaps between the bursts of traffic seemed to be directly related to the size of the buffer. Some native debugging, combined with a quick experiment with the Visual Studio profiler, pointed to this function in the generated code as being the cause of all my troubles;

image

and in so far as I can tell this runs some relatively expensive conversion function on every element in the array which burns up a bunch of CPU cycles on the HoloLens prior to the transmission of the data over the network. This seemed to slow down the world anchor transfers dramatically – I would find that it might take minutes for a world anchor to transfer which, clearly, isn’t very practical.

Putting the terrible performance that I’ve created to one side, I hit another problem…

Testing – Unexpected Method Signatures

Once I’d managed to be patient enough to upload a world anchor to my server, I found a particular problem testing out the method which downloads it back to another device. Its AllJoyn signature in the interface looks like this;

<method name="GetWorldAnchor">
  <arg name="anchorId" type="s" direction="in"/>
  <arg name="byteIndex" type="u" direction="in"/>
  <arg name="byteLength" type="u" directon="in"/>
  <arg name="anchorData" type="ay" direction="out"/>
</method>

and I found that when I called this at runtime from the client then my method’s implementation code on the server side wasn’t getting hit. All I was seeing was a client-side AllJoyn error;

ER_BUS_UNEXPECTED_SIGNATURE = 0x9015

Now, my method has 3 incoming parameters and a return value but it was curious to me that, if I used the IoT Explorer on that interface, it was showing up differently;

image

and so IoT Explorer was showing my method as having 2 inward parameters and 2 outgoing parameters or return values which isn’t what the interface specification actually says Confused smile

I wondered whether this was a bug in IoT Explorer or a bug in the runtime pieces and, through debugging the native code, I could use the debugger to cause the code to send 2 parameters rather than 3. If I did this then the method call would cross the network and fail on the server side, so it seemed that the IoT Explorer was right and my function really did expect 2 inward parameters and 2 outward parameters.

What wasn’t clear was…why and how to fix?

I spent about an hour before I realised that this problem came down to a typo in the XML where the word "direction" had been turned into "directon" and that proved to be causing my problem – there’s a lesson in there somewhere too as the code generation tool didn’t seem to tell me anything about it and just defaulted the parameter to be outgoing.

With that simple typo fixed, I could get my code up and running properly for the first time.

Wrapping Up

Once I’d got past these stumbling blocks, the code as I have it actually seems to work in that I can run through the steps of;

  1. Run the server.
  2. Run the app on my HoloLens.
  3. See the server display that the app has connected.
  4. Use the voice command “lock” to create a world anchor, export it and upload it to the server taking a very long time.
  5. Use the voice command “cube” in a few places to create cubes.
  6. Shut down the app and repeat steps 2 and 3 above to see the device connect, download the anchor (taking a very long time), import it and recreate the cubes from step 5 where they were previously.

and I’ve placed the source for what I built up here;

https://github.com/mtaulty/AllJoynHoloSharing

but, because of the bad performance on copying the world anchor buffers around, I don’t think this would be a great starting point for implementing a shared holographic server and I’d go back to using the regular pieces from the HoloToolkit rather than trying to take this forward.

That said, I learned some things in playing with it and there’s probably some code in here that I’ll re-use in the future so it was worthwhile.

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

On a recent project, I was trying to avoid having to write a simplified version of the control that the Holograms app offers on selected holograms for moving, rotating and scaling. This control here;

image

It’s essentially doing a job analogous to the resize handles that you get on a Window in the 2D world and I’d imagine that it took quite a bit of work to write, debug and get right.

I didn’t have time to try and reproduce it in full fidelity and so I really wanted to just re-use it because I knew that to write even a limited version would take me a few hours and, even then, it’d not be as good as the original.

I didn’t find such a control and so, between us, Pete and I wrote one which ended up looking a little like this;

image

and it was pretty functional but it wasn’t as good as the one in the Holograms app and it took a bunch of time to write and debug.

However, just after it was completed someone inside of the company pointed me to a place where I could have taken a better control straight off-the-shelf and that’s what I wanted to write about in this post.

The Mixed Reality Design Labs

This came completely out of the blue to me in that I hadn’t seen the Mixed Reality Design Labs before – here’s a quick screenshot from their home page on github;

image

I’m going to shorten “Mixed Reality Design Labs” to MRDL (rhymes with TURTLE) in this post because, otherwise, it gets too long.

Straight away, I could see that the home page lists a number of very interesting-sounding controls including these two, the Holobar and the Bounding Box.

 image

and my first thought was whether those controls just draw the visuals or whether they also implement the behaviour of scale/rotate/translate but, either way, they’d be saving me a tonne of work Smile

I noticed also that this repo is linked from the mixed reality development documentation over here;

image

but it doesn’t look like it’s a simple 1:1 between the MRDL and this document page because (e.g.) the guidance here around ‘Text in Unity’ actually points into pieces from the HoloToolkit-Unity rather than controls within the MRDL.

By contrast, I think the other 4 sections listed here do point into the MRDL so it’s nearly a 1:1 Smile

Including the MRDL

I wanted to try some of these things out and so I spun up a blank Unity project, added the HoloToolkit-Unity and set it up for mixed reality development (i.e. set the project, scene, capability settings using the toolkit) before going to look at how I could start to make use of these pieces.

As there didn’t seem to be any Unity packages or similar in the repo, I figured it was just a matter of copying the MRDesignLabs_Unity folder into my project and I went with that. After Unity had seen those files, I got a nice, helpful dialog around the HoloLens MDL2 Symbols font;

image

and so I went with downloading and installing that but then I realised that I was perhaps meant to import that .ttf file as an asset into Unity here and then (using the helpful button on the dialog) assign it as the font for buttons as per below;

capture2

That was easy enough but I then dropped down a bit of a black hole as there seems to be quite a lot in this project;

image

and I find this quite regularly with Unity – when someone delivers you a bunch of bits, how do you figure out which pieces they intended you to use directly and which bits are just supporting infrastructure for what they have built – i.e. there seems to be a lack of [private/public/internal] type visibility which would guide the user as to what was going on.

Rather than get bewildered though, I figured I’d go back to trying to solve my original problem – selecting, rotating, scaling, translating a 3D object…

Manipulations, Bounding Boxes, App Bars

I must admit, I got stuck pretty quickly.

Like the HoloToolkit, there are a lot of pieces here and it’s not easy to figure out how they all sit together and so I also opened up the Examples project and had a bit of a look at the scene called ManipulationGizmo_Examples which gave me more than enough clues to feel confident to play around in my own blank scene.

I started off by dragging out the HoloLens prefab which brings with it a tonne of pieces;

image

I did have a little dig around into where these pieces were coming from and what they might do but I’m not going to attempt to write that up here. What I think I noticed is that these pieces look to be largely independent rather than taking dependencies on the HoloToolkit-Unity but I may be wrong there as I haven’t spent a lot of time on it just yet.

Where I did focus in was on the Manipulation Manager, which looked like the piece that I was going to need in order to get my scale/rotate/translate functionality working. It’s a pretty simple script which instantiates the ‘Bounding Box Prefab’ and the ‘App Bar Prefab’ and keeps them around as singletons for later use, which seems to imply that only one object would have the bounding box wrapped around it at a time (i.e. a single selection model) and that seems reasonable to me. I also noticed in that script that it takes a dependency on the Singleton<T> class from the HoloToolkit-Unity, so that starts to go against my earlier thoughts around dependencies;

image
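As a mental model of what that script is doing – my paraphrase rather than the actual MRDL source – it boils down to something like;

using UnityEngine;

public class ManipulationManagerSketch : MonoBehaviour
{
  // prefabs assigned in the editor
  public GameObject boundingBoxPrefab;
  public GameObject appBarPrefab;

  public static GameObject BoundingBox { get; private set; }
  public static GameObject AppBar { get; private set; }

  void Awake()
  {
    // instantiate once and keep around – a single bounding box and app bar shared
    // by whichever object is currently selected (i.e. a single selection model)
    if (BoundingBox == null)
    {
      BoundingBox = Instantiate(this.boundingBoxPrefab);
      AppBar = Instantiate(this.appBarPrefab);
    }
  }
}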

To try this out, I added a quick cube into my scene;

image

and (from the example scene) I’d figured that I needed to add a Bounding Box Target component to my cube in order to apply the bounding box behaviour;

image

I like all the options here but I left them at their default values for now. I ran up the code on my HoloLens using Unity’s Holographic Remoting feature and, sure enough, when I tap on the cube to select it I get the bounding box and I can then use manipulations to translate, rotate and scale, matching the functionality that I asked for in the editor.

This screenshot comes from Unity displaying what’s happening remotely on the HoloLens and you can see the AppBar with its Done/Remove buttons and the grab handles which are facilitating scale/rotate/translate. What’s not visible is that the control has animations and highlights which make for a quality, production feel rather than a “Mike just hacked this together in a hurry” feel Smile

image

I’m going to try and find some time to explore more of the MRDL and I’ll write some supplementary posts as/when those work out but if you’re building for Windows Mixed Reality then I think you should have an eye on this project as it has the potential to save you a tonne of work Smile