Exploring the Mixed Reality Design Labs–Experiment #2

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this previous post;

Exploring the Mixed Reality Design Labs–Introduction and Experiment #1

I wanted to continue to explore some of the things present in the Mixed Reality Design Labs work and, since my last post, I’d revisited the GitHub repository and found this doc page which I hadn’t read the last time I visited the site. It’s a great read (without it I’d felt a little like I was wandering without a map) and I’m not quite sure how I missed it the first time around;

MRDL – Examples Write Up Including Interactable Objects, Object Collection, Progress, App Bar and Bounding Box

That’s definitely a good read and I’d also missed this document about including the MRDL as a submodule;

https://github.com/Microsoft/MRDesignLabs_Unity_Tools

and yet another thing that I’d missed was that the MRDL inserts a custom menu into Unity;

image

which can be used to insert the HoloLens prefab I mentioned in the previous post (from the Interface> menu) and to create the other areas of functionality listed there on the menu, including quite a few buttons, receivers and cursors.

Exploring

The rest of this post is just the rough notes I wrote down while exploring one area of the MRDL. I chose to experiment with buttons, as UIs often seem to end up with one type of button or another, and I figured that I would poke around in the code, starting with the Button type;

Button on github

and that told me that there’s an abstract base class here which has (at least);

  • ButtonState (pressed, targeted, disabled, etc)
  • Whether the button requires the gaze to be on it or not
  • Events for when the state changes, when it is pressed, held, released, cancelled

along with a few private/implementation pieces. It all feels fairly ‘expected’ but there’s a relationship here with an InteractionManager;

InteractionManager on github

which looks to be a singleton handling things like tap, manipulation and navigation events and somehow routing them on (via Unity’s SendMessage) with the help of an AFocuser object.

AFocuser on github

This looks to be a perhaps more developed form of what’s present in HoloToolkit-Unity via types there like the GazeManager, and so it’s “interesting” that this framework looks to be reworking these particular wheels rather than picking up those bits from the HoloToolkit.
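Before moving on, here’s a rough sketch of my mental model of that Button base class, based purely on the list above. It is not the actual MRDL source and the type, member and enum names are my own assumptions;

using System;
using UnityEngine;

// Sketch only: my guess at the shape of the Button base class, not the MRDL code.
public enum SketchButtonState
{
  Default,
  Targeted,
  Pressed,
  Disabled
}

public abstract class SketchButton : MonoBehaviour
{
  // current state of the button (pressed, targeted, disabled, etc.)
  public SketchButtonState ButtonState { get; protected set; }

  // whether the button requires the gaze to be on it before it will react
  public bool RequireGaze;

  // events for presses and for any change of state
  public event Action<GameObject> OnButtonPressed;
  public event Action<SketchButtonState> StateChange;

  protected void SetState(SketchButtonState newState)
  {
    this.ButtonState = newState;

    if (this.StateChange != null)
    {
      this.StateChange(newState);
    }
    if ((newState == SketchButtonState.Pressed) && (this.OnButtonPressed != null))
    {
      this.OnButtonPressed(this.gameObject);
    }
  }
}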

There would be quite a lot to explore here and I didn’t dig into all of it; that’ll have to wait for another day. For today, I went back to exploring buttons, and the derived types look to be;

  • KeyButton
  • AnimButton
  • SpriteButton
  • MeshButton
  • CompoundButton
  • AnimControllerButton
  • BoundingBoxHandle
  • ObjectButton

and I went back to reading the document on these and also had a good poke around the Interactable Object sample;

image

and I think I started to get a little bit of a grip on what was going on but I daresay I’ve got a bit more to learn here!

I tentatively added an empty parent object and a cube to my scene;

image

and then added the Compound Button script to my GameObject and it moaned at me (in a good way);

image

So I took away the box collider that comes by default with my cube and it said;

image

and so I added a box collider to the empty parent game object and the button became ‘happy’ 🙂

image
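In other words, the layout it seems to want is a parent object carrying both the Compound Button script and the (single) collider, with the visible mesh as a child. Purely as a sketch of that layout (and assuming the CompoundButton component lives in a HUX.Buttons namespace and can simply be added via AddComponent), building the same structure from code might look something like;

using HUX.Buttons;   // assumption on my part about the namespace that CompoundButton lives in
using UnityEngine;

public class ButtonBuilder : MonoBehaviour
{
  private void Start()
  {
    // the parent object carries the button behaviour and the only collider...
    var buttonRoot = new GameObject("MyButton");
    buttonRoot.AddComponent<BoxCollider>();
    buttonRoot.AddComponent<CompoundButton>();

    // ...while the visible mesh lives on a child with no collider of its own
    var visual = GameObject.CreatePrimitive(PrimitiveType.Cube);
    Destroy(visual.GetComponent<BoxCollider>());
    visual.transform.SetParent(buttonRoot.transform, false);
    visual.transform.localScale = Vector3.one * 0.1f;
  }
}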

I then got a bit adventurous. Having noticed the notion of ‘receivers’, which look to be a form of event relay, I added a sphere to my scene and set up a “Color Receiver” on my empty game object;

image

and, sure enough, when I click on my cube my sphere toggles red/white;

image

but, equally, I think I could just handle this event myself by writing code – e.g.

  private void Start()
  {
    var button = this.GetComponent<CompoundButton>();
    button.OnButtonPressed += this.OnPressed;
  }
  private void OnPressed(GameObject source)
  {
    // react to the press here (I'm assuming the event hands over the source GameObject)
  }

and that seems to work just fine. I did then wonder whether I could create some hierarchy like this in my scene;

image

and then could I handle the button press by adding a script to the GrandParent object? I tried adding something like this;

using HUX.Interaction;

public class Startup : InteractibleObject
{
  private void Start()
  {
  }
  // focus enter/exit - hoping to see these called at this 'grand parent' level
  protected void FocusEnter()
  {
  }
  protected void FocusExit()
  {
  }
  // tap handling - hoping this gets called when the child button is tapped
  protected void OnTapped(InteractionManager.InteractionEventArgs eventArgs)
  {
  }
}

but the debugger didn’t suggest that my OnTapped method was called. However, the FocusEnter and FocusExit calls do happen at this ‘grand parent’ level and this seems to be in line with the comments inside of the source code;

InteractibleObject on github

which says;

/// FocusEnter() & FocusExit() will bubble up through the hierarchy, starting from the Prime Focus collider.
///
/// All other messages will only be sent to the Prime Focus collider

and this notion of the ‘Prime Focus collider’ led me to go and take a look at the source for;

AFocuser on github

where the UpdateFocus method walks the hierarchy to build up the list of parent objects that need to be notified of focus loss/gain while it updates its notion of the PrimeFocus, and so (from a quick look) that all seems to tie up.
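As I read it, the gist of that walk is something like the sketch below. This isn’t the AFocuser code itself, just an illustration of starting from the prime focus collider and climbing up through its parents to work out which objects should get focus enter/exit notifications;

using System.Collections.Generic;
using UnityEngine;

public static class FocusWalkSketch
{
  // illustration only: gather the prime focus object and all of its parents, which is
  // (roughly) the set of objects that will receive the FocusEnter/FocusExit messages
  public static List<GameObject> GetFocusHierarchy(GameObject primeFocus)
  {
    var hierarchy = new List<GameObject>();

    for (var current = primeFocus.transform; current != null; current = current.parent)
    {
      hierarchy.Add(current.gameObject);
    }
    return hierarchy;
  }
}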

I think I could achieve what I wanted though by making my grand parent script an InteractionReceiver (as the sample does) and then I can pick up the button press that way – i.e.

public class Startup : InteractionReceiver
{
  private void Start()
  {
  }
  // called when any of the interactibles registered with this receiver is tapped
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    base.OnTapped(obj, eventArgs);
  }
}

and if I marry this up with the right settings in the UI to tell that script which interactible objects I want it to receive from;

image

then that seems to work out fine.
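One nice aspect of the receiver approach is that a single script can be pointed at many interactible objects and can tell them apart via the GameObject parameter that arrives in OnTapped. As a small sketch (the name check is purely for illustration), the override in my Startup class above could become;

  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // 'obj' is the interactible that raised the event, so one receiver can route
    // behaviour for several buttons
    if (obj.name == "Cube")
    {
      // do something specific to the cube button...
    }
    base.OnTapped(obj, eventArgs);
  }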

Quite commonly in a Mixed Reality app, I’d like to use speech in addition to moving my gaze and air-tapping and so it looks like the MRDL makes that easy in that I can add;

image

although I found that when I did this I hit a snag: the ColorReceiver that I’d previously added seemed to work fine when invoked by an air-tap but didn’t work when invoked by the speech command ‘click’, and that seemed to come down to this runtime error;

Failed to call function OnTapped of class ColorReceiver
Calling function OnTapped with no parameters but the function requires 2.

so maybe that’s a bug or maybe I’m misunderstanding how it’s meant to work, but if I take the ColorReceiver away and handle the button’s OnButtonPressed event myself then I still see something similar – i.e. my code runs when I tap on the button but not when I say “click”; instead, I see the debug output saying;

Keyword handler called in GameObject for keyword click with confidence level High

and I saw the same thing if I went back to having my code be an InteractionReceiver in that the air-tap seems to result in one call whereas the voice command “click” seems to result in another as below;

public class Startup : InteractionReceiver
{
  protected override void OnTapped(GameObject obj, InteractionManager.InteractionEventArgs eventArgs)
  {
    // This is called when I air-tap
    base.OnTapped(obj, eventArgs);
  }
  void OnTapped()
  {
    // This is called when I say 'click'
  }
}

and, again, I’m unsure whether that’s my misunderstanding or whether it’s not quite working right, but I figured I’d move on as I’d noticed that the “Compound Button Speech” script takes its keyword from one of two sources – one is the local override I’ve used above where I can simply set the text, while the other looks for a Compound Button Text;

image

and so I added one of those in, chose the provided profile and fed it a 3D TextMesh, then selected that I wanted to override the Offset property and just dragged my text mesh around a little in Unity to try and position it ‘sensibly’;

image

and that all seemed to work fine. It’d be great to have my button give audible cues when the user interacts with it, so I also added a Compound Button Sounds script, which wants a ButtonSoundProfile, and I played with creating my own versus using the one that ships in the library;

image

and that worked fine once I’d managed to figure out how to get the sounds to come out properly over the holographic remoting app from Unity.

At this point, I’d added quite a lot of scripts to my original cube and so I reset things and went and grabbed a 3D object from Remix3D, this firefighter;

image

and dropped it into my scene as a child of my GameObject;

image

and then added back the Compound Button script and a Box Collider, then added the Compound Button Mesh script and tried to set up some scale and colour changes based on the button states within;

image

and that seemed to work out fine – i.e. when I pressed on the button, the firefighter got scaled up and the mesh got rendered in red;

image

so, that’s all really useful.

I then threw away my scene again, went back to just having a cube and set up a couple of animations – one which rotated the cube by 45 degrees and another which put it back to 0 – and I built an animator around those with the transitions triggered by a change in a Targeted boolean parameter;

image

and then dragged an Animator and a Compound Button Anim component onto my GameObject;

image

and that seemed to give me the basics of having my cube animate into rotation when I focus on it and animate back when I take the focus away from it – a very useful tool to have in the toolbox 🙂 (there’s a rough sketch of the idea just below). I noticed that Object Button seems to do something similar except that it looks to model the various states via a set of different prefabs – i.e.

image
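Returning to the animation approach for a moment, here’s the rough sketch I mentioned. It isn’t how the Compound Button Anim component is actually implemented (I haven’t checked), just my illustration of flipping an animator parameter named ‘Targeted’ from the focus messages described earlier;

using UnityEngine;

public class TargetedAnimSketch : MonoBehaviour
{
  private Animator animator;

  private void Start()
  {
    this.animator = this.GetComponent<Animator>();
  }

  // focus messages arrive via Unity's SendMessage, as described earlier in the post
  private void FocusEnter()
  {
    this.animator.SetBool("Targeted", true);
  }

  private void FocusExit()
  {
    this.animator.SetBool("Targeted", false);
  }
}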

The last one of these Compound Button X types that I wanted to get my head around for this post was the Compound Button Icon type. This feels a little bit like the Text variant in that I can create an empty GameObject and then make it into a Compound Button (+Icon) as below;

image

and this seems to be driven off a ButtonIconProfile which can either be font based or texture based so I set up one that was font based;

image

and then there’s a need here for something to render the icon and I found it “interesting” to add a Cube as a child of my button object and then toggle the dropdown here to select my Cube as the rendering object. The component made a few changes on my behalf!

Here’s the before picture of my cube;

image

and this is what happens to it when I choose it as the renderer for my icon;

image

so – the mesh filter has gone and the material/shader has been changed for me and I can then go back to the Compound Button Icon component and choose the icon;

image

Very cool.

Wrapping Up

Having done a bit of exploring, I can now start to get some idea of what the tooling is doing if I use an option like;

image

and create myself a “Rectangle Button” which gives me all the glory of;

image

and so I’ve got compound, mesh, sounds, icon, text and speech all in one go, ready to be used, and it takes me only a second or two to get buttons created;

image

and there’s a lot of flexibility in there.

As I said at the start of the post, I’m just experimenting here and I may well be getting things wrong so feel free to let me know and I’ll carry on my exploring in later posts…

1 thought on “Exploring the Mixed Reality Design Labs–Experiment #2

  1. I really enjoy your blog posts, some really great info on here. Keep up the good work!

Comments are closed.