HoloLens, Unity and Recognition with Vuforia (Part 2)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this post;

HoloLens, Unity and Recognition with Vuforia (Part 1)

I wanted to see if I could do some type of ‘custom’ object recognition using HoloLens and Vuforia starting from scratch rather than starting from the pre-baked Vuforia sample and following the steps outlined here;


I have this scenario in my head where I could combine HoloLens and Sphero as I did in this post;

Windows 10, UWP and Sphero–Bringing 2D UWP Demo Code to HoloLens

but then bring in Vuforia to locate the Sphero ball within a scene – that would seem to give me the pieces for a HoloLens app where the Sphero ball did things like follow the user around the room.

That’s where I’m trying to head but I’m not yet sure whether Vuforia can recognise spheres for me so, in the meantime, I should perhaps just focus on seeing whether I can get a custom use of Vuforia to work – it’s always best to ‘start small’ and work towards the final goal.

Picking an Object to Recognise

First off, I needed to decide which object I wanted to recognise in my scene and so I went to the kitchen and found this box of biscuits (it’s Xmas!) which are very nice, by the way;


and so I thought I’d see if I could get Vuforia to identify that box of biscuits for me.

Creating a Vuforia Target Database

My first step here was to go to Vuforia’s license manager page and make sure that I’d created an app;


which I’d done for my previous post and I then went to the ‘Target Manager’ tab and created a target database;


and you’re then presented with a choice of whether you are trying to do [device/cloud/VuMark] recognition, so I chose device. Once I’d got a database, I could add targets to it of type [image/cuboid/cylinder/object], so I chose cuboid and provided some details of my biscuits;


What I really wasn’t sure of here was which dimensions I was supposed to be using and whether they needed to relate to the real-world size of the box. I read one or two forum posts (like this one) but they still didn’t leave me with much clarity around the units involved – my box is 8cm wide by 7cm deep by 15cm tall but I wasn’t confident that I was telling the Target Manager this correctly.

Having done this part, my biscuit box details are flagged as ‘incomplete’;


and so I went and provided more details and then got a bit bogged down because the uploader seemed to want images that matched the aspect ratios that I had given it – e.g. 8/15 ≈ 0.533 whereas my image was 1417/2701 ≈ 0.525 – so I resized the images a little to try and come closer to the right aspect ratio and the tool ultimately backed down and let me win ;)
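As a quick sanity check on that aspect-ratio maths (this is just arithmetic, not anything from the Vuforia tooling);

```python
# Quick check of the aspect ratios mentioned above.
# The box's front face is 8cm wide by 15cm tall.
def aspect(width, height):
    return width / height

target = aspect(8, 15)       # the ratio the Target Manager expects
image = aspect(1417, 2701)   # the ratio my photo actually had

print(round(target, 3), round(image, 3))  # 0.533 vs 0.525

# the pixel height a 1417px-wide image would need to match 8:15 exactly
needed_height = round(1417 * 15 / 8)
print(needed_height)  # 2657
```

which shows why the uploader complained – the photo was slightly ‘too tall’ for the dimensions I’d entered.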


At that point, it looks like I can use the website to download the database containing this set of (1) targets;


and the download here gave me a Unity package (called TestDevice), so it was time to move across to Unity and see if I could do something with it.

I need to admit at this point that I went around a ‘loop or two’ coming up with the walkthrough below – it maybe took me 4-6 hours as I was finding that my projects weren’t working and some of that was due to me being new at using Vuforia and some of it was just one of those classic situations where you have one lump of code that works and another lump of code that doesn’t and you’re trying to figure out the delta between the two.

Ultimately, the set of following (seemingly simple!) steps are what dropped out of my experimenting…

Making a Unity Project and Importing Packages

I made a blank HoloLens project in Unity 5.5 and then imported 3 different Unity packages as below;

  1. HoloToolkit-Unity: I imported the Build, Input, UI, Spatial Mapping and Utilities pieces.
  2. Vuforia SDK 6-2-6: I imported everything other than the pieces clearly labelled iOS and Android, much as I did in my previous blog post.
  3. The Unity package called TestDevice that I’d just downloaded from the Vuforia site containing my Shortbread model.

I then set up my project for HoloLens development as I do at the start of this video so as to configure the project and the scene for HoloLens development.

I also made sure that I had switched on the UWP capabilities to allow internet connection, spatial perception and webcam, although at the time of writing I’m unsure whether I need them all.
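For reference, those three capabilities end up in the generated Package.appxmanifest as something along these lines (a sketch – the exact namespaces depend on the manifest version Unity generates);

```xml
<!-- The relevant entries inside the <Capabilities> element -->
<Capabilities>
  <Capability Name="internetClient" />
  <uap2:Capability Name="spatialPerception" />
  <DeviceCapability Name="webcam" />
</Capabilities>
```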

I also made sure that ‘Virtual Reality Supported’ was switched on (as usual);


Setting up the Vuforia Configuration

I then went and used the (added) Vuforia menu in Unity to open up the configuration and I changed the highlighted options below;


Setting up the Vuforia Camera

I then dragged out the Vuforia prefab ARCamera to my scene and made sure that I’d set it up as recommended by altering the highlighted options below;


and so that tells the Vuforia camera about the HoloLens camera.

Setting up a Multi Target Behavior

At this point, I got a little stuck.

The essence of this was really a ‘category error’ on my part: because I thought that I was doing ‘object recognition’, I kept adding an ‘Object Target Behaviour’ script and attempting to point it at my database, and the editor didn’t like me trying to do it;


So I went off and read the doc page here and realised that, while I may be thinking of my box of biscuits as ‘object recognition’, Vuforia doesn’t really think of it that way and reserves ‘object recognition’ for scenarios where I’ve actually scanned a 3D object (the example on the doc page is a toy).

I then spent a little time wondering whether Vuforia might want me to do ‘image recognition’ but that didn’t seem to fit my cuboid (box of biscuits) scenario. Reading this doc page made me realise that Vuforia calls this cuboid scenario a ‘multi target’ and so I was supposed to be adding a ‘Multi Target Behaviour’ onto my object. Things then became a bit clearer and I later realised that if I’d read this document beforehand I might have had an easier time.

I dragged out a ‘Multi Target’ prefab onto my scene;


and configured it to use my database and my object and switched on the extended tracking;


Comparing with the Vuforia Unity Sample

I didn’t get everything working ‘first time’ and so I had reason to compare what I was doing here with the original Vuforia sample that I looked at in my previous blog post, and I noticed that the Vuforia camera had a script on it;


which seems to set the frame rate down to 30fps. I’m unsure whether this is necessary but, at the time of writing, my demo seems to be running without this script – perhaps it would run better with it? That’s still ‘To Be Determined’.
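If you wanted the same effect without the sample’s script, a minimal sketch would be something like this (my own hypothetical component, not the sample’s – the sample’s script name and contents may differ);

```csharp
using UnityEngine;

// Hedged sketch: cap the app at 30fps, similar in spirit to the
// frame-rate script on the sample's Vuforia camera.
public class CapFrameRate : MonoBehaviour
{
    void Start()
    {
        // Stop vsync from overriding targetFrameRate
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 30;
    }
}
```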

First Trial

With that setup I deployed to the device and watched the debug output in Visual Studio to see if I was able to track my Shortbread target.

Note – at this point, I’d actually been around this loop 4-5 times as I’d messed things up a few times and tried out various different routes before settling on the set of steps that I’ve written up here, which now feel fairly simple and obvious compared to some of the things that I somewhat randomly tried to get this ‘hello world’ up and running.

And, sure enough, I spotted the debug spew that seemed to show that Vuforia was spotting my box of shortbread;


This comes from the Default Trackable Event Handler script that comes as part of the MultiTarget prefab;


which picks up the TrackableBehaviour component and has some default event handlers which switch on/off any child renderers and colliders as the object is tracked/lost, and it seemed to make sense to use those to add something into the scene to visualise what Vuforia was tracking.
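The shape of that handler is roughly as below – a sketch along the lines of Vuforia’s DefaultTrackableEventHandler rather than a copy of it (the class name here is my own);

```csharp
using UnityEngine;
using Vuforia;

// Sketch of a trackable event handler in the style of Vuforia's
// DefaultTrackableEventHandler - toggles child renderers and
// colliders as tracking is found/lost.
public class TrackedVisualiser : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackableBehaviour;

    void Start()
    {
        trackableBehaviour = GetComponent<TrackableBehaviour>();

        if (trackableBehaviour != null)
            trackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool visible =
            newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Show/hide whatever visualisation is parented under the target
        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = visible;

        foreach (var c in GetComponentsInChildren<Collider>(true))
            c.enabled = visible;

        Debug.Log(trackableBehaviour.TrackableName +
            (visible ? " found" : " lost"));
    }
}
```

and that’s what makes a child object (like the cube below) appear/disappear as the target is tracked.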

Highlighting the Tracked Object

I went back into Unity and added a simple cube to surround the Shortbread box making it slightly larger than the biscuit box that it is meant to surround;


and I went and borrowed a wireframe shader from the UCLA Game Lab;


and used that to shade my cube;


and then tried that out on HoloLens, capturing the output below where you can see that the biscuit box is picked up pretty well;

So, in the end, getting that basic scenario up and running wasn’t too difficult at all. I’d like to try something ‘more imaginative’ as a follow-on but that’ll have to be in another post…

HoloLens, Unity and Recognition with Vuforia (Part 1)


HoloLens and the Windows Holographic platform that it runs provide a lot of capability to immerse your user in mixed reality, but there are a number of scenarios where you might need to identify real-world objects in the user’s environment and that’s something the platform doesn’t have built in – although, from scanning this API list, it looks like the UWP APIs for face detection and for OCR should be present, though I have yet to try them myself on HoloLens.

Beyond that, you might reach out to the cloud and use something like Cognitive Services to do some of the work although, clearly, you’d have to apply some filtering there as it’s not practical in terms of performance or price to call a cloud service at 60 frames per second. If you had a way of selecting the right images to send to the cloud, though, then Cognitive Services can definitely help with its vision APIs (and perhaps also with speech, knowledge and language).
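That frame-selection idea can be as simple as a rate gate that forwards at most one frame every so often – a hypothetical helper here, not part of any SDK;

```python
import time

class FrameGate:
    """Forward at most one frame every `interval` seconds - a simple
    way to avoid calling a cloud vision API once per rendered frame."""

    def __init__(self, interval):
        self.interval = interval
        self.last = float("-inf")

    def should_send(self, now=None):
        # `now` can be injected for testing; defaults to a monotonic clock
        now = time.monotonic() if now is None else now
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False
```

so at 60fps with a 1-second interval, 59 out of every 60 frames never leave the device.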

If you need something that happens on the device in real-time then a possibility is an SDK like Vuforia – I’ve known of this SDK for quite a long time but I’ve never tried it and so I wanted to see if I could get something working with the Vuforia SDK to recognise objects in a holographic app. Vuforia can recognise different types of things as listed in the ‘Features’ section here.

The first step with Vuforia is to go and get the SDK;


and, at the time of writing, I downloaded the 6.2 SDK for Unity which brings down a Unity package onto my disk.

I then went and got a development license key;

Vuforia License Keys

There’s then a developer guide on the Vuforia site;

Developing Vuforia Apps for HoloLens

and that feels largely like a grouping together of these docs on the HoloLens developer site (or vice versa);

Vuforia development overview

Getting started with Vuforia

The role of extended tracking

Binding the HoloLens Scene Camera

Building and executing a Vuforia app for HoloLens

and so I had a pretty good read of those.

Trying the Vuforia Sample

I went and downloaded the Unity sample from the ‘Digital Eyewear’ section of this page;

Vuforia Samples

and that gives you a Unity package, so I made a blank 3D project in Unity 5.5, brought in the HoloToolkit-Unity (as in this post), set up my project, scene and capability settings (including the spatial perception capability) and then brought in the Vuforia package.

In importing the Vuforia samples package, I wasn’t 100% sure what I did or didn’t need and so I went for most of it, missing out only the pieces that seemed specific to iOS/Android;


and then I found a scene within the sample named ‘Vuforia-2-Hololens.unity’ and so I opened that up and tried to see if I could build the project and deploy it to my device. That worked out… just fine, except that I realised I’d forgotten to put my newly acquired license key into the configuration;


and so I tried again with the license key plugged in and then I got a little bit stuck in that the sample scene that I was trying to view seemed to contain a couple of teapots;


but when I ran it I seemed to see nothing and I wasn’t entirely sure what was meant to be happening – i.e. I had the sample but I didn’t know what I was meant to do with it.

In the docs here, there is a mention that I’m supposed to;

“follow the steps in the Building and Executing the sample section”

but I couldn’t find a ‘Building and Executing the Sample’ section. I made sure that all 3 of the sample scenes were part of my build settings;


but this still left me with a blank scene and, frankly, I spent a good 5-10 minutes scratching my head wondering what the sample was supposed to do. I finally figured that maybe the sample was waiting for me to do something – that is, to present it with an image that it could recognise – as those 2 teapots sit on two images named ‘stones’ and ‘chips’, and so I did a web search and found the Vuforia PDF containing those images;

Vuforia Target Images

and I printed them out in colour and put one on my desk here and, sure enough, Vuforia did its thing and a teapot appeared;


and so the main difficulty I had in getting the sample to work was in understanding what it was trying to do – i.e. to look for one of the two images that it knew about and draw a teapot on top of whichever one it located. I printed the other image out as well and got a differently coloured teapot when looking at that image.

I think the next steps would be to try to set up an example from scratch and figure out whether I can do a different kind of object/image recognition – I’ll put that into another post to avoid this one getting overly long…