NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.
Following up on this earlier post around Windows ML;
First Experiment with Image Classification on Windows ML from UWP
At the end of that previous post I'd said that I'd be really keen to try the code that I'd written on HoloLens but, at the time, the required Windows 10 "Redstone 4" preview wasn't available for HoloLens.
Things change quickly these days and just a few days later there’s a preview of “Redstone 4” available for HoloLens documented here;
and I followed the instructions there and very quickly had that preview operating system running on my HoloLens.
The first thing that I then wanted to do was to take the code that I’d written for that previous post around WindowsML and try it out on HoloLens even though it was a 2D XAML app rather than a 3D immersive app.
My hope was that it would “just work”. Did it?
No, of course not. It's software 
I ran the code inside of Visual Studio and immediately got;
Oh dear. But…I suspected that this might be because I had used Windows 10 SDK Preview version 17110 to build this app in the first place and perhaps that wasn’t going to work so well on a device that is now running a 17123.* build number.
So, I went back to the Windows Insider site and downloaded the Preview SDK labelled 10.0.17125.1000 to see if that changed things for me and I retargeted my application in Visual Studio to set its Target build to 17125 and its minimum build to 16299 before doing a complete rebuild and redeploy.
I had to set the minimum build to something below 17123 as that is what the device is now running.
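For reference, that retarget boils down to two properties in the UWP project file. This is just a sketch of the relevant fragment, not the actual .csproj from the project (the rest of the file is whatever the project template generated);

```xml
<!-- Fragment of the UWP .csproj - only the two version properties shown.
     17125 matches the 10.0.17125.1000 preview SDK target; 16299 is the
     minimum so that a device running a 17123.* build can still run it. -->
<PropertyGroup>
  <TargetPlatformVersion>10.0.17125.0</TargetPlatformVersion>
  <TargetPlatformMinVersion>10.0.16299.0</TargetPlatformMinVersion>
</PropertyGroup>
```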
Once again, I got the exact same error and so I set about trying to debug. I immediately noticed that my debugger wasn't stepping nicely, which prompted me to notice for the first time that VS had automatically selected the Release build configuration. That jarred a memory; I remembered seeing this exact same exception when trying to run in Release mode on the PC when I'd first written the code and I hadn't figured it out at the time, putting it down to perhaps something in the preview SDK.
So, perhaps HoloLens wasn't behaving any differently from the PC here? I switched to the Debug configuration and, sure enough, the code doesn't hit that marshalling exception and runs fine, although I'm not sure yet about that 'average time' value that I'm calculating – that needs some looking into. Here's a screenshot of the app staring at a picture of a dachshund;
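As an aside, that 'average time' is just a running mean of how long each model evaluation takes. The sketch below shows the general shape of that calculation rather than the actual code from the project – the member names here (frameCount, totalEvaluationMs and so on) are my own illustration;

```csharp
// Sketch only - timing one WinML evaluation and folding it into a
// running average across all frames processed so far.
var stopwatch = System.Diagnostics.Stopwatch.StartNew();

// Hypothetical call representing the model evaluation for one camera frame.
var results = await this.model.EvaluateAsync(this.inputs);

stopwatch.Stop();

this.frameCount++;
this.totalEvaluationMs += stopwatch.Elapsed.TotalMilliseconds;

var averageMs = this.totalEvaluationMs / this.frameCount;
```

One thing worth checking with a calculation like this is whether the timing accidentally includes the work of grabbing and converting the camera frame as well as the evaluation itself – which could explain a suspicious-looking number.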
The screenshot is a bit weird because I cropped it out of a video recording and also because I’m holding up a picture of a dachshund in front of the app which is then displaying the view from its own webcam which contains the picture of the dachshund so it all gets a little bit recursive.
Here’s the app looking at a picture of an alsatian;
and it’s a little less sure about this pony;
So, for a quick experiment this is great in that I've taken the exact same code and the exact same model from the PC and it works 'as is' on these preview pieces on HoloLens. Clearly, I could do with taking a look at the time it seems to be taking to process frames, but I suspect that's down to me running debug bits and/or the way in which I'm grabbing frames from the camera.
For me, though, it's a bit of a challenge to have this 2D XAML app get in the way of what the camera is actually looking at, so the next step would be to see if I can put this into an immersive app rather than a 2D app – that's perhaps where I'd follow up with a later blog post.
For this post, the code is just where it was for the previous post – nothing has changed.
By the way – I still don't know what happens if I point the model at an actual dachshund/dog/pony; I need to get some of those for testing. Additionally, I suspect that once the code is comfortable with finding a particular object, the next question is likely to involve locating it in the 3D scene. That might need some kind of correlation between the colour image and a depth image and I'm not sure whether that's achievable – I'd need to think about that.