Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available – you should always check out the official developer site for the product documentation.

Following up from this earlier post;

First Experiment with Image Classification on Windows ML from UWP

around Windows ML;

AI Platform for Windows Developers

at the end of that previous post I’d said that I would be really keen to try the code that I’d written on HoloLens but, at the time of that post, the required Windows 10 “Redstone 4” preview wasn’t available for HoloLens.

Things change quickly these days and, just a few days later, there’s a preview of “Redstone 4” available for HoloLens, documented here;

HoloLens RS4 Preview

and I followed the instructions there and very quickly had that preview operating system running on my HoloLens.

The first thing that I then wanted to do was to take the code that I’d written for that previous post around Windows ML and try it out on HoloLens, even though it was a 2D XAML app rather than a 3D immersive app.

My hope was that it would “just work”. Did it?

No, of course not – it’s software.

I ran the code inside of Visual Studio and immediately got;

[screenshot: the crash – an exception dialog in Visual Studio]

Oh dear. But…I suspected that this might be because I had used Windows 10 SDK Preview version 17110 to build this app in the first place and perhaps that wasn’t going to work so well on a device that is now running a 17123.* build number.

So, I went back to the Windows Insider site and downloaded the Preview SDK labelled 10.0.17125.1000 to see if that changed things for me. I retargeted my application in Visual Studio, setting its target build to 17125 and its minimum build to 16299, before doing a complete rebuild and redeploy.

I had to set the minimum build to something below 17123 as that is what the device is now running.

Once again, I got the exact same error and so I set about trying to debug. I immediately noticed that my debugger wasn’t stepping nicely, which prompted me to notice for the first time that Visual Studio had automatically selected the release build configuration. That jarred a memory – I remembered that I had seen this exact same exception trying to run in release mode on the PC when I’d first written the code and had never figured it out, putting it down to perhaps something in the preview SDK.

So, perhaps HoloLens wasn’t behaving any differently from the PC here? I switched to the debug configuration and, sure enough, the code doesn’t hit that marshalling exception and runs fine, although I’m not sure yet about that ‘average time’ value that I’m calculating – that needs some looking into. Here’s a screenshot of the app staring at a picture of a dachshund;

[screenshot: the app classifying a picture of a dachshund]

The screenshot is a bit weird because I cropped it out of a video recording and also because I’m holding up a picture of a dachshund in front of the app, which is then displaying the view from its own webcam containing that same picture of the dachshund, so it all gets a little bit recursive.

Here’s the app looking at a picture of an alsatian;

[screenshot: the app classifying a picture of an alsatian]

and it’s a little less sure about this pony;

[screenshot: the app classifying a picture of a pony]

So, for a quick experiment this is great in that I’ve taken the exact same code and the exact same model from the PC and it works ‘as is’ on these preview pieces on HoloLens. Clearly, I could do with taking a look at the time it seems to be taking to process frames, but I suspect that’s to do with me running debug bits and/or the way in which I’m grabbing frames from the camera.

For me, it’s a bit of a challenge though to have this 2D XAML app get in the way of what the camera is actually looking at, so the next step would be to see if I can put this into an immersive app rather than a 2D app – that’s perhaps where I’d follow up with a later blog post.

For this post, the code is just where it was for the previous post – nothing has changed.

By the way – I still don’t know what happens if I point the model at an actual dachshund/dog/pony; I need to get some of those for testing. Additionally, I suspect that once the code is comfortable with being able to find a particular object, the next question is likely to involve locating it in the 3D scene. That might involve some kind of correlation between the colour image and a depth image and I’m not sure whether that’s something that’s achievable – I’d need to think about that.

Rough Notes on UWP and webRTC (Part 3)

This is a follow-on from my previous post around taking small steps with webRTC and UWP.

At the end of that post, I had some scrappy code which was fairly fixed in function: a small UWP app which would use the UWP webRTC library to connect to a signalling service and then begin a conversation with a peer that was also connected to the same signalling service.

The signalling service in question had to be the one provided with the UWP webRTC bits, and the easiest way to test that my app was doing something was to run it against the PeerCC sample, which also ships with the UWP webRTC bits and does way more than my app by demonstrating lots of the functionality that’s present in UWP webRTC.

The links to all the webRTC pieces that I’m referring to are in the previous 2 posts on this topic.

Tidying Up

The code that I had in the signalling branch of this github repo at the end of the previous post was quite messy and not really in a position to be re-used, so I spent a little time pulling that code apart, refactoring some of the functionality behind interfaces and reducing the implicit dependencies in order to try and move the code towards being a little bit more re-usable (even if the functionality it currently implements isn’t of much actual use to a real user – I’m just experimenting).

What I was trying to move towards was some code that I knew sort of worked in this XAML-based UWP app and that I could then lift out and re-use in a non-XAML-based UWP app (i.e. a Unity app), so that I would have some control over the knowns and unknowns in trying out that process.

What I needed to do then was make sure that in refactoring things, I ended up with code that was clearly abstracted from its dependencies on anything in the XAML layer.

Firstly, I refactored the solution into two projects – a class library and an app project which references it;

[screenshot: the solution refactored into an app project and a class library project]

and then I took some of the pieces of functionality that I had in there and abstracted them out into a set of interfaces;

[screenshot: the set of interfaces added to the class library]

with a view to making the dependencies between these interfaces explicit and the implementation pluggable.

This included putting the code which provides signalling – by invoking the signalling service supplied with the original sample – behind an interface. Note that I’m not at all trying to come up with a generic interface that could generally represent the notion of signalling in webRTC but, instead, I’m just trying to put an interface on to the existing signalling code that I took (almost) entirely from the PeerCC sample project in the UWP webRTC bits.

[screenshot: the ISignallingService interface]
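I won’t paste the whole thing here but, purely as an illustrative sketch of the sort of shape that wrapper ends up with (the member names and signatures below are my own guesses for illustration rather than the exact definitions in the repo), it’s along the lines of;

    using System;
    using System.Threading.Tasks;

    // Illustrative sketch only – the real ISignallingService in the repo wraps the
    // signalling code lifted from the PeerCC sample and may differ in detail.
    public interface ISignallingService
    {
        // Peers arriving at/leaving the signalling service, and messages
        // (e.g. SDP offers/answers, ICE candidates) arriving from a peer.
        event EventHandler<int> PeerConnected;
        event EventHandler<int> PeerDisconnected;
        event EventHandler<string> MessageFromPeer;

        // Connect to/disconnect from the signalling service itself.
        Task<bool> ConnectAsync(string localName, string ipAddress, int port);
        Task DisconnectAsync();

        // Send a signalling message to a specific peer.
        Task<bool> SendToPeerAsync(int peerId, string message);
    }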

The other interfaces/services that I added here are hopefully named ‘reasonably well’ in terms of the functionality that they represent, with perhaps the one that’s not quite so obvious being the IConversationManager.

This interface is just my attempt to codify the minimum functionality that I need to bring the other interface implementations together in order to get any kind of conversation over webRTC up and running from my little sample app as it stands. That IConversationManager interface right now just looks as below;

[screenshot: the IConversationManager interface]
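The screenshot doesn’t copy out very well so, reconstructing it roughly from the description here and from the calling code later in this post (the parameter types are my best guesses rather than the exact definition), the shape is something like;

    using System.Threading.Tasks;

    // Rough reconstruction from the surrounding description and from the calling
    // code in MainPage.xaml.cs – not necessarily the exact definition in the repo.
    public interface IConversationManager
    {
        // Should this instance aggressively start a conversation with the first
        // peer that it sees, or wait for a remote peer to begin one?
        bool IsInitiator { get; set; }

        // Provide the name that the local peer wants to be represented by.
        Task InitialiseAsync(string localPeerName);

        // Connect to the signalling service at the given address/port, returning
        // whether the connection succeeded.
        Task<bool> ConnectToSignallingAsync(string ipAddress, int port);
    }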

and so the idea here is that a consumer of an IConversationManager can simply;

  • Tell the manager whether it is meant to initiate conversations or simply wait for a remote peer to begin a conversation with it
    • In terms of initiating conversations – the code is ‘aggressive’ in that it simply finds the first peer that it sees provided by the signalling service and attempts to begin a conversation with it.
  • Call InitialiseAsync providing the name that the local peer wants to be represented by.
  • Call ConnectToSignallingAsync with the IP Address and port where the signalling service is to be found.

From there, the implementation jumps in and tries to bring together all the right pieces to get a conversation flowing.

In making these abstractions, I found two places where I had to apply a little bit of thought;

  • The UWP webRTC pieces need initialising with a Dispatcher object and so I abstracted that out into an interface so that an implementation can be injected into the underlying layer.
  • There is a need at some point to do some work with UI objects to represent media streams. In the code to date, this has meant working with XAML MediaElements but in other scenarios (e.g. Unity UI) that wouldn’t work.

In order to try and abstract the library code from these media pieces, I made an IMediaManager interface with the intention of writing a different implementation for each UI layer. So, to bring this library up inside of a Unity app, I’d at least need to provide a Unity version of the highlighted implementation pieces below, which are about IMediaManager in a XAML UI world;

[screenshot: the XAML-specific implementation pieces behind IMediaManager]
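Neither of those pieces reads well from a screenshot so, purely as a sketch of the kind of shape I mean (the names and signatures here are illustrative rather than the exact code in the repo), the dispatcher and media abstractions might look roughly like;

    using System.Threading.Tasks;
    using Windows.UI.Core;

    // Illustrative only – hides the 'webRTC needs a Dispatcher at initialisation time'
    // dependency behind an interface; the property name is my guess.
    public interface IDispatcherProvider
    {
        CoreDispatcher Dispatcher { get; }
    }

    // Illustrative only – anything that needs a UI object to render media sits behind
    // this so that the XAML MediaElement-based implementation can later be swapped for
    // a Unity one. The 'object' parameters stand in for whichever stream type the
    // webRTC library actually hands back.
    public interface IMediaManager
    {
        Task CreateLocalMediaAsync();
        Task AddRemoteStreamAsync(object remoteStream);
        Task RemoveRemoteStreamAsync(object remoteStream);
    }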

My main project took a dependency on autofac to provide a container from which to serve up the implementations of my interfaces. I also did a cheap trick of providing my own “container”, embedded into the library and named CheapContainer, in case the library was going to be used in a situation where autofac or some other IoC container wasn’t immediately available.
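The CheapContainer in the repo really is cheap – something along these lines would do the same basic ‘map an interface to an implementation type and hand back singletons’ job (this is a sketch rather than the exact code, and a fuller version would also resolve constructor parameters recursively);

    using System;
    using System.Collections.Generic;

    // A deliberately minimal 'IoC container' sketch: register an implementation type
    // against an interface and hand back one shared instance per interface on resolve.
    public static class CheapContainer
    {
        static readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();
        static readonly Dictionary<Type, object> instances = new Dictionary<Type, object>();

        public static void Register<TInterface, TImplementation>()
            where TImplementation : class, TInterface
        {
            registrations[typeof(TInterface)] = typeof(TImplementation);
        }

        public static TInterface Resolve<TInterface>()
        {
            var interfaceType = typeof(TInterface);

            if (!instances.TryGetValue(interfaceType, out var instance))
            {
                // Created on first request only so that every subsequent resolve gets the
                // same (singleton) instance – mirroring SingleInstance() in the autofac version.
                instance = Activator.CreateInstance(registrations[interfaceType]);
                instances[interfaceType] = instance;
            }
            return ((TInterface)instance);
        }
    }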

Configuration of the container then moves into my App.xaml.cs file and is fairly simple; I wrote it twice, once for autofac and once using my own CheapContainer;

#if !USE_CHEAP_CONTAINER
        Autofac.IContainer Container
        {
            get
            {
                if (this.iocContainer == null)
                {
                    this.BuildContainer();
                }
                return (this.iocContainer);
            }
        }
#endif
        void BuildContainer()
        {
#if USE_CHEAP_CONTAINER
            // Register each interface against its implementation with my own minimal, embedded container.
            CheapContainer.Register<ISignallingService, Signaller>();
            CheapContainer.Register<IDispatcherProvider, XamlMediaElementProvider>();
            CheapContainer.Register<IXamlMediaElementProvider, XamlMediaElementProvider>();
            CheapContainer.Register<IMediaManager, XamlMediaElementMediaManager>();
            CheapContainer.Register<IPeerManager, PeerManager>();
            CheapContainer.Register<IConversationManager, ConversationManager>();
#else
            // Register the same implementations with autofac, each as a single instance.
            var builder = new ContainerBuilder();
            builder.RegisterType<Signaller>().As<ISignallingService>().SingleInstance();

            builder.RegisterType<XamlMediaElementProvider>().As<IXamlMediaElementProvider>().As<IDispatcherProvider>().SingleInstance();

            builder.RegisterType<XamlMediaElementMediaManager>().As<IMediaManager>().SingleInstance();
            builder.RegisterType<PeerManager>().As<IPeerManager>().SingleInstance();
            builder.RegisterType<ConversationManager>().As<IConversationManager>().SingleInstance();
            builder.RegisterType<MainPage>().AsSelf().SingleInstance();
            this.iocContainer = builder.Build();
#endif
        }
#if !USE_CHEAP_CONTAINER
        Autofac.IContainer iocContainer;
#endif

and the code which now lives inside of my MainPage.xaml.cs file to actually get the webRTC conversation up and running is reduced down to almost nothing;

        async void OnConnectToSignallingAsync()
        {
            // Provide the name that the local peer wants to be represented by.
            await this.conversationManager.InitialiseAsync(this.addressDetails.HostName);

            this.conversationManager.IsInitiator = this.isInitiator;

            // Connect to the signalling service and (if we're the initiator) try to
            // begin a conversation with the first peer that shows up there.
            this.HasConnected = await this.conversationManager.ConnectToSignallingAsync(
                this.addressDetails.IPAddress, this.addressDetails.Port);
        }

and so that seems a lot simpler, neater and more re-usable than what I’d had at the end of the previous blog post.

In subsequent posts, I’m going to see if I can now re-use this library inside of other environments (e.g. Unity) so as to bring this same (very limited) webRTC functionality that I’ve been playing with to that environment.

“Hello World” Mixed Reality Demo from the UK TechKnowDay Event 2018

I had the privilege of being invited to speak at the UK TechKnowDay Event today as part of International Women’s Day;

and I went along with my colleague, Pete, and talked to the attendees about Windows Mixed Reality.

As part of that, I’d put together a very simple “Hello World” demo involving a 3D model of an avatar who appeared when air-tapped on a HoloLens and then fell with a parachute to the floor. This is really just a way of showing the basics of using Unity, the Mixed Reality Toolkit and Visual Studio to make something that runs on HoloLens and which blends the digital with the physical.

At the event, we shortened the demo because we were running a little low on time, so I promised to put the materials on the web somewhere – that’s what this post is about.

First, I made 3 models using Paint 3D and so I wanted to include that little video here – it’s intended to be spoken over so there’s no audio on it;

and then there’s a little video showing me working in Unity, bringing in the assets from Paint 3D and adding some very, very limited interactivity to them using Unity and the Mixed Reality Toolkit.

The way the app is supposed to work is that an air tap will cause the creation of an instance of the avatar. She will then fall under (reduced) gravity, landing on a surface, at which point her parachute should disappear, and then she might sort of ‘snowboard’ to a stop, where her snowboard should also disappear.
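The actual scripts are in the repo linked below but, just to give a flavour of the kind of component involved, that behaviour might be approximated with something like the sketch here (the parachute/snowboard fields and the timings are made up for illustration – it isn’t the exact code from the repo);

    using UnityEngine;

    // Sketch only – illustrates the 'fall slowly, then shed the parachute and the
    // snowboard' idea rather than reproducing the script in the github repo.
    [RequireComponent(typeof(Rigidbody))]
    public class ParachuteDescent : MonoBehaviour
    {
        // Child objects assigned in the editor (hypothetical names).
        public GameObject parachute;
        public GameObject snowboard;

        // Fraction of normal gravity to apply so that she floats down slowly.
        public float gravityScale = 0.1f;

        Rigidbody body;
        bool hasLanded;

        void Start()
        {
            // Turn off built-in gravity and apply a reduced version of it ourselves.
            this.body = this.GetComponent<Rigidbody>();
            this.body.useGravity = false;
        }

        void FixedUpdate()
        {
            if (!this.hasLanded)
            {
                this.body.AddForce(Physics.gravity * this.gravityScale, ForceMode.Acceleration);
            }
        }

        void OnCollisionEnter(Collision collision)
        {
            if (!this.hasLanded)
            {
                // Landed on a surface (e.g. the spatial mapping mesh) – lose the parachute
                // immediately and the snowboard a couple of seconds later.
                this.hasLanded = true;
                Destroy(this.parachute);
                Destroy(this.snowboard, 2.0f);
            }
        }
    }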

I’m not sure that anyone would want this coding masterpiece but, if they did, then it’s on github over here;

https://github.com/mtaulty/parachutes

Feel very free to re-use, share or do whatever you like with this if it’s of use to you.