Third Experiment with Image Classification on Windows ML from UWP (on HoloLens in Unity)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this earlier post;

Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

I’d finished up that post by flagging that what I was doing with a 2D UI felt weird: I was looking through my HoloLens at a 2D app which was then displaying the contents of the HoloLens webcam back to me and, while things seemed to work fine, it felt like a hall of mirrors.

Moving the UI to an immersive 3D app built in something like Unity would make this a little easier to try out and that’s what this post is about.

Moving the code as I had it across to Unity hasn’t proved difficult at all.

I spun up a new Unity project and set it up for HoloLens development by applying the typical settings;

  • Switching the target platform to UWP (I also switched to the .NET backend and its 4.6 support)
  • Switching on support for the Windows Mixed Reality SDK
  • Moving the camera to the origin, changing its clear flags to solid black and changing the near clipping plane to 0.85
  • Switching on the capabilities that let my app access the camera and the microphone

and, from there, I brought the .onnx file containing my model across and placed it as a resource in Unity (giving it a .bytes extension so that Unity imports it as a TextAsset);

[image: the model placed into a Resources folder in the Unity project]

and then I brought the code across from the XAML based UWP project in as much as I could, conditionally compiling most of it out with the ENABLE_WINMD_SUPPORT constant as most of the code that I’m trying to run here is entirely UWP dependent and isn’t going to run in the Unity Editor and so on.

In terms of code, I ended up with only two code files;

[image: the two code files in the Unity project]

the dachshund file started life back in the first post in this series, generated for me by the mlgen tool, although I did have to alter it to get it to work after it had been generated.

The code uses the underlying LearningModelPreview class, which claims to be able to load a model both from a storage file and from a stream. Because in this instance inside of Unity I’m going to load the model using Unity’s Resources.Load() mechanism, I’m going to end up with a byte[] for the model and so I wanted to feed it into the LoadModelFromStreamAsync() method. I found that this didn’t seem to be implemented yet and so I had to do a minor hack and write the byte array out to a temporary file before feeding it to the LoadModelFromStorageFileAsync() method.

That left this piece of code looking as below;

#if ENABLE_WINMD_SUPPORT
namespace dachshunds.model
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.InteropServices.WindowsRuntime;
    using System.Threading.Tasks;

    using Windows.AI.MachineLearning.Preview;
    using Windows.Media;
    using Windows.Storage;
    using Windows.Storage.Streams;

    // MIKET: I renamed the auto generated long number class names to 'Dachshund'
    // to make it easier for me as a human to deal with them 🙂
    public sealed class DachshundModelInput
    {
        public VideoFrame data { get; set; }
    }

    public sealed class DachshundModelOutput
    {
        public IList<string> classLabel { get; set; }
        public IDictionary<string, float> loss { get; set; }

        public DachshundModelOutput()
        {
            this.classLabel = new List<string>();
            this.loss = new Dictionary<string, float>();

            // MIKET: I added these 3 lines of code here after spending *quite some time* 🙂
            // trying to debug why I was getting a binding exception at the point in the
            // code below where LearningModelBindingPreview.Bind is called with the
            // parameters ("loss", output.loss) and where output.loss would be an
            // empty Dictionary<string,float>.
            //
            // The exception would be
            // "The binding is incomplete or does not match the input/output description. (Exception from HRESULT: 0x88900002)"
            // and I couldn't find symbols for Windows.AI.MachineLearning.Preview to debug it.
            // So...this could be wrong but it works for me and the 3 values here correspond
            // to the 3 classifications that my classifier produces.
            //
            this.loss.Add("daschund", float.NaN);
            this.loss.Add("dog", float.NaN);
            this.loss.Add("pony", float.NaN);
        }
    }

    public sealed class DachshundModel
    {
        private LearningModelPreview learningModel;

        public static async Task<DachshundModel> CreateDachshundModel(byte[] bits)
        {
            // Note - there is a method on LearningModelPreview which seems to
            // load from a stream but I got a 'not implemented' exception and
            // hence using a temporary file.
            IStorageFile file = null;
            var fileName = "model.bin";

            try
            {
                file = await ApplicationData.Current.TemporaryFolder.GetFileAsync(
                    fileName);
            }
            catch (FileNotFoundException)
            {
            }
            if (file == null)
            {
                file = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
                    fileName);

                await FileIO.WriteBytesAsync(file, bits);
            }

            var model = await DachshundModel.CreateDachshundModel((StorageFile)file);

            return (model);
        }
        public static async Task<DachshundModel> CreateDachshundModel(StorageFile file)
        {
            LearningModelPreview learningModel = await LearningModelPreview.LoadModelFromStorageFileAsync(file);
            DachshundModel model = new DachshundModel();
            model.learningModel = learningModel;
            return model;
        }
        public async Task<DachshundModelOutput> EvaluateAsync(DachshundModelInput input)
        {
            DachshundModelOutput output = new DachshundModelOutput();
            LearningModelBindingPreview binding = new LearningModelBindingPreview(learningModel);
            binding.Bind("data", input.data);
            binding.Bind("classLabel", output.classLabel);

            // MIKET: this generated line caused me trouble. See MIKET comment above.
            binding.Bind("loss", output.loss);

            LearningModelEvaluationResultPreview evalResult = await learningModel.EvaluateAsync(binding, string.Empty);
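            // NB: the evaluation populates the collections that were bound above,
            // which is why the evalResult here goes unused.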
            return output;
        }
    }
}
#endif // ENABLE_WINMD_SUPPORT

and then I made a few minor modifications to the code which had previously formed my ‘code behind’ in my XAML based app to move it into this MainScript.cs file where it performs pretty much the same function as it did in the XAML based app – getting frames from the webcam, passing them to the model for evaluation and then displaying the results. That code now looks like;

using System;
using System.Linq;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

#if ENABLE_WINMD_SUPPORT
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;
using Windows.Media.Devices;
using Windows.Storage;
using dachshunds.model;
using System.Diagnostics;
using System.Threading;
#endif // ENABLE_WINMD_SUPPORT

public class MainScript : MonoBehaviour
{
    public TextMesh textDisplay;

#if ENABLE_WINMD_SUPPORT
    public MainScript()
    {
        this.inputData = new DachshundModelInput();
        this.timer = new Stopwatch();
    }
    async void Start()
    {
        await this.LoadModelAsync();

        var device = await this.GetFirstBackPanelVideoCaptureAsync();

        if (device != null)
        {
            await this.CreateMediaCaptureAsync(device);

            await this.CreateMediaFrameReaderAsync();
            await this.frameReader.StartAsync();
        }
    }    
    async Task LoadModelAsync()
    {
        // Get the bits from Unity's resource system :-S
        var modelBits = Resources.Load(DACHSHUND_MODEL_NAME) as TextAsset;

        this.learningModel = await DachshundModel.CreateDachshundModel(
            modelBits.bytes);
    }
    async Task<DeviceInformation> GetFirstBackPanelVideoCaptureAsync()
    {
        var devices = await DeviceInformation.FindAllAsync(
            DeviceClass.VideoCapture);

        // NB: EnclosureLocation can be null for devices which don't report one.
        var device = devices.FirstOrDefault(
            d => d.EnclosureLocation?.Panel == Windows.Devices.Enumeration.Panel.Back);

        return (device);
    }
    async Task CreateMediaFrameReaderAsync()
    {
        var frameSource = this.mediaCapture.FrameSources.Where(
            source => source.Value.Info.SourceKind == MediaFrameSourceKind.Color).First();

        this.frameReader =
            await this.mediaCapture.CreateFrameReaderAsync(frameSource.Value);

        this.frameReader.FrameArrived += OnFrameArrived;
    }

    async Task CreateMediaCaptureAsync(DeviceInformation device)
    {
        this.mediaCapture = new MediaCapture();

        await this.mediaCapture.InitializeAsync(
            new MediaCaptureInitializationSettings()
            {
                VideoDeviceId = device.Id
            }
        );
        // Try and set auto focus but on the Surface Pro 3 I'm running on, this
        // won't work.
        if (this.mediaCapture.VideoDeviceController.FocusControl.Supported)
        {
            await this.mediaCapture.VideoDeviceController.FocusControl.SetPresetAsync(FocusPreset.AutoNormal);
        }
        else
        {
            // Nor this.
            this.mediaCapture.VideoDeviceController.Focus.TrySetAuto(true);
        }
    }

    async void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
    {
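        // Simple gate - if the previous frame is still being processed then
        // drop this newly arrived frame rather than queueing it up.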
        if (Interlocked.CompareExchange(ref this.processingFlag, 1, 0) == 0)
        {
            try
            {
                using (var frame = sender.TryAcquireLatestFrame())
                using (var videoFrame = frame.VideoMediaFrame?.GetVideoFrame())
                {
                    if (videoFrame != null)
                    {
                        // From the description (both visible in Python and through the
                        // properties of the model that I can interrogate with code at
                        // runtime here) my image seems to be 227 by 227 which is an
                        // odd size but I'm assuming the underlying pieces do that work
                        // for me.
                        // If you've read the blog post, I took out the conditional
                        // code which attempted to resize the frame as it seemed
                        // unnecessary and confused the issue!
                        this.inputData.data = videoFrame;

                        this.timer.Start();
                        var evalOutput = await this.learningModel.EvaluateAsync(this.inputData);
                        this.timer.Stop();
                        this.frameCount++;

                        await this.ProcessOutputAsync(evalOutput);
                    }
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.processingFlag, 0);
            }
        }
    }
    string BuildOutputString(DachshundModelOutput evalOutput, string key)
    {
        var result = "no";

        if (evalOutput.loss[key] > 0.25f)
        {
            result = $"{evalOutput.loss[key]:N2}";
        }
        return (result);
    }
    async Task ProcessOutputAsync(DachshundModelOutput evalOutput)
    {
        string category = evalOutput.classLabel.FirstOrDefault() ?? "none";
        string dog = $"{BuildOutputString(evalOutput, "dog")}";
        string pony = $"{BuildOutputString(evalOutput, "pony")}";

        // NB: Spelling mistake is built into model!
        string dachshund = $"{BuildOutputString(evalOutput, "daschund")}";
        string averageFrameDuration =
            this.frameCount == 0 ? "n/a" : $"{(this.timer.ElapsedMilliseconds / this.frameCount):N0}";

        UnityEngine.WSA.Application.InvokeOnAppThread(
            () =>
            {
                this.textDisplay.text = 
                    $"dachshund {dachshund} dog {dog} pony {pony}\navg time {averageFrameDuration}";
            },
            false
        );
    }
    DachshundModelInput inputData;
    int processingFlag;
    MediaFrameReader frameReader;
    MediaCapture mediaCapture;
    DachshundModel learningModel;
    Stopwatch timer;
    int frameCount;
    static readonly string DACHSHUND_MODEL_NAME = "dachshunds"; // .bytes file in Unity

#endif // ENABLE_WINMD_SUPPORT
}

while experimenting with this code, it certainly occurred to me that I could move to more of a “pull” model inside of Unity by trying to grab frames in an Update() method rather than doing the work separately and pushing the results back to the app thread. It also occurred to me that the code is very single threaded and simply drops frames if it is ‘busy’ whereas it could be smarter and process them on some other thread, perhaps one from the thread pool. There are lots of possibilities 🙂
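To give a flavour of that ‘pull’ idea, here’s a minimal sketch re-using the fields from MainScript above plus a couple of new members of my own – it assumes the FrameArrived subscription is removed, and comes with the caveat that I haven’t actually run this variant;

#if ENABLE_WINMD_SUPPORT
    volatile string latestResult;
    int pullProcessingFlag;

    void Update()
    {
        // Display whatever the most recent evaluation produced - Update()
        // runs on the app thread so it's safe to touch the TextMesh here.
        if (this.latestResult != null)
        {
            this.textDisplay.text = this.latestResult;
        }
        // Kick off a new evaluation unless one is already in flight.
        if ((this.frameReader != null) &&
            (Interlocked.CompareExchange(ref this.pullProcessingFlag, 1, 0) == 0))
        {
            this.EvaluateLatestFrameAsync();
        }
    }
    // Fire-and-forget from Update() - the gate above stops re-entrancy.
    async void EvaluateLatestFrameAsync()
    {
        try
        {
            using (var frame = this.frameReader.TryAcquireLatestFrame())
            using (var videoFrame = frame?.VideoMediaFrame?.GetVideoFrame())
            {
                if (videoFrame != null)
                {
                    this.inputData.data = videoFrame;
                    var output = await this.learningModel.EvaluateAsync(this.inputData);
                    this.latestResult = output.classLabel.FirstOrDefault() ?? "none";
                }
            }
        }
        finally
        {
            Interlocked.Exchange(ref this.pullProcessingFlag, 0);
        }
    }
#endif // ENABLE_WINMD_SUPPORT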

In terms of displaying the results inside of Unity – I no longer need to display a preview from the webcam because my eyes are already seeing the same thing that the camera sees, so I’m just left with the challenge of displaying some text. I added a 3D Text object into the scene and made it accessible via a public field that can be set up in the editor;

[image: the 3D Text object added to the Unity scene]

and the ScriptHolder there is just a place to put my MainScript and pass it this TextMesh to display text in;

[image: the ScriptHolder object with MainScript and its TextMesh field set in the editor]

and that’s pretty much it.

I still see a fairly low processing rate when running on the device and I haven’t yet looked into that, but here are some screenshots of me looking at photos from a Bing search on my second monitor while running the app on HoloLens.

In this case the device (on my head) is around 40cm from the 24 inch monitor, I’ve got the Bing search results displaying quite large, and the model seems to do a decent job of spotting dachshunds…

[images: the app spotting dachshunds in Bing image search results]

and dogs in general (although it has only really been trained on alsatians so it knows that they are dogs but not dachshunds);

[image: the app classifying an alsatian as a dog]

and, for whatever reason that I can’t explain, I also trained it on ponies so it’s quite good at spotting those;

[images: the app spotting ponies in Bing image search results]

This works pretty well for me 🙂 I need to revisit and take a look at whether I can improve the processing speed, and also at the problem that I flagged in my previous post around not being able to run a release build but, otherwise, it feels like progress.

The code is in the same repo as it was before – I just added a Unity project to the repo.

https://github.com/mtaulty/WindowsMLExperiment

Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this earlier post;

First Experiment with Image Classification on Windows ML from UWP

around Windows ML;

AI Platform for Windows Developers

at the end of that previous post I’d said that I would be really keen to try the code that I’d written on HoloLens but, at the time of that post, the required Windows 10 “Redstone 4” preview wasn’t available for HoloLens.

Things change quickly these days 😉 and just a few days later there’s a preview of “Redstone 4” available for HoloLens documented here;

HoloLens RS4 Preview

and I followed the instructions there and very quickly had that preview operating system running on my HoloLens.

The first thing that I then wanted to do was to take the code that I’d written for that previous post around WindowsML and try it out on HoloLens even though it was a 2D XAML app rather than a 3D immersive app.

My hope was that it would “just work”. Did it?

No, of course not, it’s software 🙂

I ran the code inside of Visual Studio and immediately got;

[image: the crash in Visual Studio]

Oh dear. But…I suspected that this might be because I had used Windows 10 SDK Preview version 17110 to build this app in the first place and perhaps that wasn’t going to work so well on a device that is now running a 17123.* build number.

So, I went back to the Windows Insider site and downloaded the Preview SDK labelled 10.0.17125.1000 to see if that changed things for me. I retargeted my application in Visual Studio, setting its target build to 17125 and its minimum build to 16299, before doing a complete rebuild and redeploy.

I had to set the minimum build to something below 17123 as that is what the device is now running.

Once again, I got the exact same error and so I set about trying to debug. I immediately noticed that my debugger wasn’t stepping nicely, which prompted me to notice for the first time that Visual Studio had automatically selected the release build configuration. That jarred a memory: I had seen this exact same exception trying to run in release mode on the PC when I’d first written the code and had never figured it out, putting it down to perhaps something in the preview SDK.

So, perhaps HoloLens wasn’t behaving any differently from the PC here? I switched to the debug configuration and, sure enough, the code doesn’t hit that marshalling exception and runs fine, although I’m not sure yet about that ‘average time’ value that I’m calculating – that needs some looking into. Here’s a screenshot of the app staring at a picture of a dachshund;

[image: the app staring at a picture of a dachshund]

The screenshot is a bit weird because I cropped it out of a video recording and also because I’m holding up a picture of a dachshund in front of the app, which is then displaying the view from its own webcam containing that same picture, so it all gets a little bit recursive.

Here’s the app looking at a picture of an alsatian;

[image: the app looking at a picture of an alsatian]

and it’s a little less sure about this pony;

[image: the app looking at a picture of a pony]

So, for a quick experiment this is great in that I’ve taken the exact same code and the exact same model from the PC and it works ‘as is’ on these preview pieces on HoloLens 🙂 Clearly, I could do with taking a look at the time it seems to be taking to process frames but I suspect that’s down to me running debug bits and/or the way in which I’m grabbing frames from the camera.

For me, though, it’s a bit of a challenge to have this 2D XAML app get in the way of what the camera is actually looking at, so the next step would be to see if I can put this into an immersive app rather than a 2D app – that’s perhaps where I’d follow up with a later blog post.

For this post, the code is just where it was for the previous post – nothing has changed 🙂

By the way – I still don’t know what happens if I point the model at an actual dachshund/dog/pony – I need to get some of those for testing 😉 Additionally, I suspect that once the code is comfortable with being able to find a particular object, the next question is likely to involve locating it in the 3D scene. That might involve some kind of correlation between the colour image and a depth image and I’m not sure whether that’s achievable – I’d need to think about that.

Rough Notes on UWP and webRTC (Part 3)

This is a follow-on from my previous post around taking small steps with webRTC and UWP.

At the end of that post, I had some scrappy code which was fairly fixed in function in that it was a small UWP app which would use the UWP webRTC library to connect to a signalling service and then could begin a conversation with a peer that was also connected to the same signalling service.

The signalling service in question had to be the one provided with the UWP webRTC bits, and the easiest way to test that my app was doing something was to run it against the PeerCC sample, which also ships with the UWP webRTC bits and does way more than my app does, demonstrating lots of the functionality that’s present in UWP webRTC.

The links to all the webRTC pieces that I’m referring to are in the previous 2 posts on this topic.

Tidying Up

The code that I had in the signalling branch of this github repo at the end of the previous post was quite messy and not really in a position to be re-used, so I spent a little time pulling it apart – refactoring some of the functionality behind interfaces and reducing the implicit dependencies – to try and move the code towards being a little more re-usable (even if the functionality it currently implements isn’t of much actual use to a real user – I’m just experimenting).

What I was trying to move towards was some code that I knew sort of worked in this XAML based UWP app that I could then lift out of the app and re-use in a non-XAML based UWP app (i.e. a Unity app) so that I would have some control over the knowns and unknowns in trying out that process.

What I needed to do then was make sure that in refactoring things, I ended up with code that was clearly abstracted from its dependencies on anything in the XAML layer.

Firstly, I refactored the solution into two projects to make for a class library and an app project which referenced it;

[image: the solution split into an app project and a class library]

and then I took some of the pieces of functionality that I had in there and abstracted it out into a set of interfaces;

[image: the set of interfaces in the class library]

with a view to making the dependencies between these interfaces explicit and the implementation pluggable.

This included putting the code which provides signalling by invoking the signalling service supplied with the original sample behind an interface. Note that I’m not at all trying to come up with a generic interface that could generally represent the notion of signalling in webRTC but, instead, I’m just trying to put an interface on to the existing signalling code that I took (almost) entirely from the PeerCC sample project in the UWP webRTC bits.

[image: the signalling interface]

The other interfaces/services that I added here are hopefully named ‘reasonably well’ in terms of the functionality that they represent, with perhaps the one that’s not quite so obvious being the IConversationManager.

This interface is just my attempt to codify the minimum functionality that I need to bring the other interface implementations together in order to get any kind of conversation over webRTC up and running from my little sample app as it stands. That IConversationManager interface right now just looks as below;

[image: the IConversationManager interface]
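Reconstructing that from the way it gets used (in the list below and in the MainPage code later on), it amounts to something like the sketch that follows – the exact member signatures here are my guesses rather than a copy from the repo;

using System.Threading.Tasks;

// Reconstructed from usage rather than copied from the repo.
public interface IConversationManager
{
    // Should this peer aggressively start a conversation with the first
    // remote peer that it sees, or wait for a remote peer to begin one?
    bool IsInitiator { get; set; }

    // Provides the name that the local peer wants to be represented by.
    Task InitialiseAsync(string localPeerName);

    // Connects to the signalling service at the given address/port and
    // returns whether that connection succeeded.
    Task<bool> ConnectToSignallingAsync(string ipAddress, int port);
}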

and so the idea here is that a consumer of an IConversationManager can simply;

  • Tell the manager whether it is meant to initiate conversations or simply wait for a remote peer to begin a conversation with it
    • In terms of initiating conversations – the code is ‘aggressive’ in that it simply finds the first peer that it sees provided by the signalling service and attempts to begin a conversation with it.
  • Call InitialiseAsync providing the name that the local peer wants to be represented by.
  • Call ConnectToSignallingAsync with the IP Address and port where the signalling service is to be found.

From there, the implementation jumps in and tries to bring together all the right pieces to get a conversation flowing.

In making these abstractions, I found two places where I had to apply a little bit of thought and that was where;

  • The UWP webRTC pieces need initialising with a Dispatcher object and so I abstracted that out into an interface so that an implementation can be injected into the underlying layer.
  • There is a need at some point to do some work with UI objects to represent media streams. In the code to date, this has meant working with XAML MediaElements but in other scenarios (e.g. Unity UI) that wouldn’t work.

In order to try and abstract the library code from these media pieces, I made an IMediaManager interface with the intention of writing a different implementation for each UI layer. To bring this library up inside of a Unity app I’d at least need to provide a Unity version of the highlighted implementation pieces below, which are about IMediaManager in a XAML UI world;

[image: the XAML-specific IMediaManager implementation pieces highlighted in the project]
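The dispatcher side of that abstraction, by contrast, is tiny – something like the below, which is a reconstruction rather than the repo’s exact code; the point is just that the library can ask for a CoreDispatcher without taking a XAML dependency of its own;

// Reconstruction - the real interface lives in the repo.
public interface IDispatcherProvider
{
    Windows.UI.Core.CoreDispatcher Dispatcher { get; }
}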

My main project took a dependency on autofac to provide a container from which to serve up the implementations of my interfaces, and I did a cheap trick of providing my own “container” embedded into the library, named CheapContainer, in case the library ends up being used somewhere that autofac or some other IoC container isn’t immediately available.
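The real CheapContainer is in the repo but the essence of it is just a static map from interface type to a lazily created singleton implementation – something like the sketch below, where the Resolve method is my naming and where I’m ignoring any constructor injection that real implementations might need;

using System;
using System.Collections.Generic;

// Sketch of the idea rather than the repo's exact code.
public static class CheapContainer
{
    static readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>();
    static readonly Dictionary<Type, object> instances = new Dictionary<Type, object>();

    public static void Register<TInterface, TImplementation>()
        where TImplementation : class, TInterface, new()
    {
        map[typeof(TInterface)] = typeof(TImplementation);
    }
    public static T Resolve<T>() where T : class
    {
        var implementationType = map[typeof(T)];
        object instance;

        // Keyed by implementation type so that one class registered under
        // two interfaces still resolves to a single shared instance.
        if (!instances.TryGetValue(implementationType, out instance))
        {
            instance = Activator.CreateInstance(implementationType);
            instances[implementationType] = instance;
        }
        return ((T)instance);
    }
}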

Configuration of the container then moves into my App.xaml.cs file and is fairly simple; I wrote it twice, once for autofac and once using my own CheapContainer;

#if !USE_CHEAP_CONTAINER
        Autofac.IContainer Container
        {
            get
            {
                if (this.iocContainer == null)
                {
                    this.BuildContainer();
                }
                return (this.iocContainer);
            }
        }
#endif
        void BuildContainer()
        {
#if USE_CHEAP_CONTAINER
            CheapContainer.Register<ISignallingService, Signaller>();
            CheapContainer.Register<IDispatcherProvider, XamlMediaElementProvider>();
            CheapContainer.Register<IXamlMediaElementProvider, XamlMediaElementProvider>();
            CheapContainer.Register<IMediaManager, XamlMediaElementMediaManager>();
            CheapContainer.Register<IPeerManager, PeerManager>();
            CheapContainer.Register<IConversationManager, ConversationManager>();
#else
            var builder = new ContainerBuilder();
            builder.RegisterType<Signaller>().As<ISignallingService>().SingleInstance();

            builder.RegisterType<XamlMediaElementProvider>().As<IXamlMediaElementProvider>().As<IDispatcherProvider>().SingleInstance();

            builder.RegisterType<XamlMediaElementMediaManager>().As<IMediaManager>().SingleInstance();
            builder.RegisterType<PeerManager>().As<IPeerManager>().SingleInstance();
            builder.RegisterType<ConversationManager>().As<IConversationManager>().SingleInstance();
            builder.RegisterType<MainPage>().AsSelf().SingleInstance();
            this.iocContainer = builder.Build();
#endif
        }
#if !USE_CHEAP_CONTAINER
        Autofac.IContainer iocContainer;
#endif

and the code which now lives inside of my MainPage.xaml.cs file involved in actually getting the webRTC conversation up and running is reduced down to almost nothing;

        async void OnConnectToSignallingAsync()
        {
            await this.conversationManager.InitialiseAsync(this.addressDetails.HostName);

            this.conversationManager.IsInitiator = this.isInitiator;

            this.HasConnected = await this.conversationManager.ConnectToSignallingAsync(
                this.addressDetails.IPAddress, this.addressDetails.Port);
        }

and so that seems a lot simpler, neater and more re-usable than what I’d had at the end of the previous blog post.

In subsequent posts, I’m going to see if I can now re-use this library inside of other environments (e.g. Unity) so as to bring this same (very limited) webRTC functionality that I’ve been playing with to that environment.