Third Experiment with Image Classification on Windows ML from UWP (on HoloLens in Unity)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this earlier post;

Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

I’d finished up that post by flagging that what I was doing with a 2D UI felt weird: I was looking through my HoloLens at a 2D app which was displaying the contents of the HoloLens webcam back to me. While things seemed to work fine, it felt like a hall of mirrors.

Moving the UI to an immersive 3D app built in something like Unity would make this a little easier to try out and that’s what this post is about.

Moving the code as I had it across to Unity hasn’t proved difficult at all.

I spun up a new Unity project and set it up for HoloLens development by setting the typical settings like;

  • Switching the target platform to UWP (I also switched to the .NET backend and its 4.6 support)
  • Switching on support for the Windows Mixed Reality SDK
  • Moving the camera to the origin, changing its clear flags to solid black and changing the near clipping plane to 0.85
  • Switching on the capabilities that let my app access the camera and the microphone

and, from there, I brought across the .onnx file containing my model and placed it as a resource in Unity;

[image: the .onnx model file as a resource in the Unity project]

and then I brought across as much of the code as I could from the XAML-based UWP project, conditionally compiling most of it out with the ENABLE_WINMD_SUPPORT constant since most of the code that I’m trying to run here is entirely UWP-dependent and isn’t going to run in the Unity editor.

In terms of code, I ended up with only 2 code files;

[image: the two code files in the Unity project]

the dachshund file started life in the first post in this series, generated for me by the mlgen tool, although I did have to alter it to get it to work after it had been generated.

The code uses the underlying LearningModelPreview class which claims to be able to load a model both from a storage file and from a stream. Because, inside of Unity, I’m going to load the model using Unity’s Resources.Load() mechanism, I end up with a byte[] for the model and so I wanted to feed it through into the LoadModelFromStreamAsync() method. However, that didn’t seem to be implemented yet and so I had to do a minor hack and write the byte array out to a file before feeding it to the LoadModelFromStorageFileAsync() method.

That left this piece of code looking as below;

#if ENABLE_WINMD_SUPPORT
namespace dachshunds.model
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.InteropServices.WindowsRuntime;
    using System.Threading.Tasks;

    using Windows.AI.MachineLearning.Preview;
    using Windows.Media;
    using Windows.Storage;
    using Windows.Storage.Streams;

    // MIKET: I renamed the auto generated long number class names to be 'Dachshund'
    // to make it easier for me as a human to deal with them 🙂
    public sealed class DachshundModelInput
    {
        public VideoFrame data { get; set; }
    }

    public sealed class DachshundModelOutput
    {
        public IList<string> classLabel { get; set; }
        public IDictionary<string, float> loss { get; set; }

        public DachshundModelOutput()
        {
            this.classLabel = new List<string>();
            this.loss = new Dictionary<string, float>();

            // MIKET: I added these 3 lines of code here after spending *quite some time* 🙂
            // Trying to debug why I was getting a binding exception at the point in the
            // code below where the call to LearningModelBindingPreview.Bind is called
            // with the parameters ("loss", output.loss) where output.loss would be
            // an empty Dictionary<string,float>.
            //
            // The exception would be 
            // "The binding is incomplete or does not match the input/output description. (Exception from HRESULT: 0x88900002)"
            // And I couldn't find symbols for Windows.AI.MachineLearning.Preview to debug it.
            // So...this could be wrong but it works for me and the 3 values here correspond
            // to the 3 classifications that my classifier produces.
            //
            this.loss.Add("daschund", float.NaN);
            this.loss.Add("dog", float.NaN);
            this.loss.Add("pony", float.NaN);
        }
    }

    public sealed class DachshundModel
    {
        private LearningModelPreview learningModel;

        public static async Task<DachshundModel> CreateDachshundModel(byte[] bits)
        {
            // Note - there is a method on LearningModelPreview which seems to
            // load from a stream but I got a 'not implemented' exception and
            // hence using a temporary file.
            IStorageFile file = null;
            var fileName = "model.bin";

            try
            {
                file = await ApplicationData.Current.TemporaryFolder.GetFileAsync(
                    fileName);
            }
            catch (FileNotFoundException)
            {
            }
            if (file == null)
            {
                file = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
                    fileName);

                await FileIO.WriteBytesAsync(file, bits);
            }

            var model = await DachshundModel.CreateDachshundModel((StorageFile)file);

            return (model);
        }
        public static async Task<DachshundModel> CreateDachshundModel(StorageFile file)
        {
            LearningModelPreview learningModel = await LearningModelPreview.LoadModelFromStorageFileAsync(file);
            DachshundModel model = new DachshundModel();
            model.learningModel = learningModel;
            return model;
        }
        public async Task<DachshundModelOutput> EvaluateAsync(DachshundModelInput input) {
            DachshundModelOutput output = new DachshundModelOutput();
            LearningModelBindingPreview binding = new LearningModelBindingPreview(learningModel);
            binding.Bind("data", input.data);
            binding.Bind("classLabel", output.classLabel);

            // MIKET: this generated line caused me trouble. See MIKET comment above.
            binding.Bind("loss", output.loss);

            LearningModelEvaluationResultPreview evalResult = await learningModel.EvaluateAsync(binding, string.Empty);
            return output;
        }
    }
}
#endif // ENABLE_WINMD_SUPPORT

and then I made a few minor modifications to the code which had previously formed my ‘code behind’ in the XAML-based app to move it into this MainScript.cs file, where it performs pretty much the same function as before – getting frames from the webcam, passing them to the model for evaluation and then displaying the results. That code now looks like;

using System;
using System.Linq;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

#if ENABLE_WINMD_SUPPORT
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;
using Windows.Media.Devices;
using Windows.Storage;
using dachshunds.model;
using System.Diagnostics;
using System.Threading;
#endif // ENABLE_WINMD_SUPPORT

public class MainScript : MonoBehaviour
{
    public TextMesh textDisplay;

#if ENABLE_WINMD_SUPPORT
    public MainScript()
    {
        this.inputData = new DachshundModelInput();
        this.timer = new Stopwatch();
    }
    async void Start()
    {
        await this.LoadModelAsync();

        var device = await this.GetFirstBackPanelVideoCaptureAsync();

        if (device != null)
        {
            await this.CreateMediaCaptureAsync(device);

            await this.CreateMediaFrameReaderAsync();
            await this.frameReader.StartAsync();
        }
    }    
    async Task LoadModelAsync()
    {
        // Get the bits from Unity's resource system :-S
        var modelBits = Resources.Load(DACHSHUND_MODEL_NAME) as TextAsset;

        this.learningModel = await DachshundModel.CreateDachshundModel(
            modelBits.bytes);
    }
    async Task<DeviceInformation> GetFirstBackPanelVideoCaptureAsync()
    {
        var devices = await DeviceInformation.FindAllAsync(
            DeviceClass.VideoCapture);

        var device = devices.FirstOrDefault(
            d => d.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);

        return (device);
    }
    async Task CreateMediaFrameReaderAsync()
    {
        var frameSource = this.mediaCapture.FrameSources.Where(
            source => source.Value.Info.SourceKind == MediaFrameSourceKind.Color).First();

        this.frameReader =
            await this.mediaCapture.CreateFrameReaderAsync(frameSource.Value);

        this.frameReader.FrameArrived += OnFrameArrived;
    }

    async Task CreateMediaCaptureAsync(DeviceInformation device)
    {
        this.mediaCapture = new MediaCapture();

        await this.mediaCapture.InitializeAsync(
            new MediaCaptureInitializationSettings()
            {
                VideoDeviceId = device.Id
            }
        );
        // Try and set auto focus but on the Surface Pro 3 I'm running on, this
        // won't work.
        if (this.mediaCapture.VideoDeviceController.FocusControl.Supported)
        {
            await this.mediaCapture.VideoDeviceController.FocusControl.SetPresetAsync(FocusPreset.AutoNormal);
        }
        else
        {
            // Nor this.
            this.mediaCapture.VideoDeviceController.Focus.TrySetAuto(true);
        }
    }

    async void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
    {
        if (Interlocked.CompareExchange(ref this.processingFlag, 1, 0) == 0)
        {
            try
            {
                using (var frame = sender.TryAcquireLatestFrame())
                using (var videoFrame = frame.VideoMediaFrame?.GetVideoFrame())
                {
                    if (videoFrame != null)
                    {
                        // From the description (both visible in Python and through the
                        // properties of the model that I can interrogate with code at
                        // runtime here) my image seems to be 227 by 227 which is an 
                        // odd size but I'm assuming the underlying pieces do that work
                        // for me.
                        // If you've read the blog post, I took out the conditional
                        // code which attempted to resize the frame as it seemed
                        // unnecessary and confused the issue!
                        this.inputData.data = videoFrame;

                        this.timer.Start();
                        var evalOutput = await this.learningModel.EvaluateAsync(this.inputData);
                        this.timer.Stop();
                        this.frameCount++;

                        await this.ProcessOutputAsync(evalOutput);
                    }
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.processingFlag, 0);
            }
        }
    }
    string BuildOutputString(DachshundModelOutput evalOutput, string key)
    {
        var result = "no";

        if (evalOutput.loss[key] > 0.25f)
        {
            result = $"{evalOutput.loss[key]:N2}";
        }
        return (result);
    }
    async Task ProcessOutputAsync(DachshundModelOutput evalOutput)
    {
        string category = evalOutput.classLabel.FirstOrDefault() ?? "none";
        string dog = $"{BuildOutputString(evalOutput, "dog")}";
        string pony = $"{BuildOutputString(evalOutput, "pony")}";

        // NB: Spelling mistake is built into model!
        string dachshund = $"{BuildOutputString(evalOutput, "daschund")}";
        string averageFrameDuration =
            this.frameCount == 0 ? "n/a" : $"{(this.timer.ElapsedMilliseconds / this.frameCount):N0}";

        UnityEngine.WSA.Application.InvokeOnAppThread(
            () =>
            {
                this.textDisplay.text = 
                    $"dachshund {dachshund} dog {dog} pony {pony}\navg time {averageFrameDuration}";
            },
            false
        );
    }
    DachshundModelInput inputData;
    int processingFlag;
    MediaFrameReader frameReader;
    MediaCapture mediaCapture;
    DachshundModel learningModel;
    Stopwatch timer;
    int frameCount;
    static readonly string DACHSHUND_MODEL_NAME = "dachshunds"; // .bytes file in Unity

#endif // ENABLE_WINMD_SUPPORT
}

while experimenting with this code, it certainly occurred to me that I could move it to more of a “pull” model inside of Unity by trying to grab frames in an Update() method rather than doing the work separately and then pushing the results back to the app thread. It also occurred to me that the code is very single-threaded and simply drops frames while it is ‘busy’, whereas it could be smarter and process them on some other thread, perhaps one from the thread pool. There are lots of possibilities 🙂
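As a rough, hypothetical sketch of that first ‘pull’ idea – reusing the processingFlag, frameReader, inputData and learningModel members from MainScript above and assuming that the evaluation is happy to run from a thread pool task – it might look something like;

#if ENABLE_WINMD_SUPPORT
    void Update()
    {
        // Poll for the latest frame rather than reacting to FrameArrived,
        // skipping this update entirely if an evaluation is still in flight.
        if ((this.frameReader != null) &&
            (Interlocked.CompareExchange(ref this.processingFlag, 1, 0) == 0))
        {
            Task.Run(async () =>
            {
                try
                {
                    using (var frame = this.frameReader.TryAcquireLatestFrame())
                    using (var videoFrame = frame?.VideoMediaFrame?.GetVideoFrame())
                    {
                        if (videoFrame != null)
                        {
                            this.inputData.data = videoFrame;

                            var evalOutput = await this.learningModel.EvaluateAsync(this.inputData);

                            await this.ProcessOutputAsync(evalOutput);
                        }
                    }
                }
                finally
                {
                    Interlocked.Exchange(ref this.processingFlag, 0);
                }
            });
        }
    }
#endif // ENABLE_WINMD_SUPPORT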

In terms of displaying the results inside of Unity, I no longer need to display a preview from the webcam because my eyes are already seeing the same thing that the camera sees. That just leaves the challenge of displaying some text, so I added a 3D Text object into the scene and made it accessible via a public field that can be set up in the editor;

[image: the 3D Text object in the Unity scene]

and the ScriptHolder there is just a place to put my MainScript and pass it this TextMesh to display text in;

[image: the ScriptHolder object with MainScript and its TextMesh field set in the inspector]

and that’s pretty much it.

I still see a fairly low processing rate when running on the device and I haven’t yet looked into that, but here are some screenshots of me looking at photos from Bing search on my second monitor while running the app on HoloLens.

In this case, the device (on my head) is around 40cm from the 24 inch monitor, I’ve got the Bing search results displaying quite large and the model seems to do a decent job of spotting dachshunds…

[images: screenshots of the app spotting dachshunds in the Bing results]

and dogs in general (although it has only really been trained on alsatians so it knows that they are dogs but not dachshunds);

[image: the app recognizing an alsatian as a dog]

and, for reasons that I can’t explain, I also trained it on ponies so it’s quite good at spotting those;

[images: screenshots of the app spotting ponies]

This works pretty well for me 🙂 I need to revisit and take a look at whether I can improve the processing speed and also at the problem that I flagged in my previous post around not being able to run a release build but, otherwise, it feels like progress.

The code is in the same repo as it was before – I just added a Unity project to the repo.

https://github.com/mtaulty/WindowsMLExperiment

Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up from this earlier post;

First Experiment with Image Classification on Windows ML from UWP

around Windows ML;

AI Platform for Windows Developers

at the end of that previous post I’d said that I would be really keen to try the code that I’d written on HoloLens but, at the time of that post, the required Windows 10 “Redstone 4” preview wasn’t available for HoloLens.

Things change quickly these days 😉 and just a few days later there’s a preview of “Redstone 4” available for HoloLens documented here;

HoloLens RS4 Preview

and I followed the instructions there and very quickly had that preview operating system running on my HoloLens.

The first thing that I then wanted to do was to take the code that I’d written for that previous post around WindowsML and try it out on HoloLens even though it was a 2D XAML app rather than a 3D immersive app.

My hope was that it would “just work”. Did it?

No, of course not, it’s software 🙂

I ran the code inside of Visual Studio and immediately got;

[image: the crash in Visual Studio]

Oh dear. But…I suspected that this might be because I had used Windows 10 SDK Preview version 17110 to build this app in the first place and perhaps that wasn’t going to work so well on a device that is now running a 17123.* build number.

So, I went back to the Windows Insider site and downloaded the Preview SDK labelled 10.0.17125.1000 to see if that changed things for me and I retargeted my application in Visual Studio to set its Target build to 17125 and its minimum build to 16299 before doing a complete rebuild and redeploy.

I had to set the minimum build to something below 17123 as that is what the device is now running.

Once again, I got the exact same error and so I set about trying to debug. I immediately noticed that my debugger wasn’t stepping nicely, which prompted me to notice for the first time that Visual Studio had automatically selected the release build configuration. That jarred a memory: I had seen this exact same exception trying to run in release mode on the PC when I’d first written the code and I hadn’t figured it out, putting it down to perhaps something in the preview SDK.

So, perhaps HoloLens wasn’t behaving any differently from the PC here? I switched to the debug configuration and, sure enough, the code doesn’t hit that marshalling exception and runs fine, although I’m not sure yet about that ‘average time’ value that I’m calculating – that needs some looking into. Here’s a screenshot of the app staring at a picture of a dachshund;

[image: the app looking at a picture of a dachshund]

The screenshot is a bit weird because I cropped it out of a video recording and also because I’m holding up a picture of a dachshund in front of the app, which is then displaying the view from its own webcam containing that same picture, so it all gets a little bit recursive.

Here’s the app looking at a picture of an alsatian;

[image: the app looking at a picture of an alsatian]

and it’s a little less sure about this pony;

[image: the app looking at a picture of a pony]

So, for a quick experiment this is great in that I’ve taken the exact same code and the exact same model from the PC and it works ‘as is’ on these preview pieces on HoloLens 🙂 Clearly, I could do with taking a look at the time it seems to be taking to process frames, but I suspect that’s down to me running debug bits and/or the way in which I’m grabbing frames from the camera.

For me, it’s a bit of a challenge though to have this 2D XAML app get in the way of what the camera is actually looking at, so the next step would be to see if I can put this into an immersive app rather than a 2D app – that’s perhaps where I’d follow up with a later blog post.

For this post, the code is just where it was for the previous post – nothing has changed 🙂

By the way – I still don’t know what happens if I point the model at an actual dachshund/dog/pony – I need to get some of those for testing 😉 Additionally, I suspect that once the code is comfortable with finding a particular object, the next question is likely to involve locating that object in the 3D scene, which might involve some kind of correlation between the colour image and a depth image. I’m not sure whether that’s achievable – I’d need to think about it.

Conversations with the Language Understanding (LUIS) Service from Unity in Mixed Reality Apps

I’ve written quite a bit about speech interactions in the past, both on this blog and elsewhere, like these articles that I wrote for the Windows blog a couple of years ago;

Using speech in your UWP apps – It’s good to talk

Using speech in your UWP apps – From talking to conversing

Using speech in your UWP apps – Look who’s talking

which came out of earlier investigations that I did for this blog like this post;

Speech to Text (and more) with Windows 10 UWP & ‘Project Oxford’

and we talked about Speech in our Channel9 show again a couple of years ago now;

[image: our Channel9 show episode on speech]

and so I won’t rehash the whole topic of speech recognition and understanding here but, in the last week, I’ve been working on a fairly simple scenario that I thought I would share the code from.

Backdrop – the Scenario

The scenario involved a Unity application built against the “Stable .NET 3.5 Equivalent” scripting runtime, targeting both HoloLens and immersive Windows Mixed Reality headsets, where there was a need to use natural language instructions inside of the app.

That is, there’s a need to;

  1. grab audio from the microphone.
  2. turn the audio into text.
  3. take the text and derive the user’s intent from the spoken text.
  4. drive some action inside of the application based on that intent.

It’s fairly generic, although the specific application is quite exciting, but in order to get this implemented there are some choices around technologies/APIs and whether functionality happens in the cloud or at the edge.

Choices

When it comes to (2), there are a couple of choices in that there are layered Unity/UWP APIs that can make this happen. The preference in this scenario would be to use the Unity APIs – the KeywordRecognizer and the DictationRecognizer – for handling short and long chunks of speech respectively.

Those APIs are packaged so as to wait for a reasonable, configurable period of time for some speech to occur before delivering a ‘speech occurred’ type event to the caller, passing the text that has been interpreted from the speech.

There’s no cost (beyond on-device resources) to using these APIs and so, in a scenario which only went as far as speech-to-text, it’d be quite reasonable to have these APIs running all the time, gathering up text and then having the app decide what to do with it.

However, when it comes to (3), the API of choice is LUIS which can take a piece of text like;

“I’d like to order a large pepperoni pizza please”

and can turn it into something like;

Intent: OrderPizza

Entity: PizzaType (Pepperoni)

Entity: Size (Large)

Confidence: 0.85

and so it’s a very useful thing as it takes the task of fathoming all the intricacies of natural language away from the developer.
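Under the covers, that comes back from the service as JSON and, going by the result classes that I’ll show later in this post, a response for an utterance like that would look something like the following (the intent and entity names here are illustrative, matching the hypothetical pizza model above);

{
  "query": "I'd like to order a large pepperoni pizza please",
  "topScoringIntent": {
    "intent": "OrderPizza",
    "score": 0.85
  },
  "entities": [
    {
      "entity": "large",
      "type": "Size",
      "startIndex": 20,
      "endIndex": 24,
      "resolution": { "values": [ "Large" ] }
    },
    {
      "entity": "pepperoni",
      "type": "PizzaType",
      "startIndex": 26,
      "endIndex": 34,
      "resolution": { "values": [ "Pepperoni" ] }
    }
  ]
}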

This poses a bit of a challenge though for a ‘real time’ app in that it’s not reasonable to take every speech utterance that the user delivers and run it through the LUIS cloud service. There are a number of reasons for that, including;

  1. The round-trip time from the client to the service is likely to be fairly long and so, without care, the app would have many calls in flight leading to problems with response time and complicating the code and user experience.
  2. The service has a financial cost.
  3. The user may not expect or want all of their utterances to be run through the cloud.

Consequently, it seems sensible to have some trigger in an app which signifies that the user is about to say something that is of meaning to the app and which should be sent off to the LUIS service for examination. In short, it’s the;

“Hey, Cortana”

type key phrase that lets the system know that the user has something to say.

This can be achieved in a Unity app targeting .NET 3.5 by having the KeywordRecognizer class work in conjunction with the DictationRecognizer class such that the former listens for the speech keyword (‘hey, Cortana!’) and the latter then springs into life and listens for the dictated phrase that the user wants to pass on to the app.

As an aside, it’s worth flagging that these classes are only supported by Unity on Windows 10 as detailed in the docs and that there is an isSupported flag to let the developer test this at runtime.

There’s another aside to using these two classes together in that the docs here note that different types of recognizer cannot be instantiated at once; they rely on an underlying PhraseRecognitionSystem and that system has to be Shutdown() in order to switch between one type of recognizer and another.

Later on in the post, I’ll return to the idea of making a different choice around turning speech to text but for the moment, I moved forward with the DictationRecognizer.

Getting Something Built

Some of that took a little while to figure out but once it’s sorted it’s “fairly” easy to write some code in Unity which uses a KeywordRecognizer to switch on/off a DictationRecognizer in an event-driven loop so as to gather dictated text.

I chose to have the notion of a DictationSink, which is just something that receives some text from somewhere. It could have been an interface but I thought that I’d bring in MonoBehaviour;

using UnityEngine;

public class DictationSink : MonoBehaviour
{
    public virtual void OnDictatedText(string text)
    {
    }
}
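As a trivial, hypothetical example of such a sink (this one isn’t part of the actual project), here’s one that just logs whatever text arrives – handy for testing the dictation pieces without involving LUIS;

using UnityEngine;

public class DebugDictationSink : DictationSink
{
    public override void OnDictatedText(string text)
    {
        // Just echo the dictated text to the Unity console.
        Debug.Log(string.Format("Dictated: {0}", text));
    }
}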

and so then I can write a DictationSource which surfaces a few properties from the underlying DictationRecognizer and passes on recognized text to a DictationSink;

using System;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DictationSource : MonoBehaviour
{
    public event EventHandler DictationStopped;

    public float initialSilenceSeconds;
    public float autoSilenceSeconds;
    public DictationSink dictationSink;
   
    // TODO: Think about whether this should be married with the notion of
    // a focused object rather than just some 'global' entity.

    void NewRecognizer()
    {
        this.recognizer = new DictationRecognizer();
        this.recognizer.InitialSilenceTimeoutSeconds = this.initialSilenceSeconds;
        this.recognizer.AutoSilenceTimeoutSeconds = this.autoSilenceSeconds;
        this.recognizer.DictationResult += OnDictationResult;
        this.recognizer.DictationError += OnDictationError;
        this.recognizer.DictationComplete += OnDictationComplete;
        this.recognizer.Start();
    }
    public void Listen()
    {
        this.NewRecognizer();
    }
    void OnDictationComplete(DictationCompletionCause cause)
    {
        this.FireStopped();
    }
    void OnDictationError(string error, int hresult)
    {
        this.FireStopped();
    }
    void OnDictationResult(string text, ConfidenceLevel confidence)
    {
        this.recognizer.Stop();

        // Note: the parentheses here matter - without the outer pair around the
        // confidence checks, && binds tighter than || and a null dictationSink
        // could be dereferenced on a medium-confidence result.
        if (((confidence == ConfidenceLevel.Medium) ||
            (confidence == ConfidenceLevel.High)) &&
            (this.dictationSink != null))
        {
            this.dictationSink.OnDictatedText(text);
        }
    }
    void FireStopped()
    {
        this.recognizer.DictationComplete -= this.OnDictationComplete;
        this.recognizer.DictationError -= this.OnDictationError;
        this.recognizer.DictationResult -= this.OnDictationResult;
        this.recognizer = null;

        // https://docs.microsoft.com/en-us/windows/mixed-reality/voice-input-in-unity
        // The challenge we have here is that we want to use both a KeywordRecognizer
        // and a DictationRecognizer at the same time or, at least, we want to stop
        // one, start the other and so on.
        // Unity does not like this. It seems that we have to shut down the 
        // PhraseRecognitionSystem that sits underneath them each time but the
        // challenge then is that this seems to stall the UI thread.
        // So far (following the doc link above) the best plan seems to be to
        // not call Stop() on the recognizer or Dispose() it but, instead, to
        // just tell the system to shutdown completely.
        PhraseRecognitionSystem.Shutdown();

        if (this.DictationStopped != null)
        {
            // And tell any friends that we are done.
            this.DictationStopped(this, EventArgs.Empty);
        }
    }
    DictationRecognizer recognizer;
}

notice in that code my attempt to use PhraseRecognitionSystem.Shutdown() to really stop this recognizer when I’ve processed a single speech utterance from it.

I need to switch this recognition on/off in response to a keyword being spoken by the user and so I wrote a simple KeywordDictationSwitch class which tries to do this using KeywordRecognizer with a few keywords;

using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class KeywordDictationSwitch : MonoBehaviour
{
    public string[] keywords = { "ok", "now", "hey", "listen" };
    public DictationSource dictationSource;

    void Start()
    {
        this.NewRecognizer();
        this.dictationSource.DictationStopped += this.OnDictationStopped;
    }
    void NewRecognizer()
    {
        this.recognizer = new KeywordRecognizer(this.keywords);
        this.recognizer.OnPhraseRecognized += this.OnPhraseRecognized;
        this.recognizer.Start();
    }
    void OnDictationStopped(object sender, System.EventArgs e)
    {
        this.NewRecognizer();
    }
    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (((args.confidence == ConfidenceLevel.Medium) ||
            (args.confidence == ConfidenceLevel.High)) &&
            this.keywords.Contains(args.text.ToLower()) &&
            (this.dictationSource != null))
        {
            this.recognizer.OnPhraseRecognized -= this.OnPhraseRecognized;
            this.recognizer = null;

            // https://docs.microsoft.com/en-us/windows/mixed-reality/voice-input-in-unity
            // The challenge we have here is that we want to use both a KeywordRecognizer
            // and a DictationRecognizer at the same time or, at least, we want to stop
            // one, start the other and so on.
            // Unity does not like this. It seems that we have to shut down the 
            // PhraseRecognitionSystem that sits underneath them each time but the
            // challenge then is that this seems to stall the UI thread.
            // So far (following the doc link above) the best plan seems to be to
            // not call Stop() on the recognizer or Dispose() it but, instead, to
            // just tell the system to shutdown completely.
            PhraseRecognitionSystem.Shutdown();

            // And then start up the other system.
            this.dictationSource.Listen();
        }
        else
        {
            Debug.Log(string.Format("Dictation: Listening for keywords {0}, heard {1} with confidence {2}, ignored",
                string.Join(", ", this.keywords),
                args.text,
                args.confidence));
        }
    }
    void StartDictation()
    {
        this.dictationSource.Listen();
    }
    KeywordRecognizer recognizer;
}

and once again I’m going through some steps to try and switch the KeywordRecognizer on/off here so that I can then switch the DictationRecognizer on/off as simply calling Stop() on a recognizer isn’t enough.

With this in place, I can now stack these components in Unity and have them use each other;

[image: the keyword, dictation and sink components stacked on a game object in the Unity inspector]

and so now I’ve got some code that listens for keywords, switches dictation on, listens for dictation and then passes that on to some DictationSink.

That’s a nice place to implement some LUIS functionality.

In doing so, I ended up writing perhaps more code than I’d have liked as I’m not sure whether there is a LUIS client library that works from a Unity environment targeting the Stable .NET 3.5 subset. I’ve found this to be a challenge when calling a few Azure services from Unity and LUIS doesn’t seem to be an exception: there are client libraries on NuGet for most scenarios but I don’t think that they work in Unity (I could be wrong) and there aren’t generally examples/samples for Unity.

So…I rolled some small pieces of my own, which isn’t so hard given that the call we need to make to LUIS is just a REST call.
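At the time of writing, that call is just an HTTP GET against the service’s endpoint with the key, the utterance and a verbosity flag passed as query parameters – roughly as below, where the region, app ID and key are all placeholders;

GET https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}
      ?subscription-key={serviceKey}
      &verbose=true
      &q=I'd like to order a large pepperoni pizza please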

Based on the documentation around the most basic “GET” functionality as detailed in the LUIS docs here, I wrote some classes to represent the LUIS results;

using System;
using System.Linq;

namespace LUIS.Results
{
    [Serializable]
    public class QueryResultsIntent
    {
        public string intent;
        public float score;
    }
    [Serializable]
    public class QueryResultsResolution
    {
        public string[] values;

        public string FirstOrDefaultValue()
        {
            string value = string.Empty;
            
            if (this.values != null)
            {
                value = this.values.FirstOrDefault();
            }
            return (value);
        }
    }
    [Serializable]
    public class QueryResultsEntity
    {
        public string entity;
        public string type;
        public int startIndex;
        public int endIndex;
        public QueryResultsResolution resolution;

        public string FirstOrDefaultResolvedValue()
        {
            var value = string.Empty;

            if (this.resolution != null)
            {
                value = this.resolution.FirstOrDefaultValue();
            }

            return (value);
        }
        public string FirstOrDefaultResolvedValueOrEntity()
        {
            var value = this.FirstOrDefaultResolvedValue();

            if (string.IsNullOrEmpty(value))
            {
                value = this.entity;
            }
            return (value);
        }
    }
    [Serializable]
    public class QueryResults
    {
        public string query;
        public QueryResultsEntity[] entities;
        public QueryResultsIntent topScoringIntent;
    }
}

and then wrote some code to represent a Query of the LUIS service. I wrote this on top of pieces that I borrowed from my colleague Dave’s repo over here on GitHub, which provides some Unity-compatible REST pieces with JSON serialization etc.

using LUIS.Results;
using RESTClient;
using System;
using System.Collections;

namespace LUIS
{
    public class Query
    {
        string serviceBaseUrl;
        string serviceKey;

        public Query(string serviceBaseUrl,
            string serviceKey)
        {
            this.serviceBaseUrl = serviceBaseUrl;
            this.serviceKey = serviceKey;
        }
        public IEnumerator Get(Action<IRestResponse<QueryResults>> callback)
        {
            var request = new RestRequest(this.serviceBaseUrl, Method.GET);

            request.AddQueryParam("subscription-key", this.serviceKey);
            request.AddQueryParam("q", this.Utterance);
            request.AddQueryParam("verbose", this.Verbose.ToString());
            request.UpdateRequestUrl();

            yield return request.Send();

            request.ParseJson<QueryResults>(callback);
        }        
        public bool Verbose
        {
            get;set;
        }
        public string Utterance
        {
            get;set;
        }
    }
}

and so now I can query LUIS and get results back, making it fairly easy to put this into a DictationSink which passes the dictated speech, in text form, off to LUIS;

using LUIS;
using LUIS.Results;
using System;
using System.Linq;
using UnityEngine.Events;

[Serializable]
public class QueryResultsEventType : UnityEvent<QueryResultsEntity[]>
{
}

[Serializable]
public class DictationSinkHandler
{
    public string intentName;
    public QueryResultsEventType intentHandler;
}

public class LUISDictationSink : DictationSink
{
    public float minimumConfidenceScore = 0.5f;
    public DictationSinkHandler[] intentHandlers;
    public string luisApiEndpoint;
    public string luisApiKey;

    public override void OnDictatedText(string text)
    {
        var query = new Query(this.luisApiEndpoint, this.luisApiKey);

        query.Utterance = text;

        StartCoroutine(query.Get(
            results =>
            {
                if (!results.IsError)
                {
                    var data = results.Data;

                    if ((data.topScoringIntent != null) &&
                        (data.topScoringIntent.score > this.minimumConfidenceScore))
                    {
                        var handler = this.intentHandlers.FirstOrDefault(
                            h => h.intentName == data.topScoringIntent.intent);

                        if (handler != null)
                        {
                            handler.intentHandler.Invoke(data.entities);
                        }
                    }
                }
            }
        ));
    }
}

and this is really just a lookup which takes the confidence score provided by LUIS, makes sure that it is high enough for our purposes and then maps from the name of the top-scoring LUIS intent to a function which handles that intent, set up here as a UnityEvent<T> so that it can be configured in the editor.

So, in use, if I have some LUIS model which has intents named Create, DeleteAll and DeleteType then I can configure an instance of this LUISDictationSink in Unity as below to map these to functions inside of a class (named LUISIntentHandlers in this case);

[image: LUISDictationSink in the Unity inspector with the intent handlers mapped to LUISIntentHandlers functions]

and then a handler for this type of interaction might look something like;

    public void OnIntentCreate(LUIS.Results.QueryResultsEntity[] entities)
    {
        // We need two pieces of information here - the shape type and
        // the distance.
        var entityShapeType = entities.FirstOrDefault(e => e.type == "shapeType");
        var entityDistance = entities.FirstOrDefault(e => e.type == "builtin.number");

        // ...
    }
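and, purely as an illustrative sketch of my own rather than the exact code from the repo, the rest of such a handler might parse those entities and create the requested shape;

        if ((entityShapeType != null) && (entityDistance != null))
        {
            float distance;

            // FirstOrDefaultResolvedValueOrEntity() is the helper from the
            // results classes earlier in the post.
            if (float.TryParse(
                entityDistance.FirstOrDefaultResolvedValueOrEntity(), out distance))
            {
                var primitive = GameObject.CreatePrimitive(
                    entityShapeType.FirstOrDefaultResolvedValueOrEntity() == "cube" ?
                        PrimitiveType.Cube : PrimitiveType.Sphere);

                primitive.transform.position =
                    Camera.main.transform.position +
                    (Camera.main.transform.forward * distance);
            }
        }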

and this all works fine and completes the route that goes from;

keyword recognition –> start dictation –> end dictation –> LUIS –> intent + entities –> handler in code –> action

Returning to Choices – Multi-Language & Dictation in the Cloud

I now have some code that works and it feels like the pieces are in the ‘best’ place in that I’m running as much as possible on the device and hopefully only calling the cloud when I need to. That said, if I could get the capabilities of LUIS offline and run them on the device then I’d like to do that too but it’s not something that I think you can do right now with LUIS.

However, there is one limit to what I’m currently doing which isn’t immediately obvious: it offers little possibility of supporting non-English languages, specifically on HoloLens where (as far as I know) the recognizer classes only offer English support.

So, to support other languages I’d need to do my speech to text work via some other route – I can’t rely on the DictationRecognizer alone.

As an aside, it’s worth saying that I think multi-language support would need more work than just getting the speech to text to work in another language.

I think it would also require building a LUIS model in another language but that’s something that could be done.

An alternate way of performing speech-to-text that does support multiple languages would be to bring in a cloud-powered speech-to-text API like the Cognitive Services Speech API, and I could bring that into my code here by wrapping it up as a new type of DictationSource.

That speech-to-text API has some different ways of working. Specifically, it can perform speech-to-text by;

  • Submitting an audio file in a specified format to a REST endpoint and getting back text (there’s a rough sketch of this below).
  • Opening a websocket and sending chunks of streamed speech data up to the service to get back responses.
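Of those, a very rough sketch of what the first (file-based) route might look like from Unity is below. I should stress that the endpoint, headers and response handling here are my assumptions based on the service docs at the time rather than anything from the project, so treat it as illustrative only;

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class SpeechToTextRestSketch : MonoBehaviour
{
    // NOTE: hypothetical endpoint and audio format values - check the
    // current service documentation before relying on these.
    const string endpoint =
        "https://speech.platform.bing.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US&format=simple";

    public IEnumerator RecognizeWav(byte[] wavBytes, string serviceKey, Action<string> callback)
    {
        var request = new UnityWebRequest(endpoint, UnityWebRequest.kHttpVerbPOST);

        request.uploadHandler = new UploadHandlerRaw(wavBytes);
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Ocp-Apim-Subscription-Key", serviceKey);
        request.SetRequestHeader("Content-Type", "audio/wav; codec=audio/pcm; samplerate=16000");

        yield return request.Send();

        if (!request.isError)
        {
            // The response is JSON containing the recognized text; parsing
            // of that JSON is omitted here.
            callback(request.downloadHandler.text);
        }
    }
}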

Of the two, the second has the advantage that it can be a bit smarter around detecting silence in the stream and it can also offer interim ‘hypotheses’ around what is being said before it delivers its ultimate view of what the utterance was. It can also support longer sections of speech than the file-based method.

So, this feels like a good way to go as an alternate DictationSource for my code.

However, making use of that API requires sending a stream of audio data to the cloud down a websocket in a format that is compatible with the service on the other end of the wire and that’s code I’d like to avoid writing. Ideally, it feels like the sort of code that one developer who was close to the service would write once and everyone would then re-use.

That work is already done if you’re using the service from .NET in a situation where you can make use of the client library that wraps up the service access, but I don’t think that it’s going to work for me from Unity when targeting the “Stable .NET 3.5 Equivalent” scripting runtime.

So…for this post, I’m going to leave that as a potential ‘future exercise’ that I will try to return to if time permits and I’ll update the post if I do so.

In the meantime, here’s the code.

Code

If you’re interested in the code then it’s wrapped up in a simple Unity project that’s here on github;

http://github.com/mtaulty/LUISPlayground

That code is coupled to a LUIS service which has some very basic intents and entities around creating simple Unity game objects (spheres and cubes) at a certain distance in front of the user. It’s very rough.

There are three intents inside of this service. One is intended to create objects with utterances like “I want to create a cube 2 metres away”

[image: the Create intent in the LUIS portal]

and then it’s possible to delete everything that’s been created with a simple utterance;

[image: the DeleteAll intent in the LUIS portal]

and lastly it’s possible to get rid of just the spheres/cubes with a different intent such as “get rid of all the cubes”;

[image: the DeleteType intent in the LUIS portal]

If you wanted to make the existing code run then you’d need an API endpoint and a service key for such a service, so I’ve exported the service itself from LUIS as JSON into this file in the repo;

[image: the exported JSON file in the repo]

so it should be possible to go to the LUIS portal and import that as a service;

[image: importing the JSON as a service in the LUIS portal]

and then plug in the endpoint and service key into the code here;

[image: the endpoint and service key fields in the code]