Windows 10, UWP and Sphero – Bringing 2D UWP Demo Code to HoloLens

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

I made up a demo many months ago now for showing some of the universal nature of the Windows 10 UWP platform.

The idea of the demo was to show that the UWP was as device agnostic as possible whilst still allowing for device specialisation as necessary.

The demo was to code up an app from scratch to control a Sphero ball such that the demo app had similar functionality across PC, Phone and IoT but had slightly different UI for Phone and slightly different functionality for IoT.

I’ve used this demo in many places and you can find a recording of one earlier version of it over here on Channel9 from the UK Future Decoded event back in 2015;

image

although I think on that particular day I had a failure with the part of the demo that did a bit of AllJoyn, but I’ve shown that working in a few other places so it does generally work out.

Over time, I threw lots of things into this demo such that I had an app that worked with keyboard, mouse, touch, speech, pen/ink, Xbox controller and could also be driven off simple and rotary switches attached to a Raspberry Pi.

I’ve also delivered versions of this demo where the UI;

  1. Talks directly to the Sphero over Bluetooth.
  2. Talks over AllJoyn to another app which then talks to the Sphero over Bluetooth.
  3. Talks over an Azure event hub to another app which then talks to the Sphero over Bluetooth.

and had some fun with showing those different kinds of combinations.

In more recent times, I stopped using the demo as I felt that I was becoming “the guy with the Sphero demo” and there wasn’t much more that I could realistically add into it :)

I also hit a bit of a technical snag in that code which had been quite reliable for me stopped working on the Windows 10 Anniversary Update and I hadn’t taken the time to quite figure out what had broken.

That said, I had reached out to Twitter and had this discussion with @roadz about the problem perhaps coming down to the underlying Robotkit.dll that sat underneath my code.

image

and he’d done the legwork to figure some of this out and had a version of Robotkit in source form over on GitHub here that might work better than the one that I’d been using.

Today, I picked up that code and have managed to get my demo back working on top of it although I had to make a minor, surgical hack to the piece of code around line 66 of RobotProvider.cs;

image

When I debug this code, the Sphero that is paired with my PC does not seem to show up named as “Sphero”, so I had to take out this condition to get the code to work. It might be one of those “just on my PC” scenarios, but that one code change made it possible for me to layer my own little library back on top of the Robotkit.dll and get my Sphero functionality back up and running as it had been prior to the Anniversary Update.
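For reference, the shape of that hack looks something like the sketch below. This isn’t the actual RobotProvider.cs code (the screenshot above shows the real thing, and the PairedSpheroFinder/FindCandidateAsync names here are my own), but the enumeration calls are the standard Windows RFCOMM ones and the name filter is the kind of condition that I deleted;

namespace SpheroFilterSketch
{
  using System;
  using System.Linq;
  using System.Threading.Tasks;
  using Windows.Devices.Bluetooth.Rfcomm;
  using Windows.Devices.Enumeration;

  static class PairedSpheroFinder
  {
    // finds the first paired device exposing the RFCOMM serial port service,
    // optionally applying the sort of name filter that the library used
    public static async Task<DeviceInformation> FindCandidateAsync(bool filterOnName)
    {
      var selector = RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort);
      var devices = await DeviceInformation.FindAllAsync(selector);

      // on my machine the paired ball doesn't show up with "Sphero" in its
      // name, so passing filterOnName == false is the equivalent of my hack
      return filterOnName ?
        devices.FirstOrDefault(d => d.Name.Contains("Sphero")) :
        devices.FirstOrDefault();
    }
  }
}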

It’s worth saying that the Robotkit.dll that I had was originally built for Windows 8.0, so I’d been (pleasantly) surprised that it had survived all the way to Windows 10, including being used on non-PC devices, and I wasn’t too disappointed when it finally ran out of steam on me :)

Recreating (some of) that Sphero Demo

I wanted to run through the steps to recreate some of that original Sphero demo so as to leave a basic interface which could control the Sphero using keyboard, mouse, touch and speech. For the moment, I haven’t included the pieces that I have shown in the past around using Xbox controllers, ink, AllJoyn and IoT switches to control the device, although they aren’t too hard to add back in.
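Just to give a flavour of the sort of wiring involved, the keyboard piece can be as simple as hanging a handler off the core window. This is a hedged sketch rather than the demo code itself (that’s in the GitHub repo linked below) and the KeyboardControl/Attach names are mine – it assumes a small wrapper with Rotation and Roll members along the lines of the SpheroControl class that shows up in the Kinect post further down this page;

namespace KeyboardSketch
{
  using Windows.System;
  using Windows.UI.Core;
  using Windows.UI.Xaml;

  static class KeyboardControl
  {
    // wires the arrow keys up to an already-connected Sphero wrapper
    public static void Attach(MySpheroLibrary.SpheroControl sphero)
    {
      Window.Current.CoreWindow.KeyDown += (sender, args) =>
      {
        switch (args.VirtualKey)
        {
          case VirtualKey.Left:
            sphero.Rotation = (sphero.Rotation + 350) % 360;  // 10 degrees anti-clockwise
            break;
          case VirtualKey.Right:
            sphero.Rotation = (sphero.Rotation + 10) % 360;   // 10 degrees clockwise
            break;
          case VirtualKey.Up:
            sphero.Roll(0.5f);                                // drive at half speed
            break;
          case VirtualKey.Down:
            sphero.Roll(0.0f);                                // stop
            break;
        }
      };
    }
  }
}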

I recorded a quick demo of putting these pieces back together below starting from the code that @roadz had added to GitHub.

and that (demo!) code then gets committed back to GitHub here as you’ll see if you watch the video.

The intention of putting that back together though was to see if I could get that same code working on HoloLens.

On HoloLens…

Around the time that I was showing that original demo, I would often say “this works on PC, Phone, IoT and would work on other devices like HoloLens”, but I didn’t have a device back then to actually try it out. I finally got to take the exact same code and deploy it to the HoloLens, as the screen capture below shows;

and then ran that to see if it worked with my Sphero as this video captured from the HoloLens shows;

and it works pretty well running as a 2D app but, equally, I could take this Sphero code and package it into a 3D/holographic app and perhaps come up with a more natural way of interacting with the Sphero.

That’s for a future post – for now, it’s good to have this code back running and to have tried it out on another device :)

Playing with Kinect for Windows V2 and Sphero from a Windows 8.1 Store App

This post is really under the category of “hacking for fun”. I wanted to knock up a quick demo for a talk about Windows 8.1 apps to show that it’s possible to connect to interesting bits of kit from an app, and the two “interesting” bits of kit that I came up with were;

  1. The Kinect for Windows V2 sensor.
  2. The Sphero.

and I tied them together with some JavaScript code such that I could control the rotation of a Sphero device with my left hand by moving it away from my body, and drive the Sphero with my right hand by “pushing” it away from my shoulder.

I had a few people asking me how that was put together and so I thought I’d blog that here in the manner of a “Recipe” where I’ll try and keep the code/HTML as short as possible to reproduce what I showed at the recent “Native Summit” event.

Step 1 – Beg, Borrow, Buy a Sphero

You’re going to need a Sphero. I got mine from Amazon and I’ve linked the image below to them. Other online stores are available.

image

Step 2 – Beg, Borrow, Buy a Kinect for Windows V2

You’re also going to need a Kinect for Windows V2 sensor. I got mine from Andrew at work but he’s not likely to be able to get you one, so I think you’d need to buy one. I’ve hyperlinked to the UK Microsoft Store from the image below;

image

I’m not 100% sure whether other stores are available on this one.

Step 3 – Get a Dev Environment for Windows 8.1 Apps Set Up

You’re going to need somewhere to run Visual Studio 2013 on Windows 8.1. This is most likely going to be a PC but I suspect it could also be a Mac running Windows either virtualised or dual-booted, although I should say that I haven’t tried working with Kinect and Sphero on anything other than my Dell XPS 13.

The best plan to go get bits is probably to follow the “Get Set Up” link below;

image

Step 4 – Pair Your Sphero

Once you’ve got your Sphero charged up and have it switched on (double-tap it), you need to pair it up with Windows so you go to your Bluetooth device settings;

image

and pair it up. I didn’t really have any challenges in getting that done – I think it stumbled a bit on the first attempt but worked second time around for me and has stayed working since.

Step 5 – Try an Official Sphero App

The chances are that if the Sphero app in the Windows Store doesn’t work for you then your own code isn’t likely to either, so go grab that app by clicking on the image below;

image

and give that a quick try.

Step 6 – Get the Kinect SDK

You’re also going to need the Kinect for Windows V2 SDK so go and download that and install it – linked from the picture below;

image

Step 7 – Get the Robotkit Library

I also made use of the Robotkit library. There’s possibly another or a better way of doing this but it’s the direction that I took. You can download that by going up to the official Sphero developer centre;

https://developer.gosphero.com/

and then finding your way across to this github project;

https://github.com/orbotix/Sphero-Win-SDK

and in there you’ll find Robotkit.dll which is a .NET library that you can make use of.

You’ll also find some other bits and pieces in the sample there that you might find useful.

Step 8 – Write Some Code

The demo that I showed was written in JavaScript so I’ll try and produce a minimalistic version of that here. The Robotkit.dll library is in .NET so I need to wrap something around it to turn it into a WinRT component that can be used from JavaScript.

The starting point for that is then to make a new project in Visual Studio of type “Windows Runtime Component”

image

and then add a reference to that Robotkit.dll that I got from the Sphero github project;

image

and then I just wrote a little class called SpheroControl which offered up some of the functionality that’s within that Robotkit library in as simple, short a way as I could;

namespace MySpheroLibrary
{
  using RobotKit;
  using System;
  using System.Threading.Tasks;
  using Windows.Foundation;

  public sealed class SpheroControl
  {
    Sphero sphero;
    int r;
    int g;
    int b;
    int rotation;
    float backlightBrightness;

    private SpheroControl(Sphero sphero)
    {
      this.sphero = sphero;
      this.r = this.g = this.b = 0;
      this.backlightBrightness = 0.0f;
      this.rotation = 0;
    }
    public int Red
    {
      get
      {
        return (this.r);
      }
      set
      {
        this.r = value;
        this.SetColour();
      }
    }
    public int Green
    {
      get
      {
        return (this.g);
      }
      set
      {
        this.g = value;
        this.SetColour();
      }
    }
    public int Blue
    {
      get
      {
        return (this.b);
      }
      set
      {
        this.b = value;
        this.SetColour();
      }
    }
    public float BacklightBrightness
    {
      get
      {
        return (this.backlightBrightness);
      }
      set
      {
        this.backlightBrightness = value;
        this.sphero.SetBackLED(this.backlightBrightness);
      }
    }
    void SetColour()
    {
      this.sphero.SetRGBLED(this.r, this.g, this.b);
    }
    public int Rotation
    {
      get
      {
        return (this.rotation);
      }
      set
      {
        this.rotation = value;
        this.sphero.Roll(this.rotation, 0);
      }
    }
    public void Roll(float speed)
    {
      this.sphero.Roll(this.rotation, speed);
    }
    public static IAsyncOperation<SpheroControl> GetFirstConnectedSpheroAsync()
    {
      Task<SpheroControl> task = InternalGetFirstConnectedSpheroAsync();
      return (task.AsAsyncOperation());
    }
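    // Discovery/connection handshake: wait for the provider to raise
    // DiscoveredRobotEvent for the first robot it finds, ask the provider to
    // connect to that robot and then complete the task once the matching
    // ConnectedRobotEvent fires.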
    static Task<SpheroControl> InternalGetFirstConnectedSpheroAsync()
    {
      TaskCompletionSource<SpheroControl> task = new TaskCompletionSource<SpheroControl>();

      var provider = RobotProvider.GetSharedProvider();
      EventHandler<Robot> handler = null;

      handler = (s, robot) =>
      {
        provider.DiscoveredRobotEvent -= handler;

        handler = (sender, cxnRobot) =>
        {
          provider.ConnectedRobotEvent -= handler;
          task.SetResult(new SpheroControl((Sphero)cxnRobot));
        };
        provider.ConnectedRobotEvent += handler;
        provider.ConnectRobot(robot);
      };

      provider.DiscoveredRobotEvent += handler;
      provider.FindRobots();

      return (task.Task);
    }
  }
}

Now, this was put together pretty quickly and I guess the most “tricky” part of it is the bit that’s trying to get hold of the first connected Sphero device that it can find, and that’s mostly “tricky” because I chose to inline a bunch of event handlers and have to cope with adding/removing them. There’s no error handling in there, as you’d spot.
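If you wanted to harden that up a little, one option would be to give the discovery a timeout so that the returned task can fault rather than wait forever when no Sphero shows up. This is just a sketch (it’s not in the code download) but it keeps the same discover-then-connect pattern;

// sketch only - a variant of InternalGetFirstConnectedSpheroAsync (it would
// live inside the SpheroControl class above) which gives up after a timeout
static async Task<SpheroControl> InternalGetFirstConnectedSpheroAsync(TimeSpan timeout)
{
  TaskCompletionSource<SpheroControl> completion = new TaskCompletionSource<SpheroControl>();

  var provider = RobotProvider.GetSharedProvider();
  EventHandler<Robot> handler = null;

  handler = (s, robot) =>
  {
    provider.DiscoveredRobotEvent -= handler;

    handler = (sender, cxnRobot) =>
    {
      provider.ConnectedRobotEvent -= handler;
      completion.TrySetResult(new SpheroControl((Sphero)cxnRobot));
    };
    provider.ConnectedRobotEvent += handler;
    provider.ConnectRobot(robot);
  };

  provider.DiscoveredRobotEvent += handler;
  provider.FindRobots();

  // whichever finishes first "wins" - the connection or the timeout
  var completed = await Task.WhenAny(completion.Task, Task.Delay(timeout));

  if (completed != completion.Task)
  {
    // unhook whichever handler is still attached and fault the operation
    provider.DiscoveredRobotEvent -= handler;
    provider.ConnectedRobotEvent -= handler;
    throw new TimeoutException("No Sphero was discovered and connected in time");
  }
  return (completion.Task.Result);
}

and the public GetFirstConnectedSpheroAsync wrapper could then pass a timeout through and still hand the task back to JavaScript via AsAsyncOperation.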

However, with that in place I can now create a Windows 8.1 project in JavaScript and make use of it so it’s back to Solution->Add->New Project;

image

and inside of that JavaScript project I need to reference both the MySpheroLibrary project that I’ve just made and the Kinect SDK.

image

The Kinect SDK involves native code so I need to choose a processor architecture to work with, which I can do by opening up the “Configuration Manager” and switching it to (e.g.) x64;

image

this always feels a little “weird” to me when it comes to working in JavaScript but, if we take a dependency on underlying native code, then it’s got to be done as that native code has to be built for a specific processor architecture.

The other thing I need to do is to tweak my application manifest. I need to set up;

image

and I also need to open up the manifest as an XML file and make sure that I’ve got (taken from the sample I think);

image

From there, I could pretty much leave my UI alone. The blank project template will give me this;

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>MySpheroApp</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.2.0/css/ui-dark.css" rel="stylesheet" />
    <script src="//Microsoft.WinJS.2.0/js/base.js"></script>
    <script src="//Microsoft.WinJS.2.0/js/ui.js"></script>

    <!-- MySpheroApp references -->
    <link href="/css/default.css" rel="stylesheet" />
    <script src="/js/default.js"></script>
</head>
<body>
    <p>Content goes here</p>
</body>
</html>

and that’s fine. It’s worth saying that there’s absolutely no need to use WinJS here; I’m just leaving it in place as I’m not planning to display any UI at all, in the interests of keeping things short.

The blank template also adds a bunch of code which I would strip down to this;

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (args)
    {
      initialiseSphero();
      initialiseKinect();
    };

    app.start();
})();

and then there’s a need to write those two initialisation functions and a few helper functions to wire things together. I cooked this up without any UI at all and it ended up looking as below;

(function ()
{
  "use strict";

  var app = WinJS.Application;
  var sphero = null;
  var sensor = null;
  var reader = null;
  var kinectSDK = WindowsPreview.Kinect;
  var magicNumbers =
    {
      xRangeMin: 0.1,
      xRangeMax: 0.5,
      zRangeMin: 0.2,
      zRangeMax: 0.7
    };

  // for angles 0 to 360 
  magicNumbers.xRangeMultiplier = 360.0 / (magicNumbers.xRangeMax - magicNumbers.xRangeMin);

  // for speeds 0 to 1
  magicNumbers.zRangeMultiplier = 1.0 / (magicNumbers.zRangeMax - magicNumbers.zRangeMin);

  app.onactivated = function (args)
  {
    var promise = initialiseSpheroAsync();

    promise.done(
      function ()
      {
        initialiseKinect();
      });
  };
  function initialiseSpheroAsync()
  {
    var promise = MySpheroLibrary.SpheroControl.getFirstConnectedSpheroAsync();
    promise.done(
      function (foundSphero)
      {
        sphero = foundSphero;
        sphero.backlightBrightness = 1.0;
        sphero.red = 255;
      }
    );
    return (promise);
  }
  function initialiseKinect()
  {
    sensor = kinectSDK.KinectSensor.getDefault();
    sensor.open();

    // we're interested in skellingtons...
    reader = sensor.bodyFrameSource.openReader();
    reader.onframearrived = onFrameArrived;
  }
  function onFrameArrived(e)
  {
    var frame = e.frameReference.acquireFrame();
    var body = null;
    var i = 0;
    var leftHand, rightHand, leftHip, rightShoulder;
    var leftDistance = 0;
    var rightDistance = 0;
    var bodies = null;

    // we don't always get frames...
    if (frame != null)
    {
      bodies = new Array(frame.bodyCount);
      // populate the array of bodies...
      frame.getAndRefreshBodyData(bodies);

      // try and find the first one that we have identified as tracked...
      for (i = 0; i < frame.bodyCount; i++)
      {
        if (bodies[i].isTracked)
        {
          body = bodies[i];
          break;
        }
      }
      // if we got one...
      if (body)
      {
        leftHand = getJoint(body, kinectSDK.JointType.handLeft);
        rightHand = getJoint(body, kinectSDK.JointType.handRight);
        leftHip = getJoint(body, kinectSDK.JointType.hipLeft);
        rightShoulder = getJoint(body, kinectSDK.JointType.shoulderRight);

        if (areTracked(leftHand, rightHand, leftHip, rightShoulder))
        {
          rotate(leftHand.position, leftHip.position);
          drive(rightHand.position, rightShoulder.position);
        }
      }
      frame.close();
    }
  }
  function getJoint(body, jointType)
  {
    var joint = null;

    // I'm sure there's a better way of moving through these slightly odd
    // feeling iterators in JS...
    var iter = body.joints.first();

    while (iter.hasCurrent)
    {
      if (iter.current.key === jointType)
      {
        joint = iter.current.value;
        break;
      }
      iter.moveNext();
    }
    return (joint);
  }
  function areTracked()
  {
    var tracked = true;

    for (var i = 0; ((i < arguments.length) && (tracked)) ; i++)
    {
      tracked = arguments[i] &&
        (arguments[i].trackingState === kinectSDK.TrackingState.tracked);
    }
    return (tracked);
  }
  function rotate(leftHandPosition, leftHipPosition)
  {
    var xDistance = Math.abs(leftHandPosition.x - leftHipPosition.x);
    var clampedValue = 0;

    if ((xDistance >= magicNumbers.xRangeMin) && (xDistance <= magicNumbers.xRangeMax))
    {
      clampedValue =
        (xDistance - magicNumbers.xRangeMin) * magicNumbers.xRangeMultiplier;

      sphero.rotation = clampedValue;
    }
  }
  function drive(rightHandPosition, rightShoulderPosition)
  {
    var zDistance = Math.abs(rightHandPosition.z - rightShoulderPosition.z);
    var clampedValue = 0;

    if ((zDistance >= magicNumbers.zRangeMin) && (zDistance <= magicNumbers.zRangeMax))
    {
      clampedValue =
        (zDistance - magicNumbers.zRangeMin) * magicNumbers.zRangeMultiplier;

      sphero.roll(clampedValue);
    }
  }

  app.start();

})();

and that’s all there is to the very basics of it – you could then add some UI, more functionality, error handling and so on, but it works reasonably well for me.
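For what it’s worth, those “magic numbers” just map a band of hand movement onto the Sphero’s inputs. With the values above, the left hand being (say) 0.3m out from the left hip gives a rotation of (0.3 - 0.1) * 360 / (0.5 - 0.1) = 180 degrees, the right hand being 0.45m in front of the right shoulder gives a speed of (0.45 - 0.2) * 1 / (0.7 - 0.2) = 0.5, and anything outside of those bands is simply ignored.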

That code’s here for download if you want to take it as it is.

Enjoy.