Talk at Native Summit – “Microsoft Has An App Platform With Some Interesting Bits”

I was in London speaking at the inaugural “Native Summit” event yesterday;

image

hosted at the really excellent Genesis Cinema;

this was an all-platforms “native app” event with a great line up of speakers;

image

and I put together a talk with a slightly tongue-in-cheek title of “Microsoft. Has. An. App. Platform. With. Some. Interesting. Bits” which I’d hoped might appeal to whoever was in the audience, whether they were primarily interested in Android, iOS or Windows or whether they came at the topic from more of a cross-platform approach.

I had great fun with that talk and really appreciated that the audience seemed very much alive and engaged even though we were in a darkened cinema on a warm, sunny afternoon.

The event was streamed over the web and the sessions were recorded so I’ll post the video of my talk here when the Native Summit guys have got it canned.

In the meantime, the very talented David Burton was at the event doing sketches to capture the essence of each talk and he drew this really great representation of what I talked about – linked back to his Flickr stream for the event;

image

Thanks to all who came along – some great conversations and I’ll update this post with the video of the session when I have it.

Update 1 – Session Feedback

Luke who ran the Native Summit event gave me the (anonymised) verbatim feedback from the event so I thought it’d be fun to include it here in its entirety and have a light-hearted go at responding to some of it ;-)

The Good

  • Love how fluid code demos were and how up to date the technology used was.
  • Really interesting – had never occurred to me to wire up a Kinect to a motorised glowing ball before.
  • fun, interesting, packed with things
  • Microsoft have all the toys!
  • "I am already a bit of a (tentative) MS fanboy but I thought Mike did a fantastic job of showing off the platform and I felt genuinely excited to experience the surprise from others of how well-rounded and fun the MS offering is. Mike demonstrated his experience in public speaking and delivered and informative, fun and valuable talk.
  • Live coding done right. By not being live. Snippets are the way forward. 
  • I thought Mike was a fantastic speaker, and his demos were very well thought out, well presented, and really interesting. I haven't felt this excited about Microsoft and the potential of Microsoft hardware and software in a long time. 
  • This is such a treat, I would give it a 1 if it weren't for that failed demonstration. 
  • nice demos 
  • Varied and interesting. Well pitched. Only lost a point as there was too many javascript code examples over C# – but that's just a person thing. 😉

[MT] I’m a C# guy. However, I figured that given there were a lot of Android/iOS folks in the audience then JavaScript would be more of a lingua franca for this group of people. Otherwise, I’d have shown C#.

  • The amount of stuff he showed like the Kinect in JS and the Sphero ball were amazing, he made it look very simple and provided a lot of humour. Definitely was very engaging.
  • Was good, liked the hybrid applications now possible and he also engaged the audience very well and liked the fact he was showing everything realtime on stage 
  • Great presentation and demo. 
  • Great to see different things in Windows
  • I work for a Microsoft partner, so already working on Universal Apps. Was cool to see the JavaScript support though, and he was quite a funny guy. Was nice to laugh lots.
  • Good presentation showing the wide range of development opportunities on MS platform.
  • Nice way of showing how quickly you can throw things together on the new MS platforms.

The Mixed

  • I do think he was a bit over-apologetic for MS during the panel discussion and he would have done himself a favour by abstracting his opinion from the mothership a bit, but I suspect that's a tough default to kick."

[MT] Very fair comment. I always find panels a bit of a challenge because I can’t speak for Microsoft other than to state facts where I happen to know them. So, on panels I feel like I’m always walking a line between “What does Mike think?” and “What does Microsoft think?”.

  • Interesting talk but with little native development.

[MT] Sorry, I failed you. With the exception of the first demo, where I talked about the difference between native and hybrid, *EVERYTHING* I showed was a native app. Some of them were written in JavaScript. Some were written in .NET. Some were written in a mixture of JavaScript/C#/C++. But everything was a native app calling native WinRT APIs. Nothing was being done as a hybrid web app (other than the initial hybrid web app where I tried to make this point).

  • Stick to mobile / app content.

[MT] Sorry, I failed you too. There was *NOTHING* I showed here that wasn’t an app built for the Windows/Phone app platform that could be submitted to either the Windows or Phone Store. Everything was an app. Nothing was anything else. Some things were Phone apps. Some things were Windows apps, but in my head they are all mobile apps. I’d reiterate that the stuff I was doing (e.g. playing a game with an Xbox controller) works on a small tablet.

The Ugly (well, not really)

  • Very funny, quite interesting, veered into being too salesy at points.

[MT] Ah, sorry – didn’t mean to “sell” you anything other than the idea that Windows could be an interesting place to be.

  • Just seemed like gimmicks to make Microsoft dev look interesting. Didn't make me care about Microsoft dev.

[MT] I hear what you’re saying. I did try to pull out some interesting things that I thought were a bit different about Windows/Phone app development. I could have done an hour on other topics like our multi-language approach or the way in which we handle the application process lifecycle or some such, but I wanted to provide a few shiny things. I’d be happy to talk about other aspects any time.

  • This talk was given by a humorous robot who'd been programmed with unquestionable loyalty to Microsoft.  Probably manufactured by Microsoft. Who uses the force.

[MT] Guilty as charged. When I grow up I want to be a real boy like Clippy ;-)

Kinect for Windows V2 SDK: Hello (Skeletal) World for the 3D JavaScript Windows 8.1 App Developer

Following on from my previous posts;

and, again, highlighting the official videos and samples for the Kinect for Windows V2 SDK bits;

Programming-Kinect-for-Windows-v2

I thought I’d go out on a bit of a limb and combine my lack of skill in JavaScript with my lack of skill in 3D and attempt to extend my Windows 8.1 Store app so that it could draw in 3D using JavaScript.

In order to do that, I wanted a high-level library to help out on the 3D aspects (much like I did when working in WPF) and so I went and looked at http://threejs.org/. It took a little bit of reading, but not more than about 10 minutes, to get a basic scene up and running and then to figure out how I might use it to draw what I wanted in terms of a connected skeleton.
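
For anyone who hasn’t seen three.js before, here’s a minimal sketch of the kind of “basic scene” I mean – it’s illustrative rather than the project’s actual code, although the canvas id and the (r6x-era) API calls match what I use later in the post;

var canvas = document.getElementById('canvas3d');

var scene = new THREE.Scene();

var camera = new THREE.PerspectiveCamera(
  45, canvas.width / canvas.height, 0.1, 1000);

camera.position.z = 1;
scene.add(camera);

var light = new THREE.PointLight(0xFFFFFF);
light.position.set(-1, 1, 1);
scene.add(light);

// a single sphere standing in for a skeletal joint
var sphere = new THREE.Mesh(
  new THREE.SphereGeometry(0.03, 32, 32),
  new THREE.MeshLambertMaterial({ color: 'red' }));

scene.add(sphere);

var renderer = new THREE.WebGLRenderer({ antialias: true, canvas: canvas });
renderer.setClearColor('grey');
renderer.render(scene, camera);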

There were more parallels between 2D/3D JavaScript and 2D/3D .NET in that I again ended up with a common class serving as a base class for a “2D body drawer” and a “3D body drawer”, although I found it slightly more of a challenge than I did in the WPF world.

Here’s a little video of the code running showing both 2D and 3D drawing;

In terms of having both 2D and 3D drawing in the same app, I ended up with 2 canvas instances that I switch on and off because (AFAIK) a given canvas can hand out either a 2D context or a 3D (WebGL) context but you can’t switch between them (based on: http://msdn.microsoft.com/en-us/library/ie/ff975238(v=vs.85).aspx).
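
As a quick illustration of that restriction (this is my sketch rather than code from the app) – once a canvas has handed out a context of one type, asking it for the other type just returns null rather than switching;

var canvas = document.createElement('canvas');

var ctx2d = canvas.getContext('2d');                  // first request decides the context type
var ctxGl = canvas.getContext('experimental-webgl');  // returns null - this canvas is now 2D-only

console.log(ctx2d !== null);  // true
console.log(ctxGl === null);  // true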

So, my UI became as below, which is pretty much the same as in the previous post except that I have a second Canvas and a basic toggle button on the AppBar to switch between 2D/3D mode by showing/hiding the canvases. The code I have isn’t capable of switching halfway through running though – it requires stopping the frame reader and releasing the sensor first.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>App200</title>

    <script src="//Microsoft.WinJS.2.0/js/base.js"></script>

    <link href="/css/default.css" rel="stylesheet" />
    <script src="js/three.min.js"></script>
    <script src="js/Iterable.js"></script>
    <script src="js/JointConnection.js"></script>
    <script src="js/BodyDrawerBase.js"></script>
    <script src="js/CanvasBodyDrawer.js"></script>
    <script src="js/3jsBodyDrawer.js"></script>
    <script src="js/KinectControl.js"></script>
    <script src="js/UIHandler.js"></script>
    <script src="js/default.js"></script>
    <script src="//Microsoft.WinJS.2.0/js/ui.js" type="text/javascript"></script>
    <link href="//Microsoft.WinJS.2.0/css/ui-dark.css" rel="stylesheet" type="text/css">
</head>
<body>
    <!-- NB: setting this to 1920x1080 but CSS then scales it to the available space -->
    <!-- took some direction from http://stackoverflow.com/questions/2588181/canvas-is-stretched-when-using-css-but-normal-with-width-height-properties -->
    <canvas id="canvas3d" width="1920" height="1080"></canvas>
    <canvas id="canvas2d" width="1920" height="1080"></canvas>
    <div id="appBar" data-win-control="WinJS.UI.AppBar" data-win-options="{ sticky:true }">
        <button id="chkThreeD" data-win-control="WinJS.UI.AppBarCommand" data-win-options="{label:'3D', selected:true, type:'toggle'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onGetSensor()"
                data-win-options="{icon:'camera', label:'get sensor', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onOpenReader()"
                data-win-options="{icon:'play', label:'open reader', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onCloseReader()"
                data-win-options="{icon:'stop', label:'close reader', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onReleaseSensor()"
                data-win-options="{icon:'closepane', label:'release sensor', section:'global', type:'button'}"></button>
    </div>
</body>
</html>

and you can no doubt spot the include for three.min.js to help out on the 3D drawing. I then have a bit of code behind this UI which is much like it was in the previous post except that it does some basic work to toggle the visibility of the canvases depending on the toggle switch;

(function ()
{
  "use strict";

  var UIHandler = WinJS.Class.define(
    function ()
    {
    },
    {
      onGetSensor: function ()
      {
        var chkThreeD = document.getElementById('chkThreeD').winControl;
        var threeD = chkThreeD.selected;
        var canvas2d = document.getElementById('canvas2d');
        var canvas3d = document.getElementById('canvas3d');
        var canvas = threeD ? canvas3d : canvas2d;
        var hideCanvas = threeD ? canvas2d : canvas3d;

        canvas.style.visibility = 'visible';
        hideCanvas.style.visibility = 'hidden';

        this._controller = new Sample.KinectControl(
          function ()
          {
            return (
              threeD ? new Sample.ThreeJsBodyDrawer(canvas) : new Sample.CanvasBodyDrawer(canvas));
          }
        );

        this._controller.getSensor();
      },
      onOpenReader: function ()
      {
        this._controller.openReader();
      },
      onCloseReader: function ()
      {
        this._controller.closeReader();
      },
      onReleaseSensor: function ()
      {
        this._controller.releaseSensor();
      },
      _controller: null
    }
  );

  WinJS.Namespace.define(
    'Sample',
    {
      UIHandler: new UIHandler()
    });

})();

and it instantiates my KinectControl ‘class’, passing it a factory function that returns either a CanvasBodyDrawer or a ThreeJsBodyDrawer depending on whether we’re in 2D or 3D mode.

At this point I started to try and define a bit of commonality between drawing to a Canvas in 2D and using three.js to draw to a Canvas in 3D and I bumped up against a few hurdles;

  1. The 2D drawing model is largely simpler than the 3D one in that a context is used to draw circles/lines at various co-ordinates on screen.
  2. The 3D drawing model is more about building up a scene of elements which can later be retrieved and manipulated – there’s a short contrast sketch just below this list.
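
To make that concrete, here’s a rough, illustrative contrast between the two models (the scene/renderer/camera variables are assumed to have been set up as in the earlier three.js snippet);

// 2D canvas - 'immediate mode'. Draw a circle and the only way to move it
// next frame is to clear the canvas and draw it again.
var ctx = document.getElementById('canvas2d').getContext('2d');
ctx.beginPath();
ctx.arc(100, 100, 10, 0, 2 * Math.PI);
ctx.fill();

// three.js - a retained scene. Add a sphere once, keep hold of it and move it
// around on later frames before re-rendering.
var sphere = new THREE.Mesh(
  new THREE.SphereGeometry(0.03, 32, 32),
  new THREE.MeshLambertMaterial({ color: 'red' }));

scene.add(sphere);
sphere.position.set(0.1, 0.2, -0.5);
renderer.render(scene, camera);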

The other thing that I hit was more performance related in that;

  1. I found that in the 2D model I could get away with clearing the entire Canvas and then re-drawing all circles and the connecting lines between them for each frame of data off the sensor.
  2. I found that the 3D model was much less forgiving and I needed to be more reasonable: draw the 3D spheres once and then move them around. I also got much, much better performance by using the capability that three.js has to draw lots of line segments in one batch (sketched just below) rather than treating my connections between skeletal points as separate lines.
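
Here’s the batching idea sketched out with the same (r6x-era) three.js API that the real code below uses – boneEndpoints is just a stand-in for the per-frame list of joint pairs;

var lineGeometry = new THREE.Geometry();

boneEndpoints.forEach(function (bone)
{
  // in LinePieces mode, every pair of vertices becomes an independent segment
  lineGeometry.vertices.push(
    new THREE.Vector3(bone.start.x, bone.start.y, bone.start.z),
    new THREE.Vector3(bone.end.x, bone.end.y, bone.end.z));
});

var skeletonLines = new THREE.Line(
  lineGeometry,
  new THREE.LineBasicMaterial({ color: 0x000000 }),
  THREE.LinePieces);

scene.add(skeletonLines);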

Because of this, I changed my KinectControl class somewhat from the previous post such that the _onFrameArrived function essentially iterates around its set of 6 body-drawing instances and calls a drawFrame/clearFrame method depending on whether that particular body is being tracked by the Kinect sensor or not. This is different from the previous post, where the whole Canvas was always cleared before the tracked bodies were drawn. It leans towards the 3D model in that each object responsible for drawing a single body has a ‘memory’ of the elements it has drawn and can re-draw them by moving them, or clear them entirely from the scene, without having an impact on other drawn bodies;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var constants = {
    bodyCount : 6
  };

  var kinectControl = WinJS.Class.define(
    function (bodyDrawerFactory)
    {
      this._bodyDrawerFactory = bodyDrawerFactory;
    },
    {
      getSensor : function()
      {
        var bodyCount = 0;

        this._sensor = nsKinect.KinectSensor.getDefault();
        this._sensor.open();

        this._bodies = new Array(constants.bodyCount);
        this._bodyDrawers = new Array(constants.bodyCount);

        for (bodyCount = 0; bodyCount < constants.bodyCount; bodyCount++)
        {
          this._bodyDrawers[bodyCount] = this._bodyDrawerFactory();
          this._bodyDrawers[bodyCount].init(bodyCount, this._sensor);
        }
      },
      openReader : function()
      {
        this._boundHandler = this._onFrameArrived.bind(this);
        this._reader = this._sensor.bodyFrameSource.openReader();
        this._reader.addEventListener('framearrived', this._boundHandler);
      },
      closeReader : function()
      {
        this._reader.removeEventListener('framearrived', this._boundHandler);
        this._boundHandler = null;
        this._reader.close();
        this._reader = null;
      },
      releaseSensor : function()
      {
        this._bodyDrawers = null;
        this._bodies = null;
        this._sensor.close();
        this._sensor = null;
      },
      _onFrameArrived : function(e)
      {
        var frame = e.frameReference.acquireFrame();
        var i = 0;

        if (frame)
        {
          frame.getAndRefreshBodyData(this._bodies);

          for (i = 0; i < constants.bodyCount; i++)
          {
            if (this._bodies[i].isTracked)
            {
              this._bodyDrawers[i].drawFrame(this._bodies[i]);
            }
            else
            {
              this._bodyDrawers[i].clearFrame();
            }
          }
          frame.close();
        }
      },
      _boundHandler:null,
      _bodyDrawerFactory : null,
      _sensor: null,
      _reader: null,
      _bodyDrawers: null,
      _bodies : null
    }
  );

  WinJS.Namespace.define('Sample',
    {
      KinectControl : kinectControl
    }
  );

})();

Because of the 2D/3D drawing differences, I found that the ‘base class’ abstraction I built to try and represent the commonality of both approaches is a bit clunky, but I ended up with this ‘class’, which wasn’t part of the previous post;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var bodyDrawerBase = WinJS.Class.define(
    function ()
    {
    },
    {
      init: function (index, sensor)
      {
        this._index = index;
        this._sensor = sensor;
      },
      drawFrame: function (body)
      {
        // could almost certainly get this all done in one pass.
        var jointPositions = this._drawJoints(body);
        this._drawLines(jointPositions);
        this._drawFrameDone();
      },
      _mapPoint: function (point)
      {
        return (point);
      },
      clearFrame : function()
      {
        throw new Error('Abstract base class method call');
      },
      _drawJoint : function(jointType, mappedPoint, isLeaf, color)
      {
        throw new Error('Abstract base class method call');
      },
      _drawConnectionBetweenJoints:function(jointPosition1, jointPosition2, lineColor)
      {
        throw new Error('Abstract base class method call');
      },
      _drawJoints: function(body)
      {
        var that = this;
        var jointPositions = {};

        Iterable.forEach(body.joints,
          function (keyValuePair)
          {
            var jointType = keyValuePair.key;
            var joint = keyValuePair.value;
            var isTracked = joint.trackingState === nsKinect.TrackingState.tracked;    
            var mappedPoint = that._mapPoint(joint.position);

            if (that._isJointForDrawing(joint, mappedPoint))
            {
              that._drawJoint(
                jointType,
                mappedPoint,
                that._isLeaf(jointType),
                isTracked ? bodyDrawerBase._colors[that._index] : bodyDrawerBase._inferredColor);

              jointPositions[jointType] = mappedPoint;
            }
            else
            {
              that._ensureJointNotDrawn(jointType);
            }
          }
        );
        return (jointPositions);
      },
      _ensureJointNotDrawn : function(jointType)
      {

      },
      _drawFrameDone: function ()
      {
      },
      _drawLines: function(jointPositions)
      {
        var that = this;

        bodyDrawerBase._jointConnections.forEach(
          function (jointConnection)
          {
            jointConnection.forEachPair(
              function (j1, j2)
              {
                // do we have this pair recorded in our positions? 
                // i.e. have we drawn them?
                if (jointPositions[j1] && jointPositions[j2])
                {
                  that._drawConnectionBetweenJoints(
                    jointPositions[j1], jointPositions[j2], bodyDrawerBase._lineStyle);
                }
              }
            );
          }
        );
      },
      _isLeaf: function(jointType)
      {
        var leafs = [nsKinect.JointType.head, nsKinect.JointType.footLeft, nsKinect.JointType.footRight];
        return (leafs.indexOf(jointType) !== -1);
      },
      _isJointForDrawing: function(joint, point)
      {
        return (
          (joint.trackingState !== nsKinect.TrackingState.notTracked) &&
          (point.x !== Number.NEGATIVE_INFINITY) &&
          (point.y !== Number.POSITIVE_INFINITY));
      },
      _index : -1,
      _sensor : null
    },
    {
      _colors: ['red', 'green', 'blue', 'yellow', 'purple', 'orange'],
      _lineColor: 'black',
      _inferredColor: 'grey',
      _lineStyle : 'black',
      _jointConnections:
        [
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.spineBase, 2),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.shoulderLeft, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.shoulderRight, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.hipLeft, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.hipRight, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.neck, 2),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.spineMid, nsKinect.JointType.spineShoulder, nsKinect.JointType.neck),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.shoulderLeft, nsKinect.JointType.spineShoulder, nsKinect.JointType.shoulderRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.hipLeft, nsKinect.JointType.spineBase, nsKinect.JointType.hipRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.handTipLeft, nsKinect.JointType.handLeft),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.handTipRight, nsKinect.JointType.handRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.wristLeft, nsKinect.JointType.thumbLeft),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.wristRight, nsKinect.JointType.thumbRight)
        ]
    }
  );

  WinJS.Namespace.define('Sample',
    {
      BodyDrawerBase : bodyDrawerBase
    });

})();

I can then use WinJS.Class.derive to build out a version of my previous post’s CanvasBodyDrawer class which derives from this base class;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var constants =
  {
    circleLeafRadius: 30,
    circleNonLeafRadius: 10,
    lineWidth: 3,
    bodyCount: 6
  };

  var canvasBodyDrawer = WinJS.Class.derive(Sample.BodyDrawerBase,
    function (canvas)
    {
      this._canvas = canvas;
    },
    {
      init: function (index, sensor)
      {
        Sample.BodyDrawerBase.prototype.init.call(this, index, sensor);

        this._sensorColourFrameDimensions = {};

        this._sensorColourFrameDimensions.width =
          this._sensor.colorFrameSource.frameDescription.width;

        this._sensorColourFrameDimensions.height =
          this._sensor.colorFrameSource.frameDescription.height;
      },
      clearFrame: function ()
      {
        // we don't clear because we can't clear on a per-frame basis. we
        // flag the fact we've been asked to clear.
        canvasBodyDrawer._drawCount++;
      },
      _drawFrameDone: function ()
      {
        // at the end of the frame we also flag that we've drawn.
        canvasBodyDrawer._drawCount++;
      },
      _clearBeforeFirstFrame: function ()
      {
        var context;

        // if we've done 6 draw/clears then it must be time to really clear
        // the lot before the next 6 draw/clears.
        if (canvasBodyDrawer._drawCount >= constants.bodyCount)
        {
          context = this._getContext();
          context.clearRect(0, 0, this._canvas.width, this._canvas.height);
          canvasBodyDrawer._drawCount = 0;
        }
      },
      _getContext: function ()
      {
        return (this._canvas.getContext('2d'));
      },
      _mapPoint: function (point)
      {
        var colourPoint = this._sensor.coordinateMapper.mapCameraPointToColorSpace(
          point);

        colourPoint.x *= this._canvas.width / this._sensorColourFrameDimensions.width;
        colourPoint.y *= this._canvas.height / this._sensorColourFrameDimensions.height;

        return (colourPoint);
      },
      _drawJoint: function (jointType, mappedPoint, isLeaf, color)
      {
        var context = this._getContext();

        this._clearBeforeFirstFrame();

        context.fillStyle = color;

        context.beginPath();

        context.arc(
          mappedPoint.x,
          mappedPoint.y,
          isLeaf ? constants.circleLeafRadius : constants.circleNonLeafRadius,
          0,
          2 * Math.PI,
          false);

        context.fill();
        context.stroke();
        context.closePath();
      },
      _drawConnectionBetweenJoints: function (jointPosition1, jointPosition2, lineColor)
      {
        var context = this._getContext();
        context.strokeStyle = lineColor;
        context.lineWidth = constants.lineWidth;

        context.beginPath();
        context.moveTo(jointPosition1.x, jointPosition1.y);
        context.lineTo(jointPosition2.x, jointPosition2.y);
        context.stroke();
        context.closePath();
      },
      _canvas: null,
      _sensorColourFrameDimensions: null
    },
    {
      _drawCount : 0
    }
  );

  WinJS.Namespace.define('Sample',
    {
      CanvasBodyDrawer: canvasBodyDrawer
    });

})();

and then I wrote a new 3D drawing derivation using three.js to do the 3D work for me. This fits the base class better than the 2D one, which I felt I shoe-horned into a new shape from where it sat in the previous blog post;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var constants =
  {
    cameraFieldOfView: 45,
    nearPlaneDistance : 0.1,
    farPlaneDistance: 1000,
    circleRadius: 0.03,
    leafScale: 3.0
  };

  var threeJsBodyDrawer = WinJS.Class.derive(Sample.BodyDrawerBase,
    function (canvas)
    {
      threeJsBodyDrawer._canvas = canvas;
    },
    {
      init: function (index, sensor)
      {
        Sample.BodyDrawerBase.prototype.init.call(this, index, sensor);

        this._drawnJoints = {};

        threeJsBodyDrawer._initScene();
      },
      clearFrame: function ()
      {
        var jointType;

        this._clearLine();

        for (jointType in this._drawnJoints)
        {
          threeJsBodyDrawer._scene.remove(this._drawnJoints[jointType]);
        }
        this._drawnJoints = {};
        threeJsBodyDrawer._renderLoop();
      },
      _clearLine : function()
      {
        if (this._drawnLine)
        {
          threeJsBodyDrawer._scene.remove(this._drawnLine);
          this._drawnLine = null;
        }
      },
      _mapPoint : function(point)
      {
        point.z = 0 - point.z;
        return(point);
      },
      _drawJoint: function (jointType, mappedPoint, isLeaf, color)
      {
        var sphere = this._drawnJoints[jointType];
        var material, scale;

        this._clearLine();

        if (!sphere)
        {
          sphere = new THREE.Mesh(threeJsBodyDrawer._sphereGeometry, material);
          scale = isLeaf ? constants.leafScale : 1.0;
          sphere.scale.set(scale, scale, scale);
          threeJsBodyDrawer._scene.add(sphere);

          this._drawnJoints[jointType] = sphere;
        }
        // ensure it's using the right material - can change between frames if joints
        // go from inferred/tracked.
        material = threeJsBodyDrawer._makeSphereMaterial(color);
        sphere.material = material;
        sphere.position.set(mappedPoint.x, mappedPoint.y, mappedPoint.z);
        this._drawnJoints[jointType] = sphere;
      },
      _ensureJointNotDrawn : function(jointType)
      {
        var sphere = this._drawnJoints[jointType];

        if (sphere)
        {
          threeJsBodyDrawer._scene.remove(sphere);
          delete this._drawnJoints[jointType];
        }
      },
      _drawConnectionBetweenJoints: function (jointPosition1, jointPosition2, lineColor)
      {
        // Rather than draw many lines (which made the perf horrible), batch them up
        // here into one line (which makes the perf nice :-)).
        if (!this._pendingLineGeometry)
        {
          this._pendingLineGeometry = new THREE.Geometry();
        }
        this._pendingLineGeometry.vertices.push(
          new THREE.Vector3(jointPosition1.x, jointPosition1.y, jointPosition1.z),
          new THREE.Vector3(jointPosition2.x, jointPosition2.y, jointPosition2.z));
      },
      _drawFrameDone : function()
      {
        // got a line waiting to draw?
        if (this._pendingLineGeometry)
        {
          this._drawnLine = new THREE.Line(this._pendingLineGeometry, threeJsBodyDrawer._lineMaterial,
            THREE.LinePieces);

          threeJsBodyDrawer._scene.add(this._drawnLine);

          this._pendingLineGeometry = null;
        }
        threeJsBodyDrawer._renderLoop();
      },
      _drawnLine:null,
      _pendingLineGeometry : null,
      _drawnJoints: null,
      _canvas: null
    },
    {
      _makeSphereMaterial : function(color)
      {
        if (!threeJsBodyDrawer._sphereMaterials[color])
        {
          threeJsBodyDrawer._sphereMaterials[color] = new THREE.MeshLambertMaterial(
            {
              color: color
            }
          );
        }
        return (threeJsBodyDrawer._sphereMaterials[color]);
      },
      _initScene: function (canvas)
      {
        var light;

        if (!threeJsBodyDrawer._scene)
        {
          threeJsBodyDrawer._scene = new THREE.Scene();

          threeJsBodyDrawer._camera =
            new THREE.PerspectiveCamera(
              constants.cameraFieldOfView,
              threeJsBodyDrawer._canvas.width / threeJsBodyDrawer._canvas.height,
              constants.nearPlaneDistance,
              constants.farPlaneDistance);

          threeJsBodyDrawer._camera.position.z = 1;
          threeJsBodyDrawer._scene.add(threeJsBodyDrawer._camera);

          light = new THREE.PointLight(0xFFFFFF);
          light.position.set(-1, 1, 1);
          threeJsBodyDrawer._scene.add(light);

          threeJsBodyDrawer._renderer = new THREE.WebGLRenderer(
            {
              antialias : true,
              canvas: threeJsBodyDrawer._canvas
            }
          );

          threeJsBodyDrawer._renderer.setClearColor('grey');
          threeJsBodyDrawer._renderer.clear();
        }
      },
      _renderLoop: function ()
      {
        threeJsBodyDrawer._renderer.render(
          threeJsBodyDrawer._scene, threeJsBodyDrawer._camera);
      },
      _lineMaterial: new THREE.LineBasicMaterial({ color: 0x000000 }),
      _sphereGeometry: new THREE.SphereGeometry(constants.circleRadius, 32, 32),
      _sphereMaterials: {},
      _canvas: null,
      _renderer: null,
      _scene: null,
      _camera: null
    }
  );

  WinJS.Namespace.define('Sample',
    {
      ThreeJsBodyDrawer: threeJsBodyDrawer
    });

})();

and the rest of the code is identical to what I listed out in the previous post so I won’t repeat that here.

That code is here for download if you wanted to try it out or have a poke around in it.

In terms of getting a skeleton drawn in JavaScript I was pretty impressed by how little code it takes and how relatively high level that code is.

In terms of this series of posts, I’ve spent a bit of time experimenting with skeletal data in a few different development environments and so what I’d like to do next is to look at some of the other Kinect data sources that I haven’t experimented with.

More to come…

Kinect for Windows V2 SDK: Hello (Skeletal) World for the JavaScript Windows 8.1 App Developer

Following on from my previous posts;

and, again, highlighting the official videos and samples for the Kinect for Windows V2 SDK bits;

Programming-Kinect-for-Windows-v2

I thought I’d continue my own journey along the Windows 8.1 app path. In the previous post, I talked about the Kinect SDK APIs being WinRT APIs which means that they are available to app developers working in the different technologies for building Windows 8.1 native apps – namely C++, .NET and JavaScript.

In the previous post, I moved my C#/.NET code across from the desktop world of WPF into the Windows Store app world of WinRT.

In this post, I thought I’d see what it was like to take that port and move it away from .NET altogether and build similar functionality in JavaScript.

In doing that, I should say that I’m a bit of a basic-level JavaScript developer and I haven’t been writing much JavaScript in the past few months. I can write it but it’s a little like speaking a foreign language to me and I have to exert more energy into thinking about how to get something expressed.

The other thing that I’d say is that, because I already had some C# code, I took the approach of largely porting it to JavaScript rather than re-thinking the whole thing, and so the JavaScript ended up following a similar structure to my C# code. The last point I’d add is that I brought in little pieces of the WinJS library in order to;

  1. Get the app up and running and to provide an AppBar control that I can use to place a few buttons.
  2. Provide the infrastructure for defining JavaScript “classes” – i.e. to provide a bit of a veneer of constructors/instance members/static members over the mechanisms that exist in JavaScript.

There’s absolutely no need to make use of WinJS to build out Windows Store app code in JavaScript – you could leave WinJS out of the picture altogether.
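
For anyone who hasn’t come across those WinJS pieces, here’s the pattern in miniature with a made-up Greeter “class” purely for illustration – WinJS.Class.define takes a constructor, an object of instance members and an object of static members, and WinJS.Namespace.define hangs the result off a “global” object;

var Greeter = WinJS.Class.define(
  function (name)                 // constructor
  {
    this._name = name;
  },
  {                               // instance members
    greet: function ()
    {
      return ('hello ' + this._name);
    },
    _name: null
  },
  {                               // static members
    version: 1
  });

WinJS.Namespace.define('Sample',
  {
    Greeter: Greeter
  });

// usage
var g = new Sample.Greeter('kinect');
g.greet();                        // 'hello kinect'
Sample.Greeter.version;           // 1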

Here’s the app code up and running with smooth performance on tracking a skeleton;

In terms of getting this going, I made a blank app project in JavaScript and made sure that I configured up my app’s manifest to allow access to the webcam and microphone (this is important and easy to forget – I’ve forgotten it a few times and then scratched my head);

image

and, just like in .NET, I added a “reference” to the WinRT library for Kinect for Windows V2;

image

and, just like in .NET, this means changing to a project configuration that’s targeting a specific processor architecture so I did that via the Configuration Manager;

image

and then I set up a basic UI with a piece of HTML that contains a Canvas element and a div that provides a WinJS AppBar control. On the AppBar I placed the same 4 buttons that I’d used in my previous blog post and wired them up to call particular functions.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>App200</title>

    <script src="//Microsoft.WinJS.2.0/js/base.js"></script>

    <link href="/css/default.css" rel="stylesheet" />
    <script src="js/Iterable.js"></script>
    <script src="js/JointConnection.js"></script>
    <script src="js/CanvasBodyDrawer.js"></script>
    <script src="js/KinectControl.js"></script>
    <script src="js/UIHandler.js"></script>
    <script src="js/default.js"></script>
	<script src="//Microsoft.WinJS.2.0/js/ui.js" type="text/javascript"></script>
	<link href="//Microsoft.WinJS.2.0/css/ui-dark.css" rel="stylesheet" type="text/css">
</head>
<body>
    <!-- NB: setting this to 1920x1080 but CSS then scales it to the available space -->
    <!-- took some direction from http://stackoverflow.com/questions/2588181/canvas-is-stretched-when-using-css-but-normal-with-width-height-properties -->
    <canvas id="drawCanvas" width="1920" height="1080">
    </canvas>
	<div id="appBar" data-win-control="WinJS.UI.AppBar" data-win-options="{ sticky:true }">
		<button data-win-control="WinJS.UI.AppBarCommand" 
                onclick="Sample.UIHandler.onGetSensor(document.getElementById('drawCanvas'))"
                data-win-options="{icon:'camera', label:'get sensor', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onOpenReader()"
                data-win-options="{icon:'play', label:'open reader', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onCloseReader()"
                data-win-options="{icon:'stop', label:'close reader', section:'global', type:'button'}"></button>
        <button data-win-control="WinJS.UI.AppBarCommand"
                onclick="Sample.UIHandler.onReleaseSensor()"
                data-win-options="{icon:'closepane', label:'release sensor', section:'global', type:'button'}"></button>
    </div>
</body>
</html>

This file becomes the start (and only) page for my project and lives in my default.html file.

One thing I’d say about it is that I spent a little time figuring out how the HTML5 Canvas element deals with the difference between its stated width and height and the actual width and height that the element ends up with at runtime once CSS is applied to it. Without a little bit of care, the Canvas can end up at a large actual size while still working with its default coordinate size of 300×150, scaling its content up and making for a low-fidelity, pixelated display.

I took some guidance on that from this post;

http://stackoverflow.com/questions/2588181/canvas-is-stretched-when-using-css-but-normal-with-width-height-properties 
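
In code terms, the distinction looks something like this (illustrative only) – the width/height attributes set the canvas’s backing/coordinate size while CSS merely scales the rendered result;

var canvas = document.getElementById('drawCanvas');

canvas.width = 1920;            // the coordinate space that the drawing code works in
canvas.height = 1080;

canvas.style.width = '100%';    // CSS sizing - scales the bitmap that gets produced
canvas.style.height = '100%';   // without changing the coordinate space

// leave width/height at the default 300x150 and that small bitmap gets
// stretched to fill the element - hence the pixelated result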

Beyond that, this file includes my default.js file which does very little (including skipping the issue of app lifecycle management) other than making sure that WinJS processes the WinJS controls on the page (in my case, the AppBar);

(function ()
{
  "use strict";

  var app = WinJS.Application;
  var activation = Windows.ApplicationModel.Activation;

  app.onactivated = function (args)
  {
    if (args.detail.kind === activation.ActivationKind.launch)
    {
      var promise = WinJS.UI.processAll();

      promise.done(
        function()
        {
          var appBar = document.getElementById('appBar');
          appBar.winControl.show();
        }
      );

      args.setPromise(promise);
    }
  };

  app.start();

})();

So, there’s really not much to it. The button handlers specified in my default.html file perform a very similar role to the “code behind” that I had in my previous XAML-based post in that they delegate all the work down to a KinectControl class that I have placed into a “namespace” called Sample, which is also where I place this UIHandler class;

(function ()
{
  "use strict";

  var UIHandler = WinJS.Class.define(
    function ()
    {
      this._controller = new Sample.KinectControl(
        function ()
        {
          return (new Sample.CanvasBodyDrawer());
        },
        Sample.CanvasBodyDrawer.clearFrames
      );
    },
    {
      onGetSensor: function (canvas)
      {
        Sample.CanvasBodyDrawer.canvas = canvas;
        this._controller.getSensor();
      },
      onOpenReader: function ()
      {
        this._controller.openReader();
      },
      onCloseReader: function ()
      {
        this._controller.closeReader();
      },
      onReleaseSensor: function ()
      {
        this._controller.releaseSensor();
      },
      _controller: null
    }
  );

  WinJS.Namespace.define(
    'Sample',
    {
      UIHandler: new UIHandler()
    });

})();

and so here, I publish into the global namespace an instance of an “object” named Sample.UIHandler which provides methods for the UI buttons to call and largely delegates them down to an instance of a Sample.KinectControl object. That object is constructed with 2 dependencies;

  • A factory function that knows how to instantiate a component that knows how to draw bodies.
  • A function that knows how to clear the canvas.

If you’ve not seen WinJS.Class.define before then the 3 args above are essentially – (constructor, object containing ‘instance’ members, object containing ‘static’ members) – and WinJS.Namespace.define just creates a ‘global’ object, named Sample.UIHandler in this case. The UIHandler makes use of a KinectControl class which is a very literal port of a class I had in the .NET world;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var constants = {
    bodyCount : 6
  };

  var kinectControl = WinJS.Class.define(
    function (bodyDrawerFactory, clearCanvas)
    {
      this._bodyDrawerFactory = bodyDrawerFactory;
      this._clearCanvas = clearCanvas;
    },
    {
      getSensor : function()
      {
        var bodyCount = 0;

        this._sensor = nsKinect.KinectSensor.getDefault();
        this._sensor.open();

        this._bodies = new Array(constants.bodyCount);
        this._bodyDrawers = new Array(constants.bodyCount);

        for (bodyCount = 0; bodyCount < constants.bodyCount; bodyCount++)
        {
          this._bodyDrawers[bodyCount] = this._bodyDrawerFactory();
          this._bodyDrawers[bodyCount].init(bodyCount, this._sensor);
        }
      },
      openReader : function()
      {
        this._boundHandler = this._onFrameArrived.bind(this);
        this._reader = this._sensor.bodyFrameSource.openReader();
        this._reader.addEventListener('framearrived', this._boundHandler);
      },
      closeReader : function()
      {
        this._reader.removeEventListener('framearrived', this._boundHandler);
        this._reader.close();
        this._reader = null;
      },
      releaseSensor : function()
      {
        this._sensor.close();
        this._sensor = null;
      },
      _onFrameArrived : function(e)
      {
        var frame = e.frameReference.acquireFrame();
        var i = 0;

        if (frame)
        {
          this._clearCanvas();

          frame.getAndRefreshBodyData(this._bodies);

          for (i = 0; i < constants.bodyCount; i++)
          {
            if (this._bodies[i].isTracked)
            {
              this._bodyDrawers[i].drawFrame(this._bodies[i]);
            }
          }
          frame.close();
        }
      },
      _boundHandler:null,
      _clearCanvas: null,
      _bodyDrawerFactory : null,
      _sensor: null,
      _reader: null,
      _bodyDrawers: null,
      _bodies : null
    }
  );

  WinJS.Namespace.define('Sample',
    {
      KinectControl : kinectControl
    }
  );

})();

This class takes a little factory function at construction time and uses it to create 6 “body drawer” instances which take responsibility for drawing any/all of the bodies being tracked by the sensor onto the screen in different colours. It also takes a function that knows how to clear the canvas.

The function here is a bit simpler than what I used in the previous post in the .NET world because of differences between the XAML Canvas and the HTML5 Canvas. The XAML Canvas is a collection of child elements that draw themselves, whereas the HTML5 Canvas is more of a direct-draw model, and I don’t think it’s easy to ask an HTML5 Canvas to remove elements that have previously been drawn – although you could try overwriting them with the background colour or clearing specific rectangles.

I didn’t take that route. The implication is that, whereas in the XAML world I added elements to the Canvas and later removed them, in the HTML5 world I clear the entire Canvas between each frame and then redraw up to 6 bodies, which is what happens in that _onFrameArrived method above (the drawing itself being done body by body).
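
The shape of that approach, stripped right down (renderFrame and drawBody here are just stand-ins rather than functions from the project);

var canvas = document.getElementById('drawCanvas');
var ctx = canvas.getContext('2d');

function renderFrame(trackedBodies)
{
  // wipe everything from the previous frame...
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  // ...then redraw each tracked body from scratch
  trackedBodies.forEach(function (body)
  {
    drawBody(ctx, body);   // circles for the joints, lines for the connections
  });
}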

The CanvasBodyDrawer class looks as below and is, again, a fairly literal port of the same class in the previous, .NET-based post;

(function ()
{
  "use strict";

  var nsKinect = WindowsPreview.Kinect;

  var constants =
  {
    circleLeafRadius: 30,
    circleNonLeafRadius: 10,
    lineWidth: 3
  };

  var canvasBodyDrawer = WinJS.Class.define(
    function ()
    {
    },
    {
      init: function (index, sensor)
      {
        this._index = index;
        this._sensor = sensor;
        this._sensorColourFrameDimensions = {};

        this._sensorColourFrameDimensions.width =
          this._sensor.colorFrameSource.frameDescription.width;

        this._sensorColourFrameDimensions.height =
          this._sensor.colorFrameSource.frameDescription.height;
      },
      drawFrame: function (body)
      {
        // could almost certainly get this all done in one pass.
        var jointPositions = this._drawJoints(body);

        this._drawLines(jointPositions);
      },
      _drawJoints: function(body)
      {
        var that = this;
        var jointPositions = {};

        Iterable.forEach(body.joints,
          function (keyValuePair)
          {
            var jointType = keyValuePair.key;
            var joint = keyValuePair.value;
            var isTracked = joint.trackingState === nsKinect.TrackingState.tracked;    
            var mappedPoint = that._mapPointToCanvasSpace(joint.position);
            var context = canvasBodyDrawer.canvas.getContext('2d');

            if (that._isJointForDrawing(joint, mappedPoint))
            {
              context.fillStyle =
                isTracked ?
                canvasBodyDrawer._colors[that._index] : canvasBodyDrawer._inferredColor;

              context.beginPath();

              context.arc(
                mappedPoint.x,
                mappedPoint.y,
                that._isLeaf(jointType) ? constants.circleLeafRadius : constants.circleNonLeafRadius,
                0,
                2 * Math.PI,
                false);

              context.fill();
              context.stroke();
              context.closePath();

              jointPositions[jointType] = mappedPoint;
            }
          }
        );
        return (jointPositions);
      },
      _drawLines: function(jointPositions)
      {
        var that = this;
        var context = canvasBodyDrawer.canvas.getContext('2d');

        // setting some of these properties way more often than necessary.
        context.strokeStyle = canvasBodyDrawer._lineStyle;
        context.lineWidth = constants.lineWidth;

        canvasBodyDrawer._jointConnections.forEach(
          function (jointConnection)
          {
            jointConnection.forEachPair(
              function (j1, j2)
              {
                // do we have this pair recorded in our positions? 
                // i.e. have we drawn them?
                if (jointPositions[j1] && jointPositions[j2])
                {
                  context.beginPath();
                  context.moveTo(jointPositions[j1].x, jointPositions[j1].y);
                  context.lineTo(jointPositions[j2].x, jointPositions[j2].y);
                  context.stroke();
                  context.closePath();
                }
              }
            );
          }
        );
      },
      _isLeaf: function(jointType)
      {
        var leafs = [nsKinect.JointType.head, nsKinect.JointType.footLeft, nsKinect.JointType.footRight];
        return (leafs.indexOf(jointType) !== -1);
      },
      _isJointForDrawing: function(joint, point)
      {
        return (
          (joint.trackingState !== nsKinect.TrackingState.notTracked) &&
          (point.x !== Number.NEGATIVE_INFINITY) &&
          (point.y !== Number.POSITIVE_INFINITY));
      },
      _mapPointToCanvasSpace: function(cameraSpacePoint)
      {
        // NB: with the way I've set up my canvas in this example (1920x1080), this should be
        // a 1:1 mapping but leaving the flexibility here.
        var colourPoint = this._sensor.coordinateMapper.mapCameraPointToColorSpace(
          cameraSpacePoint);

        colourPoint.x *= canvasBodyDrawer.canvas.width / this._sensorColourFrameDimensions.width;
        colourPoint.y *= canvasBodyDrawer.canvas.height / this._sensorColourFrameDimensions.height;

        return (colourPoint);
      },
      _index : -1,
      _sensorColourFrameDimensions : null,
      _sensor : null
    },
    {
      clearFrames : function()
      {
        var canvas = canvasBodyDrawer.canvas;
        var ctx = canvas.getContext('2d');

        ctx.clearRect(0, 0, canvas.width, canvas.height);
      },
      canvas: {
        get : function()
        {
          return (canvasBodyDrawer._canvas);
        },
        set : function(value)
        {
          canvasBodyDrawer._canvas = value;
        }
      },
      _canvas : null,
      _colors: ['red', 'green', 'blue', 'yellow', 'purple', 'orange'],
      _lineColor: 'black',
      _inferredColor: 'grey',
      _lineStyle : 'black',
      _jointConnections:
        [
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.spineBase, 2),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.shoulderLeft, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.shoulderRight, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.hipLeft, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.hipRight, 4),
          Sample.JointConnection.createFromStartingJoint(nsKinect.JointType.neck, 2),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.spineMid, nsKinect.JointType.spineShoulder, nsKinect.JointType.neck),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.shoulderLeft, nsKinect.JointType.spineShoulder, nsKinect.JointType.shoulderRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.hipLeft, nsKinect.JointType.spineBase, nsKinect.JointType.hipRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.handTipLeft, nsKinect.JointType.handLeft),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.handTipRight, nsKinect.JointType.handRight),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.wristLeft, nsKinect.JointType.thumbLeft),
          Sample.JointConnection.createFromJointList(nsKinect.JointType.wristRight, nsKinect.JointType.thumbRight)
        ]
    }
  );

  WinJS.Namespace.define('Sample',
    {
      CanvasBodyDrawer : canvasBodyDrawer
    });

})();

Hopefully, that code mostly speaks for itself – for a particular body that is being tracked by the sensor, an instance of this class draws that body in its drawFrame function by first drawing out the joints and then connecting them together (this could easily be combined into one pass rather than two). The class uses a different colour from its _colors array depending on which of the 0..5 bodies it is representing. This does mean that a single person in front of the sensor could be picked up and drawn as body 4 in one colour, leave the frame, and then return and be drawn as body 2 in a different colour – I make no attempt to figure out that it’s “the same person”.

Just as in my .NET post, this class relies on a JointConnection class to represent joints that need joining with lines and, just like in that post, I play a little fast-and-loose with the values of the JointType enumeration, assuming that consecutive values can be used to build up lists of joints that should be connected together (there’s a small usage example after the listing);

(function ()
{
  "use strict";
  
  var jointConnection = WinJS.Class.define(
    function ()
    {
      this._joints = [];
    },
    {
      forEachPair : function(handler)
      {
        for (var i = 0; i < this._joints.length - 1; i++)
        {
          handler(this._joints[i], this._joints[i + 1]);
        }
      },
      _joints: null
    },
    {
      createFromStartingJoint : function(jointType, range)
      {
        var connection = new jointConnection();

        for (var i = 0; i < range; i++)
        {
          connection._joints.push(jointType + i);
        }

        return (connection);
      },
      createFromJointList : function()
      {
        var connection = new jointConnection();

        for (var i = 0; i < arguments.length; i++)
        {
          connection._joints.push(arguments[i]);
        }
        return (connection);
      }
    }
  );

  WinJS.Namespace.define('Sample',
    {
      JointConnection : jointConnection
    });

})();
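
As a small usage illustration of that assumption (my example rather than code from the project) – the JointType values are consecutive integers, so createFromStartingJoint(shoulderLeft, 4) expands to shoulderLeft, elbowLeft, wristLeft, handLeft and forEachPair then visits each adjacent pair, i.e. the “bones” of the left arm;

var nsKinect = WindowsPreview.Kinect;

var leftArm = Sample.JointConnection.createFromStartingJoint(
  nsKinect.JointType.shoulderLeft, 4);

leftArm.forEachPair(function (j1, j2)
{
  // called with (shoulderLeft, elbowLeft), (elbowLeft, wristLeft),
  // (wristLeft, handLeft) in turn
});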

and that’s pretty much it with a couple of tiny other source files in the project.

I suspect that I could boil this code down to much, much less JavaScript if I took out all the WinJS.Class pieces, removed some of the structure and took a more direct approach to getting the bodies drawn on-screen. In some ways, I feel I’ve bloated the JavaScript out a little by porting over from my .NET code rather than starting in JavaScript.

Regardless, it’s great to see that I can get this going in around 90-120 minutes and have great, smooth performance rendering skeletal data from the sensor direct onto the screen – I suspect that having strong JavaScript support here opens up the sensor to a lot of developers who might otherwise find C++ or C# too high a barrier to entry.

The code for the app above is here for download – feel free to download, poke around and borrow any bits.