Following up on this post, I wanted to see if I could continue to use the interop wrappers in order to add inertia to my multi-touch manipulations. The inertia API in Windows 7 is really “just” a physics engine that you can use to add more realism to things like touch interactions. There’s some detail on that physics engine in the docs and, specifically, here.
As an example, take a manipulation that involves a translation ( like a pan ). Your object is moved in some direction by the manipulation along some 2D vector (Ai + Bj). The inertia engine picks up when the manipulation is over, takes the object’s position, the displacement vector and a deceleration value, and then calculates new positions for the object over a specified time period at a specified interval.
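The wrappers do all of that calculation for you but, just to convince myself about the physics, here’s a little stand-alone sketch – nothing to do with the real API, and the units and numbers are made up ( think pixels and milliseconds ) – of an object coasting to a halt under a constant deceleration;

```csharp
using System;

static class InertiaSketch
{
    // Steps an object from the origin along its velocity vector at a fixed
    // interval, bleeding off speed at a constant rate until it stops, and
    // returns the resting position.
    public static (double X, double Y) Run(
        double vx, double vy, double deceleration, double intervalMs)
    {
        double x = 0, y = 0;
        double speed = Math.Sqrt(vx * vx + vy * vy);

        while (speed > 0)
        {
            // Move along the current velocity vector for one interval.
            x += vx * intervalMs;
            y += vy * intervalMs;

            // Slow down, keeping the direction of travel.
            double newSpeed = Math.Max(0, speed - deceleration * intervalMs);
            double factor = newSpeed / speed;
            vx *= factor;
            vy *= factor;
            speed = newSpeed;
        }

        return (x, y);
    }
}
```

So something like InertiaSketch.Run(0.4, 0.2, 0.001, 10) drifts the object along the (0.4, 0.2) direction for a while and then stops – which is pretty much what the inertia engine is doing to my rectangle, only with better maths.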
The documents of course refer to the COM implementation whereas the interop wrappers wrap all this up for you and surface it in a slightly different way, and it took me a little while to get even a little used to what was going on. For me, perhaps the most difficult thing was figuring out what values I’m supposed to plug in to make the inertia engine work.
I stuck with my “UI” from the previous post which is just a green rectangle 🙂 as in;
<Window x:Class="WpfApplication8.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Window1"
        WindowState="Maximized">
  <Canvas x:Name="canvas">
    <Rectangle x:Name="rectangle"
               Width="192"
               Height="96"
               Fill="Green"
               RadiusX="3"
               RadiusY="3"
               Canvas.Left="192"
               Canvas.Top="192"
               RenderTransformOrigin="0.5,0.5">
      <Rectangle.RenderTransform>
        <TransformGroup>
          <ScaleTransform x:Name="scale" />
          <RotateTransform x:Name="rotate" Angle="0" />
          <TranslateTransform x:Name="translate" />
        </TransformGroup>
      </Rectangle.RenderTransform>
    </Rectangle>
  </Canvas>
</Window>
and then wrote a little code behind it;
public partial class Window1 : Window
{
    public Window1()
    {
        InitializeComponent();
        this.Loaded += new RoutedEventHandler(OnLoaded);
    }

    void OnLoaded(object sender, RoutedEventArgs args)
    {
        Factory.EnableStylusEvents(this);

        processor = new ManipulationInertiaProcessor(
            ProcessorManipulations.ALL, Factory.CreateTimer());

        processor.BeforeInertia += OnBeforeInertia;
        processor.ManipulationDelta += OnManipulationDelta;

        this.StylusDown += (s, e) => processor.ProcessDown(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());

        this.StylusUp += (s, e) => processor.ProcessUp(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());

        this.StylusMove += (s, e) => processor.ProcessMove(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());
    }

    void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        rotate.Angle += e.RotationDelta / Math.PI * 180.0;
        scale.ScaleX *= e.ScaleDelta;
        scale.ScaleY *= e.ScaleDelta;
        translate.X += e.TranslationDelta.Width;
        translate.Y += e.TranslationDelta.Height;

        if (e.RotationDelta != 0)
        {
            lastRotationDelta = e.RotationDelta / 40.0f;
        }
        if (e.ExpansionDelta != 0)
        {
            lastExpansionDelta = e.ExpansionDelta / 2000.0f;
        }
    }

    void OnBeforeInertia(object sender, BeforeInertiaEventArgs e)
    {
        TimeSpan span = new TimeSpan(0, 0, 0, 0, 10);

        // Odd to me that this is specified in Ticks when the underlying
        // DispatcherTimer that it uses also wants a TimeSpan.
        processor.InertiaProcessor.InertiaTimerInterval = (int)span.Ticks;
        processor.InertiaProcessor.MaxInertiaSteps = 500;

        // TODO: Not sure it's right to take the Velocity from the processor
        // here.
        processor.InertiaProcessor.InitialVelocity = processor.Velocity;
        processor.InertiaProcessor.DesiredDeceleration = 0.001f;

        // TODO: Not feeling too confident about the value I'm passing here as
        // InitialAngularVelocity. Docs talk about 1/40th of the rotation delta
        // which I'm trying but it's more by experimentation than anything.
        // Similarly for DesiredAngularDeceleration.
        processor.InertiaProcessor.InitialAngularVelocity = lastRotationDelta;
        processor.InertiaProcessor.DesiredAngularDeceleration = 0.000002f;

        // TODO: Not sure about this one either. Fudged in a factor of 2000
        // in previous code.
        processor.InertiaProcessor.InitialExpansionVelocity = lastExpansionDelta;
        processor.InertiaProcessor.DesiredExpansionDeceleration = 0.00001f;
    }

    float lastRotationDelta;
    float lastExpansionDelta;

    ManipulationInertiaProcessor processor;
}
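On that Ticks comment – a .NET TimeSpan tick is 100 nanoseconds, so there are 10,000 ticks to the millisecond and my 10ms interval comes out as 100,000 ticks. A quick sanity check of that conversion ( MillisecondsToTicks is just my own little helper, nothing from the wrappers );

```csharp
using System;

static class TickCheck
{
    // The wrapper wants the interval in ticks; one TimeSpan tick is 100ns,
    // so there are TimeSpan.TicksPerMillisecond (10,000) ticks per ms.
    public static long MillisecondsToTicks(int ms)
        => new TimeSpan(0, 0, 0, 0, ms).Ticks;
}
```

MillisecondsToTicks(10) is 100,000 – i.e. the (int)span.Ticks I pass as InertiaTimerInterval above is 100000 for my 10ms interval.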
The big difference here from the previous post is that I’m now using a ManipulationInertiaProcessor and it foxed me for a little while because ( as the name suggests ) it is both a manipulation processor and an inertia processor.
In this case, I construct one and I tell it that I’m interested in all gestures. I also give it a timer as it needs one and there’s a nice Factory method in the wrappers to make a suitable one for you so I gave it no more thought than that.
We then have to wire it up ( in OnLoaded above ) to the same Stylus events that we used in the previous post with the pure ManipulationProcessor so that it knows when “interesting things” happen in terms of touch events. That is – it’s still a ManipulationProcessor.
Just as in the previous example, I also handle the ManipulationDelta event but it’s somewhat different this time in that there are two reasons why this event is now fired.
Firstly, the event is fired when I’m doing a touch manipulation just like before but, secondly, it’s also fired when the inertia processor kicks in and starts to do its deceleration physics thereby continuing the manipulation “automatically”. So, I think the order goes something like this;
- I begin a manipulation with a touch gesture like pan ( this will cause ManipulationStarted to fire )
- I continue the manipulation ( this will be firing ManipulationDelta events as it progresses )
- I stop the touch gesture and the BeforeInertia event fires to let me set up my inertia parameters.
- The inertia processor then continues the manipulation by applying its physics engine ( this will be firing ManipulationDelta events as it progresses )
- The inertia processor comes to the end of its timer and ManipulationCompleted fires.
I think that’s how it’s working. The trickiness for me right now is in knowing what to pass to the inertia processor at the start of its inertia processing. There look to be three key things for it to know;
- Either InitialVelocity or DesiredDisplacement. One of those two alongside a DesiredDeceleration.
- Either InitialAngularVelocity or DesiredRotation. One of those two alongside a DesiredAngularDeceleration.
- Either InitialExpansionVelocity or DesiredExpansion. One of those two alongside a DesiredExpansionDeceleration.
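I think the reason each of these comes as an either/or pair is that, under constant deceleration, an initial velocity v and a deceleration a fully determine the outcome – the motion lasts v/a and covers v²/(2a) – so specifying the DesiredXXX value instead is just fixing the end point rather than the starting speed. That’s my reading of it, anyway; nothing API-specific here, just the arithmetic;

```csharp
using System;

static class InertiaMaths
{
    // Distance covered going from speed v to rest at constant deceleration a.
    public static double StoppingDistance(double v, double a) => (v * v) / (2 * a);

    // Time taken to come to rest from speed v at constant deceleration a.
    public static double StoppingTime(double v, double a) => v / a;
}
```

So, for instance, an InitialVelocity of 0.4 pixels/ms with a DesiredDeceleration of 0.001 pixels/ms² works out to 80 pixels of travel over 400ms.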
For now, I’ve more or less fudged these values as I’m not entirely sure how to calculate them. There are some hints in the docs but I could have done with a lengthier explanation. What I try to do right now is only set the InitialXXXVelocity parts; I never try and set the DesiredXXX parts. In terms of my guesswork;
- I try and set InitialVelocity from the ManipulationProcessor’s value for Velocity. Seems reasonable! Note that the ManipulationProcessor is the same object as the InertiaProcessor which ( for me ) is a bit confusing but you get used to it.
- I try and set the InitialAngularVelocity from the ManipulationDelta event by grabbing the last decent looking RotationDelta and dividing it by 40.0. Why 40.0? The docs hinted this might be a good idea although I might have the implementation wrong.
- I try and set the InitialExpansionVelocity from the ManipulationDelta event by grabbing the last decent looking ExpansionDelta and dividing it by 2000.0. Why 2000.0? Pure “magic number” in that it seemed to produce a reasonable effect.
In terms of the various DesiredXXXDeceleration values, these are either set based on what the docs suggested or are pure “magic numbers” from experimentation.
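One way I might replace those magic numbers is to work backwards from how long I want the inertia phase to last – with constant deceleration, a value of v/t brings a velocity v to rest in time t. This is just my guesswork formalised, not anything from the docs;

```csharp
using System;

static class DecelerationGuess
{
    // Deceleration that brings an initial velocity to rest in durationMs.
    public static float For(float velocity, float durationMs)
        => Math.Abs(velocity) / durationMs;
}
```

For example, For(0.4f, 400f) gives ( near enough ) the 0.001f I’m passing as DesiredDeceleration, which would make a pan at 0.4 pixels/ms coast for about 400ms.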
I need to revisit this and find out more about what values you’re actually meant to provide to the inertia processor but, in the meantime, the code more-or-less gives the desired result in that I can set an effect in motion and watch the inertia processor continue it for me after I’ve stopped moving the mice. Hard to show that in images but…
so you get the idea.
One of the problems with this is that I allow both the direct manipulations and the inertia processor to drive the object off the screen. The inertia processor has mechanisms built in to cater for this using a Boundary setting and an ElasticMargin setting. They’re doc’d here but I had a quick play with them and haven’t been able to quite figure them out just yet. Maybe in another post? Or perhaps I’ll look at WPF 4.0 Beta 1 first and see what it does for multi-touch for me…
Again – there are much more complete samples in the download for the interop wrappers so take those much more seriously than you take these posts 🙂