Following on from the previous post and the one before it: it was fun to play with my own interop wrappers for a while, but I figure it's going to be a lot more productive to use the interop wrappers available up at;
and they also include wrappers for the manipulation API and its friend, the inertia API, which I didn't really want to get into wrapping if I could avoid it.
Wrapping those COM objects might not have been too ugly a job, but the thought of tracking down the TLB info for the COM objects involved ( assuming there is TLB info for them, because it's a lot less fun if there's not ) and then importing it into the .NET world didn't fill me with joy, so switching to someone else's wrapper seemed like a smart move.
I wanted to experiment with those wrappers and see whether they made life easier or more difficult, so I thought I'd quickly replicate what I'd done previously. I started a new Windows Forms application to see if I could handle WM_TOUCH messages in it, made a "UI" with a SplitContainer, a Panel and a TextBox, and wrote this code ( adding references to the sample Windows7.Multitouch and Windows7.Multitouch.WinForms assemblies ).
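( The minimal listing appears to have dropped out of this copy of the post; here's a sketch of what it might have looked like, using the Factory.CreateHandler API that the drawing version uses, with a hypothetical txtEvents name for the TextBox: )

```csharp
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        // Ask the wrapper library to hook WM_TOUCH for the panel.
        TouchHandler handler = Factory.CreateHandler<TouchHandler>(pnlMain);
        handler.TouchDown += OnTouchEvent;
        handler.TouchMove += OnTouchEvent;
        handler.TouchUp += OnTouchEvent;
    }

    // Write each touch event into the TextBox
    // ( txtEvents is my guess at the control name ).
    void OnTouchEvent(object sender, TouchEventArgs e)
    {
        txtEvents.AppendText(
            string.Format("Id {0} at {1}\r\n", e.Id, e.Location));
    }
}
```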
Yep, that’s it. Nothing else to see. Those wrappers are nice – very easy to pick up the touch messages using those things and I’d say the whole thing took less than 10 minutes rather than me spending a bunch of cycles building some partially-formed wrappers myself.
It’s almost embarrassingly easy, so I thought I’d add a little code to draw something in response to the events being picked up rather than just writing them into a TextBox – simple enough stuff, and there’s a more complex but similar sample packaged with the wrappers, so I’m only experimenting rather than making any contribution 🙂
public partial class Form1 : Form
{
    private class TouchState
    {
        public Point LastPoint { get; set; }
        public Color Color { get; set; }
    }

    public Form1()
    {
        InitializeComponent();

        TouchHandler handler = Factory.CreateHandler<TouchHandler>(pnlMain);
        handler.TouchUp += OnTouchHandler;
        handler.TouchDown += OnTouchHandler;
        handler.TouchMove += OnTouchHandler;

        touchState = new Dictionary<int, TouchState>();
    }

    void OnTouchHandler(object sender, TouchEventArgs e)
    {
        if (e.IsTouchDown)
        {
            if (!touchState.ContainsKey(e.Id))
            {
                touchState[e.Id] = new TouchState()
                {
                    Color = Colors.MakeRandomColor()
                };
            }
            touchState[e.Id].LastPoint = e.Location;
        }
        else if (e.IsTouchMove)
        {
            TouchState state = touchState[e.Id];

            using (Graphics g = pnlMain.CreateGraphics())
            {
                using (Pen p = new Pen(state.Color, 5))
                {
                    g.DrawLine(p, state.LastPoint, e.Location);
                }
            }
            state.LastPoint = e.Location;
        }
    }

    Dictionary<int, TouchState> touchState;
}
Note that I make no attempt to store the points connected by the lines, so invalidating the window will clear the display. What I find interesting here is that with my virtual CodePlex driver for touch, built around 2 mice, both mice fire the same touch ID ( 10 ) when used individually. So, for me, both mice draw with the same colour until I draw with them together, at which point they fire different IDs and so ( with the code above ) draw in different colours.
So, replicating the experience of getting Touch messages into Windows Forms was easy. What about replicating the experience of getting gesture messages into WPF? I made another WPF application with a Rectangle on a Canvas and added references to the sample wrapper libraries Windows7.Multitouch and Windows7.Multitouch.WPF;
<Window x:Class="WpfApplication8.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1"
    Height="600"
    Width="800">
  <Canvas>
    <Rectangle x:Name="rectangle"
        Width="192"
        Height="96"
        Fill="Green"
        RadiusX="3"
        RadiusY="3"
        Canvas.Left="192"
        Canvas.Top="192"
        RenderTransformOrigin="0.5,0.5">
      <Rectangle.RenderTransform>
        <TransformGroup>
          <ScaleTransform x:Name="scale" />
          <RotateTransform x:Name="rotate" Angle="0" />
          <TranslateTransform x:Name="translate" />
        </TransformGroup>
      </Rectangle.RenderTransform>
    </Rectangle>
  </Canvas>
</Window>
with a little code behind it;
public partial class Window1 : Window
{
    public Window1()
    {
        InitializeComponent();

        GestureHandler handler = Factory.CreateGestureHandler(this);
        handler.Pan += OnPan;
        handler.PanBegin += OnPan;
        handler.PanEnd += OnPan;
        handler.Rotate += OnRotate;
        handler.RotateBegin += OnRotate;
        handler.RotateEnd += OnRotate;
        handler.TwoFingerTap += OnTwoFingerTap;
        handler.Zoom += OnZoom;
        handler.ZoomBegin += OnZoom;
        handler.ZoomEnd += OnZoom;
        handler.PressAndTap += OnPressAndTap;
    }

    void OnPressAndTap(object sender, GestureEventArgs e)
    {
        // Todo.
    }

    void OnZoom(object sender, GestureEventArgs e)
    {
        scale.ScaleX *= e.ZoomFactor;
        scale.ScaleY *= e.ZoomFactor;
    }

    void OnTwoFingerTap(object sender, GestureEventArgs e)
    {
        // Todo.
    }

    void OnRotate(object sender, GestureEventArgs e)
    {
        // The gesture angle arrives in radians; RotateTransform.Angle
        // wants degrees.
        rotate.Angle -= e.RotateAngle / Math.PI * 180;
    }

    void OnPan(object sender, GestureEventArgs e)
    {
        translate.X += e.PanTranslation.Width;
        translate.Y += e.PanTranslation.Height;
    }
}
That makes this stuff all pretty easy – not much that you can really add to that except to say that, again, there’s a more fully fledged demo in the download bits.
Now…that gets me back to the manipulation API that I didn’t want to wrap the COM interface for. There’s already a wrapper in these libraries, so I can take my existing WPF code and change it to use that API rather than having to figure out anything to do with the gestures myself ( specifically zoom, rotate, translate ). I named my Canvas canvas and wrote a little code;
public partial class Window1 : Window
{
    public Window1()
    {
        InitializeComponent();
        this.Loaded += new RoutedEventHandler(OnLoaded);
    }

    void OnLoaded(object sender, RoutedEventArgs args)
    {
        // Switch on the raw stylus events that we feed into the processor.
        Factory.EnableStylusEvents(this);

        ManipulationProcessor processor = new ManipulationProcessor(
            ProcessorManipulations.ALL);

        processor.ManipulationDelta += OnManipulationChanged;

        this.StylusDown += (s, e) => processor.ProcessDown(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());

        this.StylusUp += (s, e) => processor.ProcessUp(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());

        this.StylusMove += (s, e) => processor.ProcessMove(
            (uint)e.StylusDevice.Id, e.GetPosition(canvas).ToDrawingPointF());
    }

    void OnManipulationChanged(object sender, ManipulationDeltaEventArgs e)
    {
        scale.ScaleX = e.CumulativeScale;
        scale.ScaleY = e.CumulativeScale;
        rotate.Angle = e.CumulativeRotation / Math.PI * 180;
        translate.X = e.CumulativeTranslation.Width;
        translate.Y = e.CumulativeTranslation.Height;
    }
}
I was pretty impressed with how easy that was, and with the natural feel my UI now has as I manipulate my rectangle with my 2 mice 🙂 It’s worth noting the call to EnableStylusEvents, which gives us the raw events to feed through to the ManipulationProcessor; in Windows Forms I think you’d do this with the raw touch events I was working with in my previous 2 ( “stone age” ) posts.
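To make that Windows Forms point concrete, here’s a sketch of the wiring I have in mind, feeding the wrapper’s TouchHandler events from the earlier Windows Forms sample into a ManipulationProcessor the same way the stylus events are fed in WPF ( this is my guess at the plumbing, not code from the library’s samples ):

```csharp
// Hypothetical wiring: raw WM_TOUCH events (via the wrapper's TouchHandler)
// driving a ManipulationProcessor in Windows Forms.
TouchHandler handler = Factory.CreateHandler<TouchHandler>(pnlMain);

ManipulationProcessor processor =
    new ManipulationProcessor(ProcessorManipulations.ALL);

processor.ManipulationDelta += OnManipulationChanged;

// TouchEventArgs.Location is a System.Drawing.Point; the processor
// wants a PointF plus the touch id.
handler.TouchDown += (s, e) =>
    processor.ProcessDown((uint)e.Id, new PointF(e.Location.X, e.Location.Y));

handler.TouchMove += (s, e) =>
    processor.ProcessMove((uint)e.Id, new PointF(e.Location.X, e.Location.Y));

handler.TouchUp += (s, e) =>
    processor.ProcessUp((uint)e.Id, new PointF(e.Location.X, e.Location.Y));
```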
Oh…and, again, there’s a more fully featured sample in the downloads.
So…this is incredibly easy to get working. What about adding inertia to make the interactions with these objects feel even more real? That’s a bit more complex, so it’s for the next post…