I was down in the Microsoft Technology Centre at Reading the other week where I got a chance to take a look at a piece of proof-of-concept work that one of our customers is doing on using touch in Windows 7 to improve the user experience of one of their applications.
( I’ve captured this on video and will share it at a later point as I’m not able to just yet ).
But this got me thinking about Multi-Touch in Windows 7 and wondering how easy/hard it is.
I’ve done some work with the Tablet PC in my time and that wasn’t too tricky. I gave an example of that at DevDays the other week, where I used a simple WPF application that handles basic ink gestures;
and that’s just using a WPF InkCanvas with its gesture mode set to GestureOnly and a little code behind it ( download the source here if you like ).
So, that was easy enough, but what about Multi-Touch? Well, the first “problem” I have is that I don’t have any multi-touch-enabled hardware. I’ve seen 2 multi-touch-enabled machines, the HP TouchSmart and the Dell XT2, neither of which I own and neither of which is Microsoft about to buy me 🙂
So, I fired up the Windows 7 SDK and wrote a little code in order to experiment.
BIG NOTE: this is not really the right way to go. The right way to go is either to look at WPF V4.0 which I’ll return to or to head to;
which will point you to;
and there you can download a whole bunch of interop wrappers that’ll make your life a lot easier. I’ll return to those too in a subsequent post but, initially, I just wanted to have a glance at the underlying APIs first, and that’s what follows here and in the next post.
I pretty soon realised that my Windows 7 SDK was out of date ( the previous link is ok though ) and so I flicked through the online docs instead whilst downloading the updated ISO image for the RC rather than the beta.
From what I could tell, there’s a new parameter value for GetSystemMetrics which can tell you what kind of touch capabilities the device you are on has and so I figured I’d write a little class to surface that;
static class TouchCapabilities
{
    // Taken from the Windows 7 SDK, possible transcription errors.
    [Flags]
    private enum TouchMetrics
    {
        IntegratedTouch = 0x01,
        ExternalTouch = 0x02,
        IntegratedPen = 0x04,
        ExternalPen = 0x08,
        MultiInput = 0x40,
        StackReady = 0x80
    }
    public static bool HasMultiTouch
    {
        get
        {
            return ((((TouchMetrics)GetSystemMetrics(SM_DIGITIZER)) &
                TouchMetrics.MultiInput) != 0);
        }
    }
    const int SM_DIGITIZER = 94;

    // Windows 7 SDK says that a failure here ( 0 ) will not set LastError.
    [DllImport("User32")]
    private static extern int GetSystemMetrics(int nIndex);
}
I dropped that into a Windows Forms application, called it and came up with the somewhat-expected response of false.
So things were looking a little “tricky” until Paul pointed me at a CodePlex project that helps out with a virtual driver, so I downloaded the bits, watched the installation video, installed and rebooted. I also added a second mouse to use as my second touch input.
Then when I run my application I see a decent return value from my TouchCapabilities.HasMultiTouch property.
Ok…maybe this gets me somewhere? When you look at the SDK there are a few sides to the touch capabilities;
- New windows messages (WM_TOUCH) representing touch up/down/move with identification of multiple touch points
- New windows messages (WM_GESTURE) representing multi-touch gesture messages
- Manipulation support – a set of manipulations ( from manipulations.idl ): translateX, translateY, scale, rotate
- Inertia support – there’s not much in the docs here so I think a bit of experimentation would be called for
So, having asked whether I’m on a touch-enabled device or not, maybe I can capture some touch events? I wrote a little Panel-derived class ( Windows Forms Panel that is ) to try and pick some up – you need to call RegisterTouchWindow only if you want the lower level touch messages rather than the gesture messages;
class TouchPanel : Panel
{
    public bool Register()
    {
        return (RegisterTouchWindow(this.Handle, 0));
    }
    public bool Unregister()
    {
        return (UnregisterTouchWindow(this.Handle));
    }
    protected override void DefWndProc(ref Message m)
    {
        if (m.Msg == WM_TOUCH)
        {
            Debug.WriteLine("Got a touch message!");
        }
        else
        {
            base.DefWndProc(ref m);
        }
    }
    [DllImport("User32")]
    private static extern bool RegisterTouchWindow(IntPtr handle, UInt32 flags);

    [DllImport("User32")]
    private static extern bool UnregisterTouchWindow(IntPtr handle);

    const UInt32 TWF_FINETOUCH = 1; // From Winuser.h
    const int WM_TOUCH = 0x240;
    const int WM_GESTURE = 0x119;
}
and that all seemed to go well enough in that I saw my Debug.WriteLine calls executing. To decipher what’s actually present in a WM_TOUCH message you have to pass it to GetTouchInputInfo and I didn’t fancy writing the interop classes to make that work, so I wrote a bit of C++/CLI to do it for me which looks a bit like this ( header file );
namespace NativeCodeHelpers
{
    [Flags]
    public enum class TouchInputType
    {
        Down = 0x01,
        Move = 0x02,
        Up = 0x04,
        InRange = 0x08,
        Primary = 0x10,
        NoCoalesce = 0x20,
        Palm = 0x80
    };
    public ref class TouchInput
    {
    public:
        property TouchInputType InputType;
        property long X;
        property long Y;
        property UInt32 TouchId;
        property UInt32 TimestampInMilliseconds;
        property Nullable<UInt32> ContactWidth;
        property Nullable<UInt32> ContactHeight;
        virtual String^ ToString() override;
    internal:
        TouchInput();
    };
    public ref class TouchInputDecoder
    {
    public:
        TouchInputDecoder(IntPtr wParam, IntPtr lParam);
        property List<TouchInput^>^ Inputs
        {
            List<TouchInput^>^ get();
        };
    private:
        List<TouchInput^>^ _inputs;
    };
}
and the implementation file;
#include "stdafx.h" using namespace System; using namespace System::Collections::Generic; #include "NativeCodeHelpers.h" using namespace NativeCodeHelpers; TouchInput::TouchInput() { } String^ TouchInput::ToString() { return(String::Format(L"Touch Id {0}, Input Type {1}, X {2}, Y {3}, Timestamp {4}", this->TouchId, this->InputType, this->X, this->Y, this->TimestampInMilliseconds)); } TouchInputDecoder::TouchInputDecoder(IntPtr wParam, IntPtr lParam) { _inputs = gcnew List<TouchInput^>(); HTOUCHINPUT hTouchInput = (HTOUCHINPUT)lParam.ToPointer(); UINT cInputs = (UINT)wParam.ToInt32(); bool bThrow = false; if (cInputs) { TOUCHINPUT* pTouchInputs = new TOUCHINPUT[cInputs]; ZeroMemory(pTouchInputs, cInputs * sizeof(*pTouchInputs)); if (GetTouchInputInfo(hTouchInput, cInputs, pTouchInputs, sizeof(*pTouchInputs))) { for (int i = 0; i < cInputs; i++) { TouchInput^ touchInput = gcnew TouchInput(); touchInput->InputType = (TouchInputType)pTouchInputs.dwFlags; touchInput->X = pTouchInputs.x; touchInput->Y = pTouchInputs.y; touchInput->TouchId = pTouchInputs.dwID; touchInput->TimestampInMilliseconds = pTouchInputs.dwTime; if (pTouchInputs.dwMask & TOUCHINPUTMASKF_CONTACTAREA) { touchInput->ContactWidth = pTouchInputs.cxContact; touchInput->ContactHeight = pTouchInputs.cyContact; } _inputs->Add(touchInput); } } delete [] pTouchInputs; pTouchInputs = NULL; } ::CloseTouchInputHandle(hTouchInput); if (bThrow) { // TODO: Not a great exception to throw perhaps throw gcnew InvalidOperationException(L"Failed to get touch input information"); } } List<TouchInput^>^ TouchInputDecoder::Inputs::get() { return(_inputs); }
and then some code back in my Windows Forms application to display the results ( I tend not to write my code in header files, old habits die hard ). Note that by this point I’d extended TouchPanel to raise a TouchInputReceived event carrying the decoded inputs;
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }
    private void OnRegister(object sender, EventArgs args)
    {
        if (TouchCapabilities.HasMultiTouch)
        {
            panel = new TouchPanel();
            panel.TouchInputReceived += (s, e) =>
            {
                foreach (TouchInput input in e.TouchInput)
                {
                    txtDebugOutput.Text += string.Format("{0}{1}",
                        input.ToString(), Environment.NewLine);
                    txtDebugOutput.ScrollToCaret();
                }
            };
            if (panel.Register())
            {
                panel.BackColor = Color.SkyBlue;
                panel.Dock = DockStyle.Fill;
                this.Controls.Add(panel);
            }
            else
            {
                MessageBox.Show("Failed to register for touch input");
            }
        }
        else
        {
            MessageBox.Show("No multi touch input detected");
        }
    }
    private void OnUnregister(object sender, EventArgs e)
    {
        if (panel != null)
        {
            bool retVal = panel.Unregister();
            Debug.Assert(retVal);
        }
    }
    TouchPanel panel;
}
and that worked out reasonably well for me in that I can run my application with the CodePlex driver, click the button that calls OnRegister in the code-behind above and then do a bit of “touch” input and get some debug tracing dumped into a TextBox;
It’s not a very exciting app but the blue bit at the bottom is the TouchPanel and the diagnostics are going into the TextBox above.
For the longest time, I was convinced that I only saw a single Touch Id ( value of 10 ) – but a judicious use of a conditional breakpoint in Visual Studio told me that there were times when I got 2 inputs simultaneously and that I had Touch Id 10 and Touch Id 100 both in flight at the same time.
So, I can pick up raw touch data but that all seems a bit low level ( it’s always where I like to start ) so it’d be nice to get some gestures instead I think….next post…but in the meantime the source for what I built is here for download if you want to play with it ( apply large pinch of salt, I’m just learning ). As is probably clear, you’ll need the C++ bits installed to compile what I did here.