Touched (Part 1) – Getting Touch for Free

Multi-touch interfaces are becoming more and more prevalent. This week saw the launch of Windows Phone 7, where the interface is mostly (or, on phones without keyboards, almost entirely) touch driven, and it's hardly alone; Android, iPhone and iPad devices all make use of multi-touch in their interfaces.

On the desktop (or laptop, or netbook) side of things, Windows 7 has brought a consistent approach to touch-capable screens, in the same way that Windows has traditionally brought a consistent approach to handling mice, keyboards, pens and so on.

But across Windows and Windows Phone 7 there are lots of different ways in which you can program multi-touch, each giving you a different level of capability.

I recently broke open the piggy bank and bought myself a nice new touch monitor; it's optical touch rather than capacitive or resistive and offers only 2 points of multi-touch, but it's more than good enough for experimentation.

I wrote some posts just over a year ago that played around with multi-touch in Windows 7 using a few different mechanisms (parts 1, 2, 3, 4, 5) but these were;

  1. Before the Windows Phone 7 came along.
  2. Before WPF 4.0 released.
  3. Before Silverlight 4.0 released.
  4. Done without touch hardware.

and so it occurred to me that it might be a good time to revisit that whole area of “touch” as people are thinking about how to build great, natural interfaces for the phone and that tends to lead them to think about reproducing some or all of those interfaces elsewhere.

Getting Touch Capabilities for Free

Now, on my new shiny touch screen I notice that (e.g.) calculator.exe works reasonably well with the touch screen even though it was never built with touch in mind.

It works because, in the absence of anything better, Windows will treat a single touch point as a mouse event and so calculator is fooled into thinking that I’m using a mouse, not a touch screen.

What’s interesting about that usage is that calculator.exe isn’t really a joy to use via touch.

The buttons are too small and close together to be friendly.

However, that’s with my monitor (at 22”) running at its native resolution of 1920×1080. If I change that resolution to 1024×768 then it’s easier to use calculator.exe via touch and if I change the display settings to 150%;

[screenshot: display settings at 150%]

then calculator.exe becomes easier again to use from a touch perspective.

So, there are some cheap wins to be had there even with an application as well established as calculator.

These interactions are controlled by the touch settings in control panel. There are 3 main tabs that come into play;

[screenshot: the touch settings in control panel]

Here I can control whether I allow touch-based input at all. If I switch it off then my touch screen does nothing.

If I switch just that setting on then my touch screen comes back to life and I can use my finger just like a mouse pointer.

[screenshot: the touch setting switched on]

Generally, this lets me click on buttons, drag windows around by their title bar and so on and it also lets me use Press+Hold to simulate the right mouse button.

But it gives me little else. For instance, if I’m looking at a long document or web page;

[screenshot: a long web page]

then scrolling that left/right or up/down is painful as I actually have to hit the scrollbar thumb with my fingers accurately to get it to work.

If I switch on multi-touch gestures though;

[screenshot: the multi-touch gestures setting]

then things instantly improve because I can use a two-finger pan gesture to scroll that document;

[screenshot: panning a document with two fingers]

Switching on that option doesn’t just give me panning. It gives me zooming and rotating as well and I’ll return to those in a moment.

Panning with 2 fingers can be a pain so I have control over that on the panning tab where I can switch on single-finger panning;

[screenshots: the panning tab with single-finger panning switched on]

but the panning feels a little “wooden” so I can use the other 2 options;

[screenshot: the inertia and boundary feedback options]

where inertia on my panning means that objects will move further than I actually pan them and then decelerate over time. The deceleration depends on that resistance setting.

Boundary feedback means that if I try and pan past the top (e.g.) of my document then the whole window wobbles a little to give me feedback. It feels quite natural as does the inertia setting.

I can also set up “flicks” on the 3rd tab;

[screenshot: the flicks tab]

which I can customise;

[screenshot: customising flicks]

and so that means (e.g.) as I'm browsing here (in Firefox, as a non-Microsoft example);

[screenshot: browsing in Firefox]

then my flick to the left will cause a navigation Back in the browser as that’s how I’ve set my flick up.

Naturally, if I go to an input field like the browser address bar I can quickly get an onscreen keyboard to help me out with my input;

[screenshot: the onscreen keyboard]

So there's quite a lot that I get for free and, beyond that, there are applications in the OS that make explicit use of touch gestures, with Media Center being a good example of an application that takes explicit steps to be a good touch-enabled citizen.

As an aside, I quite like the look of that virtual keyboard and would like to buy one made perhaps out of glass or perspex or something along those lines :)

Examining the “Free” Stuff

How did that stuff come for free?

The “behave as a mouse pointer” seems obvious and the keyboard speaks for itself.

The reason why panning (and boundary feedback) magically works is that a Window can register itself for either low or high level touch events.

  • low == raw touch data (WM_TOUCH)
  • high == gestures (WM_GESTURE)

and, as far as I'm aware, you can't have both, so if you want both then you presumably have to go for the low level stuff and build your own variant of the high level stuff on top of it (which I think is similar to what WPF does, because it ends up offering both).

If a Window has registered itself for low level touch events then that’s all it gets. It’s on its own and needs to process those touch messages.
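
To make that a bit more concrete, here's a minimal sketch of what the low level route looks like from Windows Forms with P/Invoke. This is just my own illustration rather than anything official; the constants and struct layout are as I read them from winuser.h;

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// A Form that registers itself for the low level stream and decodes
// the raw touch points out of each WM_TOUCH message.
class RawTouchForm : Form
{
    const int WM_TOUCH = 0x0240;

    // Layout as per the TOUCHINPUT structure in winuser.h.
    [StructLayout(LayoutKind.Sequential)]
    struct TOUCHINPUT
    {
        public int x;               // hundredths of a pixel, screen relative
        public int y;
        public IntPtr hSource;
        public int dwID;            // contact id, stable while the finger is down
        public int dwFlags;         // TOUCHEVENTF_DOWN / MOVE / UP flags
        public int dwMask;
        public int dwTime;
        public IntPtr dwExtraInfo;
        public int cxContact;
        public int cyContact;
    }

    [DllImport("user32.dll")]
    static extern bool RegisterTouchWindow(IntPtr hWnd, uint ulFlags);

    [DllImport("user32.dll")]
    static extern bool GetTouchInputInfo(IntPtr hTouchInput, int cInputs,
        [In, Out] TOUCHINPUT[] pInputs, int cbSize);

    [DllImport("user32.dll")]
    static extern bool CloseTouchInputHandle(IntPtr hTouchInput);

    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);

        // From here on this window gets WM_TOUCH (and no WM_GESTURE).
        RegisterTouchWindow(Handle, 0);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_TOUCH)
        {
            // Low word of wParam is the number of touch points in the message.
            int count = (int)(m.WParam.ToInt64() & 0xFFFF);
            TOUCHINPUT[] inputs = new TOUCHINPUT[count];

            if (GetTouchInputInfo(m.LParam, count, inputs,
                Marshal.SizeOf(typeof(TOUCHINPUT))))
            {
                foreach (TOUCHINPUT input in inputs)
                {
                    // Coordinates arrive in hundredths of a pixel.
                    Console.WriteLine("contact {0} at {1},{2}",
                        input.dwID, input.x / 100, input.y / 100);
                }
                CloseTouchInputHandle(m.LParam);
                return; // handled, so don't pass on to DefWindowProc
            }
        }
        base.WndProc(ref m);
    }
}
```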

If it’s not registered itself for low level touch events then either it’s a gesture aware application or it’s not. Windows sends it a WM_GESTURENOTIFY message to tell it that a gesture is heading in its direction. The Window can then either;

  • do nothing
  • respond with which gestures it supports (I think that once this is set for a particular window it can’t be reset)

If it responds then, clearly, it has specific behaviour to handle specific gestures and it will get the gestures (WM_GESTURE) and have to deal with those messages. It’s on its own again.
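
And here's a sketch of that gesture aware route; a window responding to WM_GESTURENOTIFY with the gestures it wants (pan only, here) and then dealing with the WM_GESTURE messages itself. Again, this is my own illustration of the idea with the constants and structs as I read them from winuser.h;

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// A Form that answers WM_GESTURENOTIFY by asking for the pan gesture
// and then decodes the WM_GESTURE messages that follow.
class GestureForm : Form
{
    const int WM_GESTURE = 0x0119;
    const int WM_GESTURENOTIFY = 0x011A;
    const int GID_PAN = 4;         // gesture id from winuser.h
    const int GC_PAN = 0x0001;     // "I want pan" flag

    [StructLayout(LayoutKind.Sequential)]
    struct GESTURECONFIG
    {
        public int dwID;           // which gesture this entry configures
        public int dwWant;         // aspects we want
        public int dwBlock;        // aspects we don't
    }

    [StructLayout(LayoutKind.Sequential)]
    struct GESTUREINFO
    {
        public int cbSize;
        public int dwFlags;
        public int dwID;           // GID_PAN, GID_ZOOM, GID_ROTATE, ...
        public IntPtr hwndTarget;
        public short ptsLocationX; // POINTS ptsLocation, flattened
        public short ptsLocationY;
        public int dwInstanceID;
        public int dwSequenceID;
        public long ullArguments;  // gesture specific argument
        public int cbExtraArgs;
    }

    [DllImport("user32.dll")]
    static extern bool SetGestureConfig(IntPtr hWnd, int dwReserved, int cIDs,
        ref GESTURECONFIG pGestureConfig, int cbSize);

    [DllImport("user32.dll")]
    static extern bool GetGestureInfo(IntPtr hGestureInfo, ref GESTUREINFO pGestureInfo);

    [DllImport("user32.dll")]
    static extern bool CloseGestureInfoHandle(IntPtr hGestureInfo);

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_GESTURENOTIFY)
        {
            // Respond with the gestures we support - pan only here.
            GESTURECONFIG config = new GESTURECONFIG { dwID = GID_PAN, dwWant = GC_PAN };
            SetGestureConfig(Handle, 0, 1, ref config,
                Marshal.SizeOf(typeof(GESTURECONFIG)));
        }
        else if (m.Msg == WM_GESTURE)
        {
            GESTUREINFO info = new GESTUREINFO();
            info.cbSize = Marshal.SizeOf(typeof(GESTUREINFO));

            if (GetGestureInfo(m.LParam, ref info))
            {
                if (info.dwID == GID_PAN)
                {
                    // We're on our own now - scroll our own content by hand.
                    Console.WriteLine("pan at {0},{1}",
                        info.ptsLocationX, info.ptsLocationY);
                }
                CloseGestureInfoHandle(m.LParam);
                return; // handled
            }
        }
        base.WndProc(ref m);
    }
}
```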

If it does not respond then Windows tries to do the right thing by mapping these gestures into windows messages as explained here which talks through how Pan/Zoom/Press+Hold can work without explicit application support. This page says that boundary support is thrown in for free by the default handler as well.

This works well for applications that pre-date touch support.

What about the flick support though? I think that's done by sending a WM_TABLET_FLICK message as described here and, if the application doesn't know what to do with that message, Windows then sends the alternate message that the flick is mapped to (such as a KEYDOWN), which looks to explain how flicks can be sent to applications that don't even know about them.
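
I haven't verified this one myself but, as I read the docs, a window that wanted to take over flicks would pick that message up and return 0 from its window procedure; something like;

```csharp
using System;
using System.Windows.Forms;

// tpcshrd.h defines WM_TABLET_FLICK as WM_TABLET_DEFBASE (0x02C0) + 11.
class FlickForm : Form
{
    const int WM_TABLET_FLICK = 0x02C0 + 11;

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_TABLET_FLICK)
        {
            // Handle the flick ourselves; as I understand it, returning 0
            // here is what stops Windows from sending the mapped fallback
            // message (the KEYDOWN or whatever the flick is mapped to).
            m.Result = IntPtr.Zero;
            return;
        }
        base.WndProc(ref m);
    }
}
```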

So that's how applications can work pretty well with touch even if they weren't written to support it, and I can drop into a large document in Notepad and pan-scroll the thing around despite (as far as I know) Notepad not taking any explicit actions to support multi-touch.

However…most applications are built on top of the various UI frameworks and so it’s interesting to see how this possibility of

“do nothing special in the application code and still get touch for free”

works in the UI frameworks. Some examples…

A Tale of 4 Buttons

Here’s 4 Buttons…

[screenshots: a Button from each of Windows Forms, WPF, Silverlight and Windows Phone 7]

from Windows Forms, WPF, Silverlight and Silverlight on Windows Phone 7 respectively and (as you'd expect) hitting them with my finger works :) although it requires pixel-perfect accuracy, with the exception of the Phone button which I seem to remember has a wider border around it to make it better for touch.

Windows converts the single touch event into a left mouse button event and passes that onto the frameworks in question.
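
As an aside, an application can actually detect that promotion if it cares to; Windows documents a signature value (MI_WP_SIGNATURE) that it stamps into the extra info of mouse messages that came from touch/pen. A minimal Windows Forms sketch;

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class PromotionAwareForm : Form
{
    [DllImport("user32.dll")]
    static extern IntPtr GetMessageExtraInfo();

    // Windows stamps this signature into the extra info of mouse
    // messages that were promoted from touch/pen input.
    const long MI_WP_SIGNATURE = 0xFF515700;
    const long SIGNATURE_MASK = 0xFFFFFF00;

    protected override void OnMouseDown(MouseEventArgs e)
    {
        long extra = GetMessageExtraInfo().ToInt64();
        bool promoted = (extra & SIGNATURE_MASK) == MI_WP_SIGNATURE;

        Text = promoted ? "promoted from touch/pen" : "a real mouse";
        base.OnMouseDown(e);
    }
}
```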

It’s worth remembering that WPF and Silverlight are different to traditional Windows applications such as those produced with Windows Forms.

WPF/Silverlight have top level windows but the controls within them are not Windows common controls. They are managed and drawn by WPF/Silverlight and so for my window that contains my WPF button all I see in Spy++ is;

[screenshot: Spy++ showing a single top level window for WPF]

whereas for the Windows Forms version I see;

[screenshot: Spy++ showing the Windows Forms child windows]

and I think there are implications to this: when Windows implements new "free" features like touch across its common controls, WPF/Silverlight don't just magically pick them up – those frameworks have to do additional work to integrate the new features, and some of that shows up as we go along here…

A Tale of 4 Context Menus

Here’s 4 context menus;

[screenshots: context menus from Windows Forms, WPF, Silverlight and Windows Phone 7]

The first one is from Windows Forms and a simple Tap+Hold gesture on the UI makes it pop up just as you’d expect when Tap+Hold is mapped to the right mouse button (as it is on my system).

The second one is WPF and it works just the same way in that the Tap+Hold gesture pops up the context menu, and that holds true for me whether I build it against WPF 3.5 SP1 or WPF 4.0.

I was surprised that this works in both WPF versions and I wonder how/why that's the case – my guess is that it's a gesture that comes through from the tablet stylus support that WPF has had for a long time, rather than a multi-touch gesture as such, but that's just a guess.

The third one is Silverlight and this one doesn’t work for me at all. That is, I can’t get a Tap+Hold gesture to act as a right mouse button unless I set windowless mode on the plugin (which causes other limitations to kick in and is not the default).

There’s an explanation of this up here which says that Silverlight registers itself via RegisterTouchWindow/SetStylusGesture/SetGestureConfig for low level touch input and does not “process gestures, use the platform capabilities to that effect, or process WM_GESTURE”. So I guess that includes this particular gesture.

If I try my Tap+Hold gesture on a Silverlight app then I don’t get the little circle from the OS to show that it has recognised the gesture and is passing it on to the underlying application as a right mouse button. This perhaps mattered less when Silverlight didn’t support the idea of hooking the right mouse menu as it does in Silverlight 4.

I imagine that the windowless mode “works” because Silverlight can’t call those lower level functions if it doesn’t have a window to call them on but that’s just my supposition. I wouldn’t want to have to go windowless to get touch capabilities.

The Windows Phone 7 context menu comes from the Silverlight Toolkit for Windows Phone 7 and, of course, this is done in a very different way because there's no right mouse button on the phone :) and so the context menu is raised by a Tap+Hold gesture and that "just works" (I think it's based on the gesture support that's also in the toolkit for the Phone).
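
For what it's worth, here's my guess at how you'd hook that gesture up yourself with the toolkit's GestureService/GestureListener (ContentPanel here being the usual grid from the phone page template);

```csharp
using System.Windows;
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();

        // The toolkit attaches a GestureListener to any element...
        GestureListener listener = GestureService.GetGestureListener(ContentPanel);

        // ...and raises Hold for the Tap+Hold gesture.
        listener.Hold += OnHold;
    }

    void OnHold(object sender, GestureEventArgs e)
    {
        // This is where a context menu could be popped up.
        MessageBox.Show("hold!");
    }
}
```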

A Tale of 4 ListBoxes

Another example would be how a ListBox responds to a Pan gesture. Here’s one from Windows Forms;

[screenshot: a Windows Forms ListBox]

and that does the right thing with respect to the settings made around panning in control panel – looking at the Windows messages I can see WM_GESTURENOTIFY, then WM_GESTURE, then scroll messages being sent to the Window;

[screenshot: the message trace in Spy++]

So, this ListBox hasn’t explicitly been enabled to deal with this pan gesture, it’s just magically seeming to be able to pan because Windows is translating the gesture into something that the ListBox understands – scroll messages.

Now, a WPF 3.5 SP1 ListBox doesn't do so well at this;

[screenshot: a WPF 3.5 SP1 ListBox]

and using the panning gesture just causes the ListBox selection to move around a little. That ListBox I'm using is not a Windows ListBox; it's a WPF ListBox that is drawn in a completely different way and so Windows can't send it messages to make it "do the right thing".

Now, if I take that example to WPF 4.0 then it starts to behave very nicely because WPF 4.0 added touch support to a bunch of controls such as the ListBox (the actual list of controls is in the article here).
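
It's worth saying that WPF 4.0 surfaces this stuff to your own code too, via manipulation events on any element. A rough sketch – "box" here is just some element I'm assuming is defined in the XAML;

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // "box" is assumed to be an element defined in the XAML.
        box.IsManipulationEnabled = true;
        box.RenderTransform = new TranslateTransform();

        // Measure the manipulation relative to the window rather than
        // the element that's being moved around.
        box.ManipulationStarting += (s, e) => e.ManipulationContainer = this;
        box.ManipulationDelta += OnManipulationDelta;
    }

    void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // WPF has already composed the raw touch points into deltas for
        // translation (and scale/rotation, if we wanted them).
        TranslateTransform transform = (TranslateTransform)box.RenderTransform;
        transform.X += e.DeltaManipulation.Translation.X;
        transform.Y += e.DeltaManipulation.Translation.Y;
    }
}
```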

This Silverlight 4.0 ListBox does not really do the right thing around panning either, for the same reason – Silverlight does not make use of Windows controls and does not support touch beyond the raw touch event level, so it's not going to understand the pan gesture here and the OS isn't in any position to help it out.

[screenshot: a Silverlight 4.0 ListBox]

Note that the ListBox will work in the sense that if I grab that scrollbar and move it around it'll scroll, but that's not so easy to do with a finger.
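
That said, Silverlight does give you the raw events and so you can hand-roll something. Here's a sketch using Touch.FrameReported to pan a ScrollViewer which I've assumed is named "scroll" in the XAML (for a ListBox you'd first need to dig its ScrollViewer out of the template);

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public partial class MainPage : UserControl
{
    Point lastPosition;
    bool tracking;

    public MainPage()
    {
        InitializeComponent();

        // Raw touch in Silverlight arrives via this one static event.
        Touch.FrameReported += OnFrameReported;
    }

    void OnFrameReported(object sender, TouchFrameEventArgs e)
    {
        TouchPoint point = e.GetPrimaryTouchPoint(scroll);

        if (point == null)
        {
            return;
        }
        if (point.Action == TouchAction.Down)
        {
            tracking = true;
            lastPosition = point.Position;
        }
        else if (point.Action == TouchAction.Move && tracking)
        {
            // Drag up => content scrolls down, like the built-in panning.
            double delta = lastPosition.Y - point.Position.Y;
            scroll.ScrollToVerticalOffset(scroll.VerticalOffset + delta);
            lastPosition = point.Position;
        }
        else if (point.Action == TouchAction.Up)
        {
            tracking = false;
        }
    }
}
```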

The Silverlight 3 Windows Phone 7 ListBox does exactly the right thing;

[screenshot: the Windows Phone 7 ListBox]

and so supports panning up and down with inertia, although that gesture/inertia behaviour is built into the platform and (as far as I know) isn't surfaced for applications to make use of, though there are things that we can do with other libraries.

Heading Forwards…

We get a lot of touch capability for free here, in the sense that we don't have to explicitly write code and yet we still get UIs that respond to touch events and gestures, but the support is clearly different across frameworks/devices.

Of course, you don't have to stay with the "free" behaviour and you don't have to stay entirely within the bounds of the frameworks as they ship in the box – you can look to third party libraries to help out and there are quite a few out there.

It seems to me though that there are 5 levels of support that developers are looking for from touch;

  1. Ability to pick up and handle raw events.
  2. Ability to turn raw events into gestures and process those.
  3. Ability to process manipulations with inertia.
  4. A basic set of controls that combine (1), (2) and (3) in order to provide default touch behaviour that can be supplemented for special case scenarios by the developer.
  5. A richer set of controls combining (1),(2),(3) with specific support for touch. I’m thinking of things like an items control which already supports manipulation and inertia on its items beyond just panning the items.

Looking across WPF, Silverlight and Windows Phone I think all the platforms give you (1) but (2), (3) and (4) need different levels of work on each platform.

Whilst I’ve skirted around this area before, I’ll come back to (1), (2), (3), (4) and perhaps a little of (5) in follow on posts and see how they hang together across WPF, Silverlight and Windows Phone 7.

In the meantime, feel free to drop me a comment on this one if you can confirm/deny some of my guesswork or (as always) if I’ve written any of this up incorrectly.