Touched (Part 3) – Manipulation and Gesture Support

Quick note – this post has a lot of sketchy code in it that I’ve not really tidied up, so take it with a pinch of salt and look for better examples elsewhere; I’m just trying to skate across the surface of the different API options.

Following up on that previous post, I thought I’d carry on a little and think about gestures.

What’s a gesture? There’s a list over here on MSDN about the types of gestures that Windows recognises;

image

and those include;

  • tap/double tap
  • panning with inertia
  • selection/drag
  • press and tap
  • zoom
  • rotate
  • two finger tap
  • press and hold
  • flicks

although this list is for Windows itself, so it doesn’t mean that every one of those gestures shows up (e.g.) on Windows Phone 7 or in particular frameworks like WPF and Silverlight.

Then there are the touch-based manipulations such as rotate, pan and zoom, which are often paired with inertia to give a more real-world feel to the interactions.

There are at least three sides to this, looking across WPF, Silverlight and Silverlight for Windows Phone 7.

  1. Does the platform have native support? If not, can you work around it?
  2. Is that support something that you can hook into?
  3. Do the built-in controls automatically make use of that support to (e.g.) scroll through a ListBox when you flick downwards or upwards?

In this post I’ll look a little at (1) and (2) and I’ll leave (3) for another post.

WPF and Manipulations/Gestures

How does WPF cope when it comes to gestures? Pretty well – WPF is really the Swiss Army knife of UI frameworks and it’s rare to find something that it doesn’t cope with, although, as we’ll see in the follow-on post, that doesn’t mean we can’t add to it with things like the Surface Toolkit.

WPF supports gestures in three ways.

  1. It supports raw touch input as we saw previously so you could use that to build your own gestures but I doubt that you really want to do that given that…
  2. It supports manipulation events on UIElements and those events report translation, rotation and scaling of the element in question. It also supports inertia for those events.
  3. It has direct support for some of those other system gestures that I mentioned previously – “two finger tap”, “press and tap” and so on. These were present in versions of WPF prior to 4.0 because they mostly come over from the stylus world of the Tablet PC, where WPF has long-standing support.

Note – I’m not sure that you’d want to combine (2) and (3) here without caution, as I suspect that’s likely to lead to a UI that will, at the very least, confuse the user.

System Gestures

To hook into the system gestures, I can make use of the [Preview]StylusSystemGesture event on UIElement, which gives me data about the type of system gesture detected. That would be enough for a gesture like “two finger tap”, but for a gesture like “drag” you’d likely need more information, and that’s left to the developer – either by monitoring the stylus events more generally as they lead up to the gesture, or by interrogating the event data after the gesture has been reported. I made a tiny UI;

<Window
    x:Class="WpfApplication9.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="MainWindow"
    Height="350"
    Width="525">
    <Grid
        Background="Black">
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        <Rectangle
            x:Name="rectInput"
            StylusSystemGesture="OnSystemGesture"
            RadiusX="3"
            RadiusY="3"
            Fill="Green"
            Margin="6" />
        <Grid
            Grid.Column="1">
            <Grid.RowDefinitions>
                <RowDefinition />
                <RowDefinition />
            </Grid.RowDefinitions>
            <Viewbox>
                <TextBlock
                    Foreground="White"
                    Text="{Binding LastGesture}" />
            </Viewbox>
            <DataGrid
                Grid.Row="1"
                ItemsSource="{Binding TouchPoints}"
                AutoGenerateColumns="True" />
        </Grid>
    </Grid>
</Window>

and then added a bit of code behind it to try and visualise the events a little as they came in;

using System.ComponentModel;
using System.Windows;
using System.Windows.Input;

namespace WpfApplication9
{
  public partial class MainWindow : Window, INotifyPropertyChanged
  {
    public event PropertyChangedEventHandler PropertyChanged;

    public MainWindow()
    {
      InitializeComponent();
      this.DataContext = this;
    }
    public SystemGesture LastGesture
    {
      get
      {
        return (_LastGesture);
      }
      set
      {
        _LastGesture = value;
        RaisePropertyChanged("LastGesture");
      }
    }
    SystemGesture _LastGesture;


    public StylusPointCollection TouchPoints
    {
      get
      {
        return (_TouchPoints);
      }
      set
      {
        _TouchPoints = value;
        RaisePropertyChanged("TouchPoints");
      }
    }
    StylusPointCollection _TouchPoints;


    void OnSystemGesture(object sender, StylusSystemGestureEventArgs e)
    {
      this.LastGesture = e.SystemGesture;

      this.TouchPoints = e.GetStylusPoints(this.rectInput);
    }
    void RaisePropertyChanged(string property)
    {
      if (this.PropertyChanged != null)
      {
        this.PropertyChanged(this,
          new PropertyChangedEventArgs(property));
      }
    }

  }
}

and as I run that UI, I can get a picture of the gestures that I’m using;

image

Manipulations

WPF 4.0 also has support for the touch manipulation gestures that drive rotation, translation and scaling of a particular element. The UIElement class has an IsManipulationEnabled property and, if you switch that on (as long as you don’t swallow the raw touch events for that element), you’ll get manipulation events such as;

  • ManipulationStarting, ManipulationStarted
  • ManipulationDelta
  • ManipulationCompleted
  • ManipulationBoundaryFeedback
  • ManipulationInertiaStarting

So, if we build what feels like it has become the “Hello World” of touch user interfaces, defined here in XAML;

<Window
    x:Class="WpfApplication10.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="MainWindow"
    Height="768"
    Width="1024">
    <Grid
        Background="#FFD00000">
        <Rectangle            
            IsManipulationEnabled="true"
            ManipulationStarting="OnItemManipulationStarting"
            ManipulationInertiaStarting="OnItemManipulationInertiaStarting"
            ManipulationDelta="OnItemManipulationDelta"
            Fill="Red"
            RadiusX="3"
            RadiusY="3"
            Width="192"
            Height="96"
            HorizontalAlignment="Left"
            VerticalAlignment="Top" />
        <Viewbox
            Width="192"
            Height="96"
            HorizontalAlignment="Right"
            VerticalAlignment="Bottom"
            IsManipulationEnabled="True"
            ManipulationStarting="OnItemManipulationStarting"
            ManipulationDelta="OnItemManipulationDelta"
            ManipulationInertiaStarting="OnItemManipulationInertiaStarting">
            <TextBlock
                Text="Manipulate" />
        </Viewbox>
    </Grid>
</Window>

and you’ll notice that both my Rectangle and my Viewbox here set IsManipulationEnabled and handle ManipulationStarting, ManipulationDelta and ManipulationInertiaStarting. I wrote a little bit of code to handle those events;

using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

namespace WpfApplication10
{
  public partial class MainWindow : Window
  {
    public MainWindow()
    {
      InitializeComponent();
    }
    void OnItemManipulationStarting(object sender,
      ManipulationStartingEventArgs e)
    {
      e.ManipulationContainer = this;
    }
    void OnItemManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
      UIElement element = (UIElement)sender;

      TransformGroup group = element.RenderTransform as TransformGroup;

      if (group == null)
      {
        element.RenderTransformOrigin = new Point(0.5, 0.5);
        group = new TransformGroup();
        group.Children.Add(new RotateTransform());
        group.Children.Add(new ScaleTransform());
        group.Children.Add(new TranslateTransform());
        element.RenderTransform = group;
      }
      RotateTransform rotate = (RotateTransform)group.Children[0];
      ScaleTransform scale = (ScaleTransform)group.Children[1];
      TranslateTransform translate = (TranslateTransform)group.Children[2];

      scale.ScaleX *= e.DeltaManipulation.Scale.X;
      scale.ScaleY *= e.DeltaManipulation.Scale.Y;
      rotate.Angle += e.DeltaManipulation.Rotation;
      translate.X += e.DeltaManipulation.Translation.X;
      translate.Y += e.DeltaManipulation.Translation.Y;
    }
    void OnItemManipulationInertiaStarting(object sender, 
      ManipulationInertiaStartingEventArgs e)
    {
      e.TranslationBehavior = new InertiaTranslationBehavior()
      {
        DesiredDeceleration = 0.01
      };
      e.RotationBehavior = new InertiaRotationBehavior()
      {
        DesiredDeceleration = 0.001
      };
    }
  }
}

There’s undoubtedly a lot more that could be done to make this “correct”, and those deceleration numbers are picked out of the air, but they seem to give me a reasonable experience. I not only get manipulation but also inertia thrown in by the framework, and I can drag things around on my UI;

image
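As a sanity check on those plucked-from-the-air numbers: WPF documents DesiredDeceleration for translation in device-independent units per millisecond squared, so a bit of constant-deceleration arithmetic (a back-of-envelope sketch of my own, nothing from the framework) shows roughly how far a flick will coast;

```csharp
using System;

// Rough arithmetic for WPF's inertia deceleration values. Units are
// device-independent pixels (DIPs) and milliseconds, matching
// InertiaTranslationBehavior.DesiredDeceleration (DIPs/ms^2).
public static class DecelerationMath
{
    // Time until the element stops coasting: v / d.
    public static double TimeToStopMs(double velocity, double deceleration)
    {
        return velocity / deceleration;
    }

    // Distance covered before it stops: v^2 / (2 * d).
    public static double CoastDistanceDips(double velocity, double deceleration)
    {
        return (velocity * velocity) / (2.0 * deceleration);
    }

    public static void Main()
    {
        // A brisk flick of ~1 DIP/ms against DesiredDeceleration = 0.01
        // coasts for 100ms and travels 50 DIPs before stopping.
        Console.WriteLine(TimeToStopMs(1.0, 0.01));
        Console.WriteLine(CoastDistanceDips(1.0, 0.01));
    }
}
```

Halving DesiredDeceleration doubles both the coast time and the coast distance for the same flick velocity, which is a handy rule of thumb when tuning these.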

Silverlight & Manipulations/Gestures

Silverlight does not have support for manipulations/gestures. As discussed in the previous post, you can get access to the raw touch events but you’d have to write all the code around gestures yourself.

It’s unlikely that you want to do this.

So, what can you do? Grab a library… there are a few around, and several of them are based on the “Microsoft Surface Manipulations and Inertia Sample for Microsoft Silverlight”, so I thought it’d make sense to start there.

The Surface Sample for Silverlight

This sample is licensed under the Surface SDK 1.0 SP1 license agreement and provides an implementation of the System.Windows.Input.Manipulations namespace that is present in .NET 4.0. You get an assembly that you can reference, but no source for the implementation.

There are really two things in here: the ManipulationProcessor2D and the InertiaProcessor2D. You feed a set of touch events into the former and it combines them into manipulations for you; the latter can continue those manipulations to give a sense of inertia to user actions.

This is largely what WPF gives you in terms of manipulation and inertia, but it hasn’t been pre-wired into the Silverlight UI framework by being tied onto UIElement as it has in WPF. So, you have to do some of that work yourself.

Again, I put together a very simple bit of UI just like the previous one except this is now Silverlight;

<UserControl
    x:Class="SilverlightApplication5.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:local="clr-namespace:SilverlightApplication5"
    xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
    mc:Ignorable="d"
    d:DesignHeight="300"
    d:DesignWidth="400">
    <Grid
        x:Name="LayoutRoot"
        Background="Plum">
        <Rectangle
            HorizontalAlignment="Left"
            VerticalAlignment="Top"
            RadiusX="3"
            RadiusY="3"
            Width="192"
            Height="96"
            Fill="Red">            
            <i:Interaction.Behaviors>
                <local:ManipulationBehavior />
            </i:Interaction.Behaviors>
        </Rectangle>
        <Viewbox
            HorizontalAlignment="Right"
            VerticalAlignment="Bottom"
            Width="192"
            Height="96">
            <i:Interaction.Behaviors>
                <local:ManipulationBehavior />
            </i:Interaction.Behaviors>
            <TextBlock
                Text="Manipulate" />
        </Viewbox>
    </Grid>
</UserControl>

and I added this ManipulationBehavior, quickly put together around the ManipulationProcessor2D class that ships in that sample.

I actually wrote two quick, sketchy behaviors. One tries to set up the notion of a “current” behavior that has captured touch input, so that touch events are routed to that particular behavior;

  using System.Collections.Generic;
  using System.Linq;
  using System.Windows;
  using System.Windows.Input;
  using System.Windows.Interactivity;
  using System.Windows.Media;

  public abstract class CaptureTouchBehavior : Behavior<UIElement>
  {
    static CaptureTouchBehavior()
    {
      activeBehaviors = new List<CaptureTouchBehavior>();
    }
    protected override void OnAttached()
    {
      base.OnAttached();
      EnsureTouchEvents();
      activeBehaviors.Add(this);
    }
    protected override void OnDetaching()
    {
      base.OnDetaching();
      activeBehaviors.Remove(this);

      if (activeBehavior == this)
      {
        activeBehavior = null;
      }
    }
    protected abstract void OnCapture();
    protected abstract void OnRelease();
    protected abstract void OnTouchFrameReported(TouchFrameEventArgs e);

    static void EnsureTouchEvents()
    {
      if (!touchEvents)
      {
        touchEvents = true;
        Touch.FrameReported += OnFrameReported;
      }
    }
    static void OnFrameReported(object sender, TouchFrameEventArgs e)
    {
      if (activeBehavior == null)
      {
        // TODO: Beef this up.
        var point = e.GetPrimaryTouchPoint(null);

        if (point != null)
        {
          foreach (var item in activeBehaviors)
          {
            var elements = VisualTreeHelper.FindElementsInHostCoordinates(point.Position,
              Application.Current.RootVisual);

            if (elements.Contains(item.AssociatedObject))
            {
              activeBehavior = item;
              activeBehavior.OnCapture();
              break;
            }
          }
        }
      }
      if (activeBehavior != null)
      {
        activeBehavior.OnTouchFrameReported(e);

        // TODO: Beef this up too.
        var points = e.GetTouchPoints(null);

        if ((points.Count == 1) && (points[0].Action == TouchAction.Up))
        {
          activeBehavior.OnRelease();
          activeBehavior = null;
        }
      }
    }
    static List<CaptureTouchBehavior> activeBehaviors;
    static bool touchEvents;
    static CaptureTouchBehavior activeBehavior;
  }

and so this is just trying (and I emphasise that because it’s not really finished off) to;

  1. Spot the first touch interaction.
  2. Identify which (if any) of the elements that are using this behavior are hit by that particular touch interaction.
  3. Set that element as the “current” touch element.
  4. Notify the element via the OnCapture() override.
  5. Route future touch events to that element via the OnTouchFrameReported override.
  6. Try to spot the end of the touch interaction with that element and notify it that it is no longer the “active” behavior via the OnRelease() override.

That’s the sketchy idea anyway. I then derived a behavior from that;

  using System;
  using System.Linq;
  using System.Windows;
  using System.Windows.Input;
  using System.Windows.Input.Manipulations;
  using System.Windows.Media;

  public class ManipulationBehavior : CaptureTouchBehavior
  {
    protected override void OnAttached()
    {
      base.OnAttached();
      base.AssociatedObject.RenderTransformOrigin = new Point(0.5, 0.5);
    }
    protected override void OnCapture()
    {
      manipProcessor = new ManipulationProcessor2D(Manipulations2D.All);
      manipProcessor.Completed += OnManipulationCompleted;
      manipProcessor.Delta += OnManipulationDelta;
      manipProcessor.MinimumScaleRotateRadius = 24;
    }
    protected override void OnRelease()
    {
      manipProcessor.CompleteManipulation(DateTime.UtcNow.Ticks);
    }
    protected override void OnTouchFrameReported(TouchFrameEventArgs e)
    {
      var points = e.GetTouchPoints(null);

      var manipulations = points.Select(
          p =>
          new Manipulator2D()
          {
            Id = p.TouchDevice.Id,
            X = (float)p.Position.X,
            Y = (float)p.Position.Y
          });

      manipProcessor.ProcessManipulators(DateTime.UtcNow.Ticks, manipulations);
    }
    void OnManipulationDelta(object sender, Manipulation2DDeltaEventArgs e)
    {
      TransformGroup group = base.AssociatedObject.RenderTransform as TransformGroup;

      if (group == null)
      {
        group = new TransformGroup();        
        group.Children.Add(new RotateTransform());
        group.Children.Add(new ScaleTransform());
        group.Children.Add(new TranslateTransform());
        base.AssociatedObject.RenderTransform = group;
      }
      RotateTransform rotate = (RotateTransform)group.Children[0];
      ScaleTransform scale = (ScaleTransform)group.Children[1];
      TranslateTransform translate = (TranslateTransform)group.Children[2];

      scale.ScaleX *= e.Delta.ScaleX;
      scale.ScaleY *= e.Delta.ScaleY;
      rotate.Angle += e.Delta.Rotation / Math.PI * 180;
      translate.X += e.Delta.TranslationX;
      translate.Y += e.Delta.TranslationY;
    }
    void OnManipulationCompleted(object sender, Manipulation2DCompletedEventArgs e)
    {
      manipProcessor = null;
    }
    ManipulationProcessor2D manipProcessor;
  }

and that “more or less” gives me a usable UI. I’m sure there’s a bunch of refinement that needs adding to it;

image

Within that sample library there is also an InertiaProcessor2D class which you can use to add inertia. When a manipulation completes, you spin up the inertia processor, hand it the initial velocities and then run a timer until the decelerations complete.
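Purely to make the mechanics concrete, here’s a framework-free sketch of the arithmetic that such a timer loop performs – constant deceleration applied per tick until the velocity reaches zero. This is my own illustration of the idea and doesn’t touch the sample’s actual API;

```csharp
using System;
using System.Collections.Generic;

// Framework-free sketch of a timer-driven inertia loop: a constant
// deceleration applied on each tick, accumulating the offsets that a
// Delta handler would feed into a TranslateTransform.
public static class InertiaSketch
{
    public static List<double> Run(double velocityDipsPerMs,
                                   double decelerationDipsPerMs2,
                                   double tickMs)
    {
        var offsets = new List<double>();
        double position = 0.0;
        double velocity = velocityDipsPerMs;

        while (velocity > 0.0)
        {
            double dt = tickMs;

            if (decelerationDipsPerMs2 * dt >= velocity)
            {
                // Last tick: shorten it so we stop exactly at zero velocity.
                dt = velocity / decelerationDipsPerMs2;
                position += velocity * dt
                  - 0.5 * decelerationDipsPerMs2 * dt * dt;
                velocity = 0.0;
            }
            else
            {
                position += velocity * dt
                  - 0.5 * decelerationDipsPerMs2 * dt * dt;
                velocity -= decelerationDipsPerMs2 * dt;
            }
            offsets.Add(position);
        }
        return offsets;
    }

    public static void Main()
    {
        // A 1 DIP/ms flick, 0.01 DIPs/ms^2 deceleration, ~60Hz timer.
        var offsets = InertiaSketch.Run(1.0, 0.01, 16.0);
        Console.WriteLine(offsets.Count);              // number of ticks
        Console.WriteLine(offsets[offsets.Count - 1]); // total coast distance
    }
}
```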

However, I didn’t set about this because other people have already walked this path and done a better job of it than me so…

Libraries Based on the Surface Sample for Silverlight

There are a couple of libraries built on top of this sample, so I thought it’d be worth exploring those. Here are two;

  • Multi-Touch Behavior for Blend.
  • Touch – a CodePlex project that gives you Silverlight gesture support and, not only that, supports versions of WPF prior to version 4 so that you can be consistent across WPF 3, WPF 4 and Silverlight 4 in the way in which you get touch support.

There’s another library on CodePlex as well which (AFAIK) isn’t based on this sample – the MIRIA project – which I haven’t taken a look at yet.

Multi-Touch Behavior for Blend

This comes from the mighty trio of Davide, Laurent and David and it’s based on the Surface sample so you need that assembly as well.

I got a bit confused in that the download from here only seemed to give me bits for Windows Phone 7, so I took the source code from here and built it myself instead. That seemed to work out fine.

I found this pretty easy to use. Making the simplest of UIs like this one;

<UserControl
    x:Class="SilverlightApplication5.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:local="clr-namespace:SilverlightApplication5"
    xmlns:dld="clr-namespace:MultiTouch.Behaviors.Silverlight4;assembly=MultiTouch.Behaviors.Silverlight4"
    xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
    mc:Ignorable="d"
    d:DesignHeight="300"
    d:DesignWidth="400">
    <Canvas
        x:Name="LayoutRoot"
        Background="#FF494949">
        <Image
            Canvas.Left="41"
            Canvas.Top="23"
            Height="128"
            Name="image1"
            Stretch="UniformToFill"
            Width="155"
            Source="/SilverlightApplication5;component/2531103366_19e12f5026_o.jpg">
            <i:Interaction.Behaviors>
                <dld:MultiTouchBehavior
                    AreFingersVisible="True"
                    IsInertiaEnabled="True"
                    IsRotateEnabled="True"
                    IsScaleEnabled="False"
                    IsTranslateXEnabled="True"
                    IsTranslateYEnabled="True" />
            </i:Interaction.Behaviors>
        </Image>
    </Canvas>
</UserControl>

then, without writing any code, that pretty much gives me an object that I can drag around on screen. There are a couple of things I found here, though, that I’m not 100% sure of;

  1. I seemed to have a better experience here using a Canvas than a Grid – not sure if there’s something Canvas specific in the behavior.
  2. Widths and Heights seemed to get changed by the behavior – I found that my UI elements seemed to always end up being square rather than the widths/heights I’d set them to.

I daresay I was doing something wrong but even with those caveats it works a whole lot better than my sketchy behavior and there’s no code to write;

image

Touch

This project also makes use of the Surface sample implementation of System.Windows.Input.Manipulations and offers up some helper methods, extension methods and three behaviors – AutoClipBehavior, TouchScrollBehavior and TranslateRotateScaleBehavior.

I went with the last of those and dropped the behavior onto another of these simple UIs containing just an image, as in;

<UserControl
    x:Class="SilverlightApplication8.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
    xmlns:tg="clr-namespace:Tanagram.Behaviors.Touch.Silverlight;assembly=Tanagram.Behaviors.Touch.Silverlight"
    d:DesignHeight="300"
    d:DesignWidth="400">
    <Grid
        x:Name="LayoutRoot">
        <Canvas
            Background="Black">
            <Image
                Height="119"
                Name="image1"
                Stretch="Fill"
                Width="179"
                Canvas.Left="38"
                Canvas.Top="35"
                Source="/SilverlightApplication8;component/animal_0023.jpg">
                <i:Interaction.Behaviors>
                    <tg:TranslateRotateScaleBehavior 
                        MinimumScale="50"
                        MaximumScale="1000"
                        XLocation="192"
                        YLocation="192"/>
                </i:Interaction.Behaviors>
            </Image>
        </Canvas>
    </Grid>
</UserControl>

and that pretty much got me rotate, scale, translate gestures fairly easily. It’s worth pointing out that if I don’t use a Canvas as the parent here then I get an error from the behavior itself;

image

and so it’s only going to work for me with a Canvas parent.

Silverlight for Windows Phone 7 & Manipulations/Gestures

Unlike its desktop counterpart, Silverlight for Windows Phone 7 does have built-in support for manipulations. The semi-familiar trio of ManipulationStarted, ManipulationDelta and ManipulationCompleted are present on the UIElement class, which means that the WPF code I had previously should work fine on the phone, except;

  1. I noticed that the phone version has ManipulationStarted, not ManipulationStarting.
  2. I noticed that the phone version does not need the IsManipulationEnabled property that WPF uses.
  3. I noticed that the ManipulationDelta type on the phone lacks the Rotation property so you can only pick up scale, translate deltas.
  4. I noticed that I seemed to get a lot of zeroes through as values for my scale values at times and so had to filter them out.

but with those minor changes it seems to work “ok” (although I think I’d need to do something about the phone’s orientation changing). With another one of those trivial UIs;

    <Grid
        x:Name="LayoutRoot"
        Background="Transparent">
        <Grid.RowDefinitions>
            <RowDefinition
                Height="Auto" />
            <RowDefinition
                Height="*" />
        </Grid.RowDefinitions>

        <!--TitlePanel contains the name of the application and page title-->
        <StackPanel
            x:Name="TitlePanel"
            Grid.Row="0"
            Margin="12,17,0,28">
            <TextBlock
                x:Name="ApplicationTitle"
                Text="MY APPLICATION"
                Style="{StaticResource PhoneTextNormalStyle}" />
            <TextBlock
                x:Name="PageTitle"
                Text="page name"
                Margin="9,-7,0,0"
                Style="{StaticResource PhoneTextTitle1Style}" />
        </StackPanel>

        <!--ContentPanel - place additional content here-->
        <Grid
            x:Name="ContentPanel"
            Grid.Row="1"
            Margin="12,0,12,0">
            <Image
                ManipulationDelta="OnItemManipulationDelta"
                Height="150"
                HorizontalAlignment="Left"
                Margin="70,198,0,0"
                Name="image1"
                Stretch="Fill"
                VerticalAlignment="Top"
                Width="200"
                Source="/WindowsPhoneApplication2;component/animal_0066.jpg" />
        </Grid>
    </Grid>

and a bit of sketchy code;

  public partial class MainPage : PhoneApplicationPage
  {
    // Constructor
    public MainPage()
    {
      InitializeComponent();
    }
    void OnItemManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
      UIElement element = (UIElement)sender;

      TransformGroup group = element.RenderTransform as TransformGroup;

      if (group == null)
      {
        element.RenderTransformOrigin = new Point(0.5, 0.5);
        group = new TransformGroup();
        group.Children.Add(new ScaleTransform());
        group.Children.Add(new TranslateTransform());
        element.RenderTransform = group;
      }
      ScaleTransform scale = (ScaleTransform)group.Children[0];
      TranslateTransform translate = (TranslateTransform)group.Children[1];

      if (e.DeltaManipulation.Scale.X != 0.0f)
      {
        scale.ScaleX *= e.DeltaManipulation.Scale.X;
        scale.ScaleY *= e.DeltaManipulation.Scale.Y;
      }
      translate.X += e.DeltaManipulation.Translation.X;
      translate.Y += e.DeltaManipulation.Translation.Y;
    }
  }

then I can get at least the basics working;

image

What about gestures? There’s some support for gestures in the toolkit, so if I just add a reference to Microsoft.Phone.Controls.Toolkit then I can have “some UI” like;

        <Grid
            x:Name="ContentPanel"
            Grid.Row="1"
            Margin="12,0,12,0"
            Background="Lime"
            xmlns:tk="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit">
            <tk:GestureService.GestureListener>
                <tk:GestureListener
                    PinchCompleted="OnPinchCompleted"
                    DragCompleted="OnDragCompleted"
                    DoubleTap="OnDoubleTap" />
            </tk:GestureService.GestureListener>
            <TextBlock
                TextWrapping="Wrap"
                HorizontalAlignment="Center"
                VerticalAlignment="Center"
                FontSize="24"
                x:Name="txtGesture" />
        </Grid>

with some simple event handlers to pick up the gestures – this is nice;

  public partial class MainPage : PhoneApplicationPage
  {
    // Constructor
    public MainPage()
    {
      InitializeComponent();
    }
    void OnDragCompleted(object sender, DragCompletedGestureEventArgs e)
    {
      txtGesture.Text =
        string.Format("Drag {0}, {1}", e.HorizontalChange, e.VerticalChange);
    }
    void OnDoubleTap(object sender, GestureEventArgs e)
    {
      txtGesture.Text = "Double Tap";
    }
    private void OnPinchCompleted(object sender, PinchGestureEventArgs e)
    {
      txtGesture.Text =
        string.Format("Pinch {0}, {1}", e.DistanceRatio, e.TotalAngleDelta);
    }
  }
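Out of curiosity, the pinch values in that handler can be reproduced with a little vector arithmetic over the two touch points. This is my reading of what DistanceRatio and TotalAngleDelta represent – the helpers below are hypothetical illustrations, not the toolkit’s code;

```csharp
using System;

// My reading of the toolkit's pinch numbers: DistanceRatio is the current
// finger separation over the original separation, and TotalAngleDelta is
// how far the line between the two fingers has rotated, in degrees.
// Hypothetical helpers for illustration only.
public static class PinchMath
{
    public static double DistanceRatio(
        double ox1, double oy1, double ox2, double oy2,
        double cx1, double cy1, double cx2, double cy2)
    {
        return Distance(cx1, cy1, cx2, cy2) / Distance(ox1, oy1, ox2, oy2);
    }

    public static double TotalAngleDeltaDegrees(
        double ox1, double oy1, double ox2, double oy2,
        double cx1, double cy1, double cx2, double cy2)
    {
        double originalAngle = Math.Atan2(oy2 - oy1, ox2 - ox1);
        double currentAngle = Math.Atan2(cy2 - cy1, cx2 - cx1);
        return (currentAngle - originalAngle) * 180.0 / Math.PI;
    }

    static double Distance(double x1, double y1, double x2, double y2)
    {
        double dx = x2 - x1, dy = y2 - y1;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    public static void Main()
    {
        // Fingers start at (0,0) and (100,0), end at (0,0) and (0,200):
        // the separation doubles and the finger line rotates 90 degrees.
        Console.WriteLine(PinchMath.DistanceRatio(0, 0, 100, 0, 0, 0, 0, 200));
        Console.WriteLine(PinchMath.TotalAngleDeltaDegrees(0, 0, 100, 0, 0, 0, 0, 200));
    }
}
```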

Other gestures supported here include Tap, Flick and Hold.

Note that the XNA framework also has touch support on Windows Phone 7 and, from what I can see, the GestureListener here is using that to build its support (otherwise, I’d be asking why this GestureListener isn’t present in the full version of Silverlight).

Summarising

What if I wanted to do something dead simple? What if I wanted a UI that displayed two photos and, when I did a simple drag gesture across the screen, switched to the other photo?

And what if I wanted to do that in WPF, Silverlight on the desktop and Silverlight on the Windows Phone 7?

What’s the path of least resistance? I’ll put that in another post because this one has got too long…