Windows 7 and WPF 4.0 Multitouch: Inertia

WPF 4.0’s manipulation events certainly made it easier to write an application that supports multitouch gestures. After you start playing with these gestures, however, you’ve found yourself disappointed.

You want more. There’s something missing. It’s just not like it used to be. “It’s not you, Manipulation events,” you say. “No…it’s me.” But then? A spark! You find out something new about them! Your relationship is saved! “Why, Manipulation events, I never knew you could handle…inertia!”

Having a long-term relationship with APIs aside, you’ve certainly landed on something interesting. WPF 4.0’s Manipulation events can also be used to handle inertia, which allows your UI to look a little more natural and fun.

For those of you who didn’t pay attention in 4th grade science, inertia is described by Newton’s first law of motion: objects in motion tend to stay in motion, unless acted upon by an outside force. In other words: Ugg move stuff. Ugg let go. Stuff still move. Ugg hungry.

Science.

The idea behind inertia in WPF’s Manipulation events is to make manipulated objects behave as a user would expect. When a user spins a card on a table and lets go, the card keeps spinning until it decelerates to a stop. Adding inertia to your manipulable objects makes users giddy to see things on a computer imitate the physical world.

Let’s start with the same Window as I used in the manipulation post.

In order to handle inertia, we need to create an event handler for our new inertia event, ManipulationInertiaStarting. This goes right along with your ManipulationDelta and ManipulationStarting events.

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        ManipulationStarting="Window_ManipulationStarting" ManipulationDelta="HandleManipulation" ManipulationInertiaStarting="HandleInertia">

The rest of the XAML is the same.

    <Grid x:Name="AppGrid">
        <Rectangle Fill="Blue" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect1" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="250" OffsetY="200"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
 
        <Rectangle Fill="Red" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect2" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="50" OffsetY="50"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
    </Grid>
</Window>

For the code behind, we once again see our old friends.

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }
 
    private void Window_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        e.ManipulationContainer = this;
        e.Handled = true;
    }

Once the user stops performing the gesture, ManipulationInertiaStarting is fired. Our event handler, HandleInertia, is actually a very simple method. It is used to set the values of deceleration for the various manipulation components.

You can set deceleration for each of the transformations supported by manipulation: translation, scaling, and rotation. Don’t worry too much about the specific numbers here (I believe I originally pulled them from the inertia explanation on MSDN). You don’t have to account for your exact DPI to get the deceleration precisely right in physical terms. These values work pretty well, though.
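To make the unit conversion concrete, here is a small sketch (plain C#, no WPF required; the variable names are my own) of the arithmetic behind the translation value. WPF expects DesiredDeceleration in device-independent pixels per millisecond squared:

```csharp
// Sketch of the unit conversion used in the handler below.
double inchesPerSecondSquared = 10.0;   // how quickly movement should slow
double dipsPerInch = 96.0;              // WPF's device-independent pixels per inch
double msPerSecond = 1000.0;

double desiredDeceleration =
    inchesPerSecondSquared * dipsPerInch / (msPerSecond * msPerSecond);
// 10 * 96 / 1,000,000 = 0.00096 DIPs per millisecond squared
```

Note the parentheses around the millisecond conversion; writing `/ 1000.0 * 1000.0` without them would multiply back instead of dividing, yielding a value a million times too large.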

    private void HandleInertia(object sender, ManipulationInertiaStartingEventArgs e)
    {
        // Decrease the velocity of the Rectangle's movement by 
        // 10 inches per second every second.
        // (10 inches * 96 pixels per inch) / (1000 ms)^2
        e.TranslationBehavior.DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0);
 
        // Decrease the velocity of the Rectangle's resizing by 
        // 0.1 inches per second every second.
        // (0.1 inches * 96 pixels per inch) / (1000 ms)^2
        e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);
 
        // Decrease the velocity of the Rectangle's rotation rate by 
        // 2 rotations per second every second.
        // (2 * 360 degrees) / (1000 ms)^2
        e.RotationBehavior.DesiredDeceleration = 720.0 / (1000.0 * 1000.0);
 
        e.Handled = true;
    }

Once these deceleration values are set, WPF once again fires the ManipulationDelta event – if you recall, this is the event whose handler applies all of the transformations. WPF populates the ManipulationDeltaEventArgs with values based on the previous velocity, decreased by our deceleration values. It continues to fire the event with diminishing values, causing the object to slowly come to a stop.

Since we are just reusing our already-defined ManipulationDelta handler, inertia is an incredibly easy addition to make to your manipulable objects.

    private void HandleManipulation(object sender, ManipulationDeltaEventArgs e)
    {
        Rectangle rectToManipulate = e.OriginalSource as Rectangle;
 
        Rect shapeBounds = rectToManipulate.RenderTransform.TransformBounds(new Rect(rectToManipulate.RenderSize));
        Rect containingRect = new Rect(((FrameworkElement)this).RenderSize);
        ManipulationDelta manipDelta = e.DeltaManipulation;

The only change we have to make to our handler is a check to make sure our object doesn’t fly away. This is a simple solution: if the object goes outside the window, we complete the inertia and provide a bounce effect to give the user feedback that it has reached the edge of the window. ***Correction: the e.Complete() method now appears to cancel the ReportBoundaryFeedback method (I wrote this application while everything was in beta). You can have the bounce effect without the e.Complete(), but your rectangle then flies out of the window. Let me know if you have a simple solution for allowing both to happen, as I likely won’t put any effort into it…*** You could easily change the behavior here to make the object react more realistically to its bounds if you like.

        // Check if the rectangle is completely inside the window.
        // If it is not and inertia is occurring, stop the manipulation.
        if (e.IsInertial && !containingRect.Contains(shapeBounds))
        {
            // if both are uncommented, e.Complete() overrides e.ReportBoundaryFeedback()
 
            // comment out for a bounce, uncomment to stop the rectangle
            e.Complete();
            // comment out to stop the rectangle, uncomment for a bounce
            // e.ReportBoundaryFeedback(e.DeltaManipulation);
        }
 
        Matrix rectsMatrix = ((MatrixTransform)rectToManipulate.RenderTransform).Matrix;
        Point rectManipOrigin = rectsMatrix.Transform(new Point(rectToManipulate.ActualWidth / 2, rectToManipulate.ActualHeight / 2));
 
        // Rotate the Rectangle.
        rectsMatrix.RotateAt(manipDelta.Rotation, rectManipOrigin.X, rectManipOrigin.Y);
 
        // Resize the Rectangle. To keep it square, you could 
        // pass Scale.X for both axes instead.
        rectsMatrix.ScaleAt(manipDelta.Scale.X, manipDelta.Scale.Y, rectManipOrigin.X, rectManipOrigin.Y);
 
        // Move the Rectangle.
        rectsMatrix.Translate(manipDelta.Translation.X, manipDelta.Translation.Y);
 
        // Apply the changes to the Rectangle.
        rectToManipulate.RenderTransform = (MatrixTransform)(new MatrixTransform(rectsMatrix).GetAsFrozen());
 
        e.Handled = true;
    }
}

That concludes my series on WPF 4.0 multitouch. Let me know in the comments what kinds of UI elements you’ve touchified with these new events.

Windows 7 and WPF 4.0 Multitouch: Manipulation

In a recent post, I showed you how to react to touch events in WPF 4.0. You can use those events to implement the showcase multitouch gestures: scaling, rotation, and translation. It’s not too hard. Really, I’ve done it. Just dust off your geometry and trigonometry hats and get to it.

Are you done yet? No? Too lazy? Well, how about we make this easier. As I like to say regarding programmers: if necessity is the mother of invention, laziness is most certainly the father.

Luckily for us, Windows 7 has multitouch gesture recognition built in, and WPF now supports listening for it in its upcoming 4.0 release. Here’s how you can implement these gestures in your application.

We’ll first define a window that will contain two rectangles to manipulate.

The containing control defines handlers for the ManipulationStarting and ManipulationDelta events. These events are fired when a multitouch gesture is first recognized and when it changes, respectively.

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        ManipulationStarting="Window_ManipulationStarting" ManipulationDelta="HandleManipulation">

The IsManipulationEnabled property is set to true for each object that we plan to manipulate. This property tells WPF to watch for gestures on manipulable controls. I would guess that forcing you to explicitly define the elements that react to gestures improves the performance of gesture recognition.

    <Grid x:Name="AppGrid">
        <Rectangle Fill="Blue" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect1" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="250" OffsetY="200"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
 
        <Rectangle Fill="Red" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect2" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="50" OffsetY="50"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
    </Grid>
</Window>
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

The ManipulationStarting handler sets up the manipulation container in order to specify a frame of reference that the values will be relative to. For example, it establishes the origin (0,0) for x and y coordinates.

    private void Window_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        e.ManipulationContainer = this;
        e.Handled = true;
    }

The ManipulationDelta handler is used to perform the transformations as the gesture is being performed. It will fire continuously as long as the gesture is changing.

    private void HandleManipulation(object sender, ManipulationDeltaEventArgs e)
    {
        Rectangle rectToManipulate = e.OriginalSource as Rectangle;
        ManipulationDelta manipDelta = e.DeltaManipulation;

First, grab the rectangle’s current transform matrix so we can use that as a baseline.

        Matrix rectsMatrix = ((MatrixTransform)rectToManipulate.RenderTransform).Matrix;

Re-establishing the baseline each time is important, as the values that ManipulationDelta sends are not absolute. Each time the handler is called, the values are relative to the previous event firing. For example, if a user gestures a total rotation of 30 degrees, the events would look something like this:

Event #   e.DeltaManipulation.Rotation   Total Rotation
   1                  5                         5
   2                  5                        10
   3                  5                        15
   4                  5                        20
   5                  5                        25
   6                  5                        30
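Because the deltas are relative, a handler that needs the running total has to accumulate it itself. A minimal sketch (plain C#; the delta values are hypothetical):

```csharp
// Each ManipulationDelta event reports only the change since the last event,
// so a cumulative value must be accumulated across firings.
double totalRotation = 0.0;
double[] deltaRotations = { 5.0, 5.0, 5.0, 5.0, 5.0, 5.0 }; // six events of 5 degrees each

foreach (double delta in deltaRotations)
{
    totalRotation += delta; // analogous to reading e.DeltaManipulation.Rotation each firing
}
// totalRotation is now 30 degrees
```

If you do want the totals directly, ManipulationDeltaEventArgs also exposes a CumulativeManipulation property alongside DeltaManipulation.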

Next, we establish an origin to use for the following manipulations. This specifies the point around which the rectangle will rotate and scale. Here, we’re setting it up at the middle of the rectangle.

        Point rectManipOrigin = rectsMatrix.Transform(new Point(rectToManipulate.ActualWidth / 2, rectToManipulate.ActualHeight / 2));

Finally, we apply the transformations to the baseline matrix and set this matrix to the sending rectangle’s RenderTransform as frozen.

        rectsMatrix.RotateAt(manipDelta.Rotation, rectManipOrigin.X, rectManipOrigin.Y);
        rectsMatrix.ScaleAt(manipDelta.Scale.X, manipDelta.Scale.Y, rectManipOrigin.X, rectManipOrigin.Y);
        rectsMatrix.Translate(manipDelta.Translation.X, manipDelta.Translation.Y);
 
        rectToManipulate.RenderTransform = (MatrixTransform)(new MatrixTransform(rectsMatrix).GetAsFrozen());
        e.Handled = true;
    }
}

See? Easy. Now, maybe you should get to that housework you’ve been putting off.

Windows 7 and WPF 4.0 Multitouch: Touch Points

Update: if you’re looking to just implement standard multitouch gestures, check out my post on manipulation.

One of the most popular posts on this blog is my writeup on getting multitouch events in Windows 7 using WPF and .NET 3.5. Now that .NET 4.0 is in open beta, it’s time for an update. That’s a lot of periods in two sentences.

Microsoft has made it much easier to access touch events in WPF. The touch events are analogous to the mouse events you are likely very comfortable with, but carry a little more information in order to support multitouch.

I’ll lay out a full application for you to play with. First, the XAML of the main window class:

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525" 
        TouchDown="Window_TouchDown" TouchMove="Window_TouchMove" TouchUp="Window_TouchUp">
    <Grid x:Name="AppGrid">
 
    </Grid>
</Window>

Did you see that? I hooked up multitouch events in my XAML. GAME CHANGER.

Yes, it is that easy. You are already set up to receive touch events. Wizardry!

Now, let’s do something worthwhile with our newfound power. This application will create a square for every touch point and show its associated ID. This kind of application is useful when messing with new hardware to see how accurate the touch input is. It is basically an expanded version of the last example, supporting INFINITE touch points. Infinite up to a certain power of 2, anyway.

We’ll start with an array of colors to choose from for our infinite points.

public partial class MainWindow : Window
{
    Brush[] ColorList = new Brush[] { Brushes.Black, Brushes.Yellow, Brushes.Turquoise, Brushes.Purple, Brushes.Orange, Brushes.Navy, Brushes.Pink, Brushes.Brown, Brushes.DarkKhaki };
    public MainWindow()
    {
        InitializeComponent();
    }

When a new touch makes contact, we create a Border and move it to the corresponding location using a TranslateTransform. We also create a child TextBlock in order to display the touch point’s ID.

The ID is very important when doing something more interesting with multitouch, as it signifies a unique finger. If you are coding any gestures, you’ll need to make sure you keep track of your fingers. Actually, that’s probably a pretty sound piece of advice for life in general.

    private void Window_TouchDown(object sender, TouchEventArgs e)
    {
        Border newTouch = new Border();
        TextBlock idText = new TextBlock();
        int id = e.GetTouchPoint(this).TouchDevice.Id;
        idText.Text = id.ToString();
        idText.Foreground = Brushes.White;
        newTouch.Child = idText;
        newTouch.Background = ColorList[id % ColorList.Length];
        newTouch.Width = 20;
        newTouch.Height = 20;
        newTouch.HorizontalAlignment = System.Windows.HorizontalAlignment.Left;
        newTouch.VerticalAlignment = System.Windows.VerticalAlignment.Top;
        AppGrid.Children.Add(newTouch);
        Point touchLoc = e.GetTouchPoint(this).Position;
        newTouch.RenderTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
    }

We update the position on every subsequent move event, finding the associated Border by its child TextBlock.

    private void Window_TouchMove(object sender, TouchEventArgs e)
    {
        foreach (UIElement child in AppGrid.Children)
        {
            if (child is Border)
            {
                TouchPoint touch = e.GetTouchPoint(this);
                if (((TextBlock)((Border)child).Child).Text == touch.TouchDevice.Id.ToString())
                {
                    Point touchLoc = touch.Position;
                    child.RenderTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
                    break;
                }
            }
        }
    }

After the touch is released, we remove the associated border.

    private void Window_TouchUp(object sender, TouchEventArgs e)
    {
        foreach (UIElement child in AppGrid.Children)
        {
            if (child is Border)
            {
                TouchPoint touch = e.GetTouchPoint(this);
                if (((TextBlock)((Border)child).Child).Text == touch.TouchDevice.Id.ToString())
                {
                    AppGrid.Children.Remove(child);
                    break;
                }
            }
        }
    }
}

There. Easy! Keep an eye out for a post regarding the new gesture events.

Windows 7 Multitouch Using WPF 3.5

Update: If you’re using .NET 4.0, be sure to check out my posts about the new touch events.

Finally!  Another post about programming!  I know!  And Windows 7, too!  That sure is topical!  This one is for all of you developers running the Win7 beta on the HP TouchSmart (moneyhat go).

Win7 is supposed to woo and wow you with new features like multi-touch support.  If you’re curious how it all works, I’d suggest you watch this great PDC 2008 video on the subject.  Windows will give you everything you need to fancify your touch application, once you’ve set it up to do so.  The video tells you how to get multi-touch working in unmanaged code.  There are also some examples out there showing how to use interop to access this from C#.  We’ve bridged the gap from unmanaged to managed code – so what am I still writing this post for?

Well, WPF is a bit different.  Not only is native multi-touch not present in WPF right now (look forward to .NET 4.0 some time after Win7 releases), but you actually can’t use interop to support multi-touch in your applications.  Yeah, I know.  Something’s amiss when interop fails.

Actually, it is just that WPF doesn’t accept the WM_TOUCH messages that are sent to windows when the user touches the screen.  Since you don’t get this notification, you can’t use interop to capture Win7 gestures or raw touch data.

Hold on!  Don’t run to make your shiny, new, and intuitive application in C++ just yet.

As Anson Tao alludes to in the Q&A session after the presentation in the video above, you can receive the raw data from stylus events in WPF 3.5 SP1, which is already released.  However, you have to do just a tad bit of fiddling to get it working.

Here’s an application that will show you how to access this information. It just moves two rectangles to the two points you touch on the window.

I’ll start with the simple XAML:

<Window x:Class="MultitouchTest.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="800" Width="1200">
    <Canvas>
        <Rectangle Canvas.Left="-20" Canvas.Top="0" Height="20" Name="Touch1" Stroke="Black" Fill="Black"  Width="20" />
        <Rectangle Canvas.Left="-20" Canvas.Top="0" Height="20" Name="Touch2" Stroke="Red" Width="20" Fill="Red" />
    </Canvas>
</Window>

Now, the business logic:

public partial class Window1 : Window
{
    #region Class Variables
 
    private int Touch1ID = 0;    // id for first touch contact
    private int Touch2ID = 0;    // id for second touch contact
 
    #endregion
 
    #region P/Invoke
 
    // just a little interop.  it's different this time!
    [DllImport("user32.dll")]
    public static extern bool SetProp(IntPtr hWnd, string lpString, IntPtr hData);
 
    #endregion
 
    #region Constructors/Initialization
 
    public Window1()
    {
        InitializeComponent();
 
        // here's the first thing you need to do.  upon window load, you want to set the tablet
        // property to receive multi-touch data.  the window must be loaded to ensure its handle is created.
        this.Loaded += new RoutedEventHandler(
           delegate(object sender, RoutedEventArgs args)
           {
               var source = new WindowInteropHelper(this);
 
               SetProp(source.Handle,
                   "MicrosoftTabletPenServiceProperty", new IntPtr(0x01000000));
 
           });
 
        // then, simply subscribe to the stylus events like normal.  you'll get an event for each contact.
        // so, when you move both fingers, you get a StylusMove event for each individual finger
        this.StylusDown += new StylusDownEventHandler(Window1_StylusDown);
        this.StylusMove += new StylusEventHandler(Window1_StylusMove);
        this.StylusUp += new StylusEventHandler(Window1_StylusUp);
    }
 
    #endregion
 
    #region Touch Events
 
    void Window1_StylusDown(object sender, StylusDownEventArgs e)
    {
        Point p = e.GetPosition(this);   // get the location for this contact
 
        // attribute an id with a touch point
        if (Touch1ID == 0)
        {
            Touch1ID = e.StylusDevice.Id;
            // move the rectangle to the given location
            Touch1.SetValue(Canvas.LeftProperty, p.X - Touch1.Width / 2);
            Touch1.SetValue(Canvas.TopProperty, p.Y - Touch1.Height / 2);
        }
        else if (Touch2ID == 0)
        {
            Touch2ID = e.StylusDevice.Id;
            // move the rectangle to the given location
            Touch2.SetValue(Canvas.LeftProperty, p.X - Touch2.Width / 2);
            Touch2.SetValue(Canvas.TopProperty, p.Y - Touch2.Height / 2);
        }
    }
 
    void Window1_StylusMove(object sender, StylusEventArgs e)
    {
        Point p = e.GetPosition(this);
        // determine which contact this belongs to
        if (Touch1ID == e.StylusDevice.Id)
        {
            // move the rectangle to the given location
            Touch1.SetValue(Canvas.LeftProperty, p.X - Touch1.Width / 2);
            Touch1.SetValue(Canvas.TopProperty, p.Y - Touch1.Height / 2);
        }
        else if (Touch2ID == e.StylusDevice.Id)
        {
            // move the rectangle to the given location
            Touch2.SetValue(Canvas.LeftProperty, p.X - Touch2.Width / 2);
            Touch2.SetValue(Canvas.TopProperty, p.Y - Touch2.Height / 2);
        }
    }
 
    void  Window1_StylusUp(object sender, StylusEventArgs e)
    {
         // reinitialize touch id and hide the rectangle
        if (e.StylusDevice.Id == Touch1ID)
        {
            Touch1.SetValue(Canvas.LeftProperty, -Touch1.Width);
            Touch1ID = 0;
        }
        else if (e.StylusDevice.Id == Touch2ID)
        {
            Touch2.SetValue(Canvas.LeftProperty, -Touch2.Width);
            Touch2ID = 0;
        }
    }
 
    #endregion
}