Fast User Switching and Session Changes in WPF

While I haven’t been working in C#/WPF in a while, I figured I’d go ahead and complete a draft that’s been sitting in my queue.

Let's switch. Your pink background makes my eyes bleed.

Fast user switching is a concept Microsoft first included with Windows XP that allows the OS to quickly switch to another user and back by keeping the first user’s applications running while the second uses the computer.

This sounds like a great idea, and for the most part, it is; however, it places a new responsibility on application developers, because the context in which an application is running could change at any time.

That is, user1 could be using app.exe, switch to user2, and user2 could use app.exe. Then you have two users using the same executable at the same time, and user1 expects that app.exe will retain his place in the app for when he returns.
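Each process remembers the session it was launched in, so two instances of app.exe started by different users will report different session IDs. A quick sketch to see this for yourself (a standalone console program, not part of the original post):

```csharp
using System;
using System.Diagnostics;

class SessionInfo
{
    static void Main()
    {
        // Every logged-on user gets a distinct Windows session, and a
        // process carries the ID of the session that launched it.
        Console.WriteLine("Running in session " + Process.GetCurrentProcess().SessionId);
    }
}
```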

Now, before you start shouting, “Anarchy!”, note that there are a variety of things you can do to address this responsibility.

You can just tell user2 that he can’t use the application until user1 closes it. This is a poor approach, but you can feel free to annoy your users with how lazy you are if you like.

Alternatively, you can have your application listen for the window messages that signal a change in the computer’s session.

Just as we have before, we’ll be using our friend, WndProc, to capture messages that WPF doesn’t fire for us. The one we’re looking for this time is WM_WTSSESSION_CHANGE, which will notify the application that the Windows session has changed. In order to receive notifications for this event, we’ll have to register using the function WTSRegisterSessionNotification.

Let’s kick things off with some imports for the Win32 APIs we’ll be using and the constants associated with them (see the MSDN links in the previous paragraph).

    [DllImport("WtsApi32.dll")]
    private static extern bool WTSRegisterSessionNotification(IntPtr hWnd, [MarshalAs(UnmanagedType.U4)]int dwFlags);
    [DllImport("WtsApi32.dll")]
    private static extern bool WTSUnRegisterSessionNotification(IntPtr hWnd);
    [DllImport("kernel32.dll")]
    public static extern int WTSGetActiveConsoleSessionId();
 
    // dwFlags options for WTSRegisterSessionNotification
    const int NOTIFY_FOR_THIS_SESSION = 0;     // Only session notifications involving the session attached to by the window identified by the hWnd parameter value are to be received.
    const int NOTIFY_FOR_ALL_SESSIONS = 1;     // All session notifications are to be received.
 
    // session change message ID
    const int WM_WTSSESSION_CHANGE = 0x2b1;
 
    public enum WTSMessage
    {
        // WParam values that can be received:
        WTS_CONSOLE_CONNECT = 0x1, // A session was connected to the console terminal.
        WTS_CONSOLE_DISCONNECT = 0x2, // A session was disconnected from the console terminal.
        WTS_REMOTE_CONNECT = 0x3, // A session was connected to the remote terminal.
        WTS_REMOTE_DISCONNECT = 0x4, // A session was disconnected from the remote terminal.
        WTS_SESSION_LOGON = 0x5, // A user has logged on to the session.
        WTS_SESSION_LOGOFF = 0x6, // A user has logged off the session.
        WTS_SESSION_LOCK = 0x7, // A session has been locked.
        WTS_SESSION_UNLOCK = 0x8, // A session has been unlocked.
        WTS_SESSION_REMOTE_CONTROL = 0x9 // A session has changed its remote controlled status.
    }

The first thing we’ll need to do is register for notifications. You’ll want to do this early in your application’s life, in case your user is fast and/or part of your QA; the window’s SourceInitialized event is a good spot, since the window handle exists by then. After registration succeeds, we capture the initial session ID to use for comparisons later on.

    private int initialSessionId;

    private void Window_SourceInitialized(object sender, EventArgs e)
    {
        if (!WTSRegisterSessionNotification((new WindowInteropHelper(this)).Handle, NOTIFY_FOR_THIS_SESSION))
        {
            // throw an exception - registration has failed!
        }
        initialSessionId = Process.GetCurrentProcess().SessionId;
    }

When our WndProc is executed, we can check the message to see if it corresponds to the WM_WTSSESSION_CHANGE we’ve defined (confused? should’ve clicked that link earlier…).

We also grab the active console session ID so we can compare it against our initial value. After all, we may not want to do anything if the session ID hasn’t changed.

It’s also useful to check the wParam value in order to understand the type of session change that has occurred. If a user logs off, we can auto-save or clean up unneeded resources in preparation for the next user to log in. Alternatively, if it’s a remote connection, we can show low-res images to make repaints faster.

    private IntPtr WndProc(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        switch (msg)
        {
            case WM_WTSSESSION_CHANGE:
                {
                    int evtSessionID = WTSGetActiveConsoleSessionId();
                    WTSMessage wParamValue = (WTSMessage)wParam.ToInt32();
                    Console.WriteLine("Session message " + wParamValue + " Active Session ID: " + evtSessionID + " Current Process Session ID: " + initialSessionId);
                    // do something useful
                }
                break;
        }
        return IntPtr.Zero;
    }
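What “something useful” looks like depends on your application. As a sketch (AutoSaveDocuments, ReleaseCachedResources, and useLowResImages are hypothetical placeholders for your own logic, not real APIs):

```csharp
// Hypothetical reactions to specific session-change reasons.
// The helper methods and field below are placeholders you'd
// implement yourself.
private void HandleSessionChange(WTSMessage wParamValue)
{
    switch (wParamValue)
    {
        case WTSMessage.WTS_SESSION_LOGOFF:
            AutoSaveDocuments();        // the user is leaving; persist their state
            ReleaseCachedResources();   // free memory before the next user logs in
            break;
        case WTSMessage.WTS_REMOTE_CONNECT:
            useLowResImages = true;     // cheaper repaints over a remote connection
            break;
        case WTSMessage.WTS_REMOTE_DISCONNECT:
            useLowResImages = false;
            break;
    }
}
```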

Lastly, let’s be a good neighbor and unregister our subscription for notifications using WTSUnRegisterSessionNotification. Be sure to do this before your window handle is destroyed, such as in your Window_Closing event.

    WTSUnRegisterSessionNotification((new WindowInteropHelper(this)).Handle);

All done! Now, let me know how you use it.

Windows 7 and WPF 4.0 Multitouch: Inertia

WPF 4.0’s manipulation events certainly make it easier to write an application that supports multitouch gestures. After you start playing with these gestures, however, you may find yourself disappointed.

You want more. There’s something missing. It’s just not like it used to be. “It’s not you, Manipulation events,” you say. “No…it’s me.” But then? A spark! You find out something new about them! Your relationship is saved! “Why, Manipulation events, I never knew you could handle…inertia!”

Long-term relationships with APIs aside, you’ve certainly landed on something interesting. WPF 4.0’s Manipulation events can also handle inertia, which allows your UI to look a little more natural and fun.

For those of you who didn’t pay attention in 4th grade science, inertia is described by Newton’s First Law of Motion: objects in motion tend to stay in motion, unless acted upon by an outside force. In other words: Ugg move stuff. Ugg let go. Stuff still move. Ugg hungry.

Science.

The idea behind inertia in WPF’s Manipulation events is to make objects that are being manipulated behave as a user would expect. When a user spins a card on a table, he can let go and it will continue spinning until it decelerates to a stop. Adding inertia to your manipulable objects makes users giddy to see things on a computer imitate the physical world.

Let’s start with the same Window as I used in the manipulation post.

In order to handle inertia, we need to create an event handler for our new inertia event, ManipulationInertiaStarting. This goes right along with your ManipulationDelta and ManipulationStarting events.

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        ManipulationStarting="Window_ManipulationStarting" ManipulationDelta="HandleManipulation" ManipulationInertiaStarting="HandleInertia">

The rest of the XAML is the same.

    <Grid x:Name="AppGrid">
        <Rectangle Fill="Blue" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect1" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="250" OffsetY="200"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
 
        <Rectangle Fill="Red" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect2" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="50" OffsetY="50"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
    </Grid>
</Window>

For the code behind, we once again see our old friends.

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }
 
    private void Window_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        e.ManipulationContainer = this;
        e.Handled = true;
    }

Once the user stops performing the gesture, ManipulationInertiaStarting is fired. Our event handler, HandleInertia, is actually a very simple method. It is used to set the values of deceleration for the various manipulation components.

You can set deceleration for each of the transformations supported by manipulation: translation, scaling, and rotation. Don’t worry too much about the specific numbers here (I pulled them from the inertia explanation on MSDN originally, I think). You don’t need to factor in your exact DPI to get the deceleration physically precise; these values work pretty well.

    private void HandleInertia(object sender, ManipulationInertiaStartingEventArgs e)
    {
        // Decrease the velocity of the Rectangle's movement by
        // 10 inches per second every second.
        // (10 inches * 96 pixels per inch / (1000ms * 1000ms))
        e.TranslationBehavior.DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0);

        // Decrease the velocity of the Rectangle's resizing by
        // 0.1 inches per second every second.
        // (0.1 inches * 96 pixels per inch / (1000ms * 1000ms))
        e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);

        // Decrease the velocity of the Rectangle's rotation rate by
        // 2 rotations per second every second.
        // (2 * 360 degrees / (1000ms * 1000ms))
        e.RotationBehavior.DesiredDeceleration = 720.0 / (1000.0 * 1000.0);
 
        e.Handled = true;
    }
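If you’d rather think in physical units, the conversion buried in those comments can be factored into a small helper (a sketch of my own, assuming the standard 96 DPI that WPF uses for device-independent pixels):

```csharp
// Converts a deceleration expressed in inches per second squared into
// the units DesiredDeceleration expects: device-independent pixels
// (1/96 of an inch) per millisecond squared. Assumes 96 DPI.
static double InchesPerSecSqToDipPerMsSq(double inchesPerSecSq)
{
    return inchesPerSecSq * 96.0 / (1000.0 * 1000.0);
}

// Usage inside HandleInertia:
// e.TranslationBehavior.DesiredDeceleration = InchesPerSecSqToDipPerMsSq(10.0); // 0.00096
// e.ExpansionBehavior.DesiredDeceleration  = InchesPerSecSqToDipPerMsSq(0.1);
```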

Once these deceleration values are set, WPF once again fires the ManipulationDelta event – if you recall, this is the event whose handler applies all of the transformations. WPF populates the ManipulationDeltaEventArgs with the previous values, decreased by our deceleration values, and continues to fire the event with diminishing values, causing the object to slowly come to a stop.

Since we are just reusing our already-defined ManipulationDelta handler, inertia is an incredibly easy addition to make to your manipulable objects.

    private void HandleManipulation(object sender, ManipulationDeltaEventArgs e)
    {
        Rectangle rectToManipulate = e.OriginalSource as Rectangle;
 
        Rect shapeBounds = rectToManipulate.RenderTransform.TransformBounds(new Rect(rectToManipulate.RenderSize));
        Rect containingRect = new Rect(((FrameworkElement)this).RenderSize);
        ManipulationDelta manipDelta = e.DeltaManipulation;

The only change we have to make to our handler is a check to make sure our object doesn’t fly away. This is a simple solution: if the object goes outside the window, we complete the inertia and provide a bounce effect to give the user feedback that it has reached the edge of the screen. ***Correction: the e.Complete() method now appears to cancel the ReportBoundaryFeedback method (I wrote this application while everything was in beta). You can have the bounce effect without the e.Complete(), but your rectangle then flies out of the window. Let me know if you have a simple solution for allowing both to happen, as I likely won’t put any effort into it…*** You could easily change the behavior here to make the object react more realistically to its bounds if you like.

        // Check if the rectangle is completely in the window.
        // If it is not and inertia is occurring, stop the manipulation.
        if (e.IsInertial && !containingRect.Contains(shapeBounds))
        {
            // if both are uncommented, e.Complete() overrides e.ReportBoundaryFeedback()
 
            // comment out for a bounce, uncomment to stop the rectangle
            e.Complete();
            // comment out to stop the rectangle, uncomment for a bounce
            // e.ReportBoundaryFeedback(bounceDelta);
        }
 
        Matrix rectsMatrix = ((MatrixTransform)rectToManipulate.RenderTransform).Matrix;
        Point rectManipOrigin = rectsMatrix.Transform(new Point(rectToManipulate.ActualWidth / 2, rectToManipulate.ActualHeight / 2));
 
        // Rotate the Rectangle.
        rectsMatrix.RotateAt(manipDelta.Rotation, rectManipOrigin.X, rectManipOrigin.Y);
 
        // Resize the Rectangle.
        rectsMatrix.ScaleAt(manipDelta.Scale.X, manipDelta.Scale.Y, rectManipOrigin.X, rectManipOrigin.Y);
 
        // Move the Rectangle.
        rectsMatrix.Translate(manipDelta.Translation.X, manipDelta.Translation.Y);
 
        // Apply the changes to the Rectangle.
        rectToManipulate.RenderTransform = (MatrixTransform)(new MatrixTransform(rectsMatrix).GetAsFrozen());
 
        e.Handled = true;
    }
}

That concludes my series on WPF 4.0 multitouch. Let me know in the comments what kinds of UI elements you’ve touchified with these new events.

Windows 7 and WPF 4.0 Multitouch: Manipulation

In a recent post, I showed you how to react to touch events in WPF 4.0. You can use that to implement the showcase multitouch gestures: scaling, rotating, and translation. It’s not too hard. Really, I’ve done it. Just dust off your geometry and trigonometry hats and get to it.

Are you done yet? No? Too lazy? Well, how about we make this easier. As I like to say regarding programmers: if necessity is the mother of invention, laziness is most certainly the father.

Luckily for us, Windows 7 has multitouch gesture recognition built in, and WPF now supports listening for it in its upcoming 4.0 release. Here’s how you can implement these gestures in your application.

We’ll first define a window that will contain two rectangles to manipulate.

The containing control defines handlers for the ManipulationStarting and ManipulationDelta events. These events are fired when a multitouch gesture is first recognized and when it changes, respectively.

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        ManipulationStarting="Window_ManipulationStarting" ManipulationDelta="HandleManipulation">

The IsManipulationEnabled property is set to true for each object that we plan to manipulate. This property tells WPF to watch for gestures on manipulable controls. I would guess that forcing you to explicitly define the elements that react to gestures improves the performance of gesture recognition.

    <Grid x:Name="AppGrid">
        <Rectangle Fill="Blue" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect1" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="250" OffsetY="200"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
 
        <Rectangle Fill="Red" Height="100" Width="200" VerticalAlignment="Top" HorizontalAlignment="Left" x:Name="ManRect2" IsManipulationEnabled="True">
            <Rectangle.RenderTransform>
                <MatrixTransform>
                    <MatrixTransform.Matrix>
                        <Matrix OffsetX="50" OffsetY="50"/>
                    </MatrixTransform.Matrix>
                </MatrixTransform>
            </Rectangle.RenderTransform>
        </Rectangle>
    </Grid>
</Window>
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

The ManipulationStarting handler sets up the manipulation container in order to specify a frame of reference that the values will be relative to. For example, it establishes the origin (0,0) for x and y coordinates.

    private void Window_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        e.ManipulationContainer = this;
        e.Handled = true;
    }

The ManipulationDelta handler is used to perform the transformations as the gesture is being performed. It will fire continuously as long as the gesture is changing.

    private void HandleManipulation(object sender, ManipulationDeltaEventArgs e)
    {
        Rectangle rectToManipulate = e.OriginalSource as Rectangle;
        ManipulationDelta manipDelta = e.DeltaManipulation;

First, grab the rectangle’s current transform matrix so we can use that as a baseline.

        Matrix rectsMatrix = ((MatrixTransform)rectToManipulate.RenderTransform).Matrix;

Re-establishing the baseline each time is important, as the values that the ManipulationDelta sends are not absolute. Each time the handler is called, the values are relative to the previous event firing. For example, if a user gestures a total rotation of 30 degrees, the events would look something like this:

    Event #   e.DeltaManipulation.Rotation   Total Rotation
    1         5                              5
    2         5                              10
    3         5                              15
    4         5                              20
    5         5                              25
    6         5                              30

Next, we establish an origin to use for the following manipulations. This specifies the point around which the rectangle will rotate and scale. Here, we’re setting it up at the middle of the rectangle.

        Point rectManipOrigin = rectsMatrix.Transform(new Point(rectToManipulate.ActualWidth / 2, rectToManipulate.ActualHeight / 2));

Finally, we apply the transformations to the baseline matrix and set this matrix to the sending rectangle’s RenderTransform as frozen.

        rectsMatrix.RotateAt(manipDelta.Rotation, rectManipOrigin.X, rectManipOrigin.Y);
        rectsMatrix.ScaleAt(manipDelta.Scale.X, manipDelta.Scale.Y, rectManipOrigin.X, rectManipOrigin.Y);
        rectsMatrix.Translate(manipDelta.Translation.X, manipDelta.Translation.Y);
 
        rectToManipulate.RenderTransform = (MatrixTransform)(new MatrixTransform(rectsMatrix).GetAsFrozen());
        e.Handled = true;
    }
}

See? Easy. Now, maybe you should get to that housework you’ve been putting off.

Just a Bit: Improving Graphics Card Performance

I spent a little time with some people over at AMD the other day, looking at ways to better utilize the video card using WPF.

A useful tidbit that came from that was using the Freeze method on UI elements that are being manipulated. This tells the video card to use the texture already in video memory instead of unloading the old one, performing the manipulation, and loading a new texture into memory. Since texture uploads are among the most expensive operations for a video card, using Freezable members can make things look much smoother.

Here’s an example:

private void Window_TouchMove(object sender, TouchEventArgs e)
{
    Point touchLoc = e.GetTouchPoint(this).Position;
    TranslateTransform unfrozenTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
    ManipulatingChild.RenderTransform = (TranslateTransform)unfrozenTransform.GetAsFrozen();
}

Windows 7 and WPF 4.0 Multitouch: Touch Points

Update: if you’re looking to just implement standard multitouch gestures, check out my post on manipulation.

One of the most popular posts on this blog is my writeup on getting multitouch events in Windows 7 using WPF and .NET 3.5. Now that .NET 4.0 is in open beta, it’s time for an update. That’s a lot of periods in two sentences.

Microsoft has made it much easier to access touch events in WPF. The touch events are analogous to the mouse events you’re likely very comfortable with, but carry a little more information in order to support multitouch.

I’ll lay out a full application for you to play with. First, the XAML of the main window class:

<Window x:Class="NewTouchTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525" 
        TouchDown="Window_TouchDown" TouchMove="Window_TouchMove" TouchUp="Window_TouchUp">
    <Grid x:Name="AppGrid">
 
    </Grid>
</Window>

Did you see that? I hooked up multitouch events in my XAML. GAME CHANGER.

Yes, it is that easy. You are already set up to receive touch events. Wizardry!

Now, let’s do something worthwhile with our newfound power. This application will create a square for every touch point and show its associated ID. This kind of application is useful when messing with new hardware to see how accurate the touch is. It is basically an expanded version of the last example, supporting INFINITE touch points. Infinite up to a certain power of 2, anyway.

We’ll start with an array of colors to choose from for our infinite points.

public partial class MainWindow : Window
{
    Brush[] ColorList = new Brush[] { Brushes.Black, Brushes.Yellow, Brushes.Turquoise, Brushes.Purple, Brushes.Orange, Brushes.Navy, Brushes.Pink, Brushes.Brown, Brushes.DarkKhaki };
    public MainWindow()
    {
        InitializeComponent();
    }

Upon the first touch, we create a new Border and move it to the corresponding location using a TranslateTransform. We also create a child TextBlock in order to display the touch point’s ID.

The ID is very important when doing something more interesting with multitouch, as it signifies a unique finger. If you are coding any gestures, you’ll need to make sure you keep track of your fingers. Actually, that’s probably a pretty sound piece of advice for life in general.

    private void Window_TouchDown(object sender, TouchEventArgs e)
    {
        Border newTouch = new Border();
        TextBlock idText = new TextBlock();
        int id = e.GetTouchPoint(this).TouchDevice.Id;
        idText.Text = id.ToString();
        idText.Foreground = Brushes.White;
        newTouch.Child = idText;
        newTouch.Background = ColorList[id % ColorList.Length];
        newTouch.Width = 20;
        newTouch.Height = 20;
        newTouch.HorizontalAlignment = System.Windows.HorizontalAlignment.Left;
        newTouch.VerticalAlignment = System.Windows.VerticalAlignment.Top;
        AppGrid.Children.Add(newTouch);
        Point touchLoc = e.GetTouchPoint(this).Position;
        newTouch.RenderTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
    }

We update the position on every subsequent move event, finding the associated Border by its child TextBlock.

    private void Window_TouchMove(object sender, TouchEventArgs e)
    {
        foreach (UIElement child in AppGrid.Children)
        {
            if (child is Border)
            {
                TouchPoint touch = e.GetTouchPoint(this);
                if (((TextBlock)((Border)child).Child).Text == touch.TouchDevice.Id.ToString())
                {
                    Point touchLoc = touch.Position;
                    child.RenderTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
                    break;
                }
            }
        }
    }

After the touch is released, we remove the associated border.

    private void Window_TouchUp(object sender, TouchEventArgs e)
    {
        foreach (UIElement child in AppGrid.Children)
        {
            if (child is Border)
            {
                TouchPoint touch = e.GetTouchPoint(this);
                if (((TextBlock)((Border)child).Child).Text == touch.TouchDevice.Id.ToString())
                {
                    AppGrid.Children.Remove(child);
                    break;
                }
            }
        }
    }
}
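Scanning the Grid’s children works fine for a demo, but if you’re tracking many fingers, a dictionary keyed by the touch ID is a tidier structure. A sketch of the idea (my own alternative, not from the original sample; you’d add entries in TouchDown and remove them in TouchUp):

```csharp
// Alternative bookkeeping: map each TouchDevice.Id to its Border
// instead of searching AppGrid.Children on every move event.
private readonly Dictionary<int, Border> activeTouches = new Dictionary<int, Border>();

private void Window_TouchMove(object sender, TouchEventArgs e)
{
    TouchPoint touch = e.GetTouchPoint(this);
    Border border;
    if (activeTouches.TryGetValue(touch.TouchDevice.Id, out border))
    {
        Point touchLoc = touch.Position;
        border.RenderTransform = new TranslateTransform(touchLoc.X, touchLoc.Y);
    }
}
```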

There. Easy! Keep an eye out for a post regarding the new gesture events.

Buzzwords: Managed and Unmanaged Code

Buzzwords will be a recurring segment where I explain some of the words and phrases I pick up on as I grow in my development knowledge. Some will be simple definitions; others will delve further into the concepts being presented to explore their meaning.

After I started at HP, my vocabulary was challenged every day with new abbreviations and HP jargon. There were also a few technical terms, two of which came up rather frequently: managed and unmanaged code. Using context clues, I quickly figured it out, but it was something I hadn’t been exposed to during school.

Unmanaged code is code that compiles into machine language to be executed using the computer’s hardware. That is to say, that there is no intermediary between your executable and the instructions given to your computer. Standard usage of C, C++, assembly, etc. can create binaries with these instructions.

Managed code is a term used to describe code that depends on .NET’s Common Language Runtime (CLR). C#, C++/CLI, VB.NET, etc. all build assemblies containing an Intermediate Language (IL). The CLR interprets this language and compiles each part into machine language as it is needed (this is called Just-in-Time [JIT] compiling). This methodology gives the programmer some help, such as garbage collection and security checking (though at a cost to performance, since these are automatic).

The distinction between the two is important in Microsoft’s world, as managed code can be written in a variety of languages. .NET supports C++ (C++/CLI, above), so it would be incorrect to assume that every C++ program compiles to pure machine code and executes without .NET.
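One concrete illustration of the boundary: a managed C# program can still call into unmanaged code through P/Invoke, just like the session-notification imports earlier on this blog. A minimal sketch (Beep is a real Win32 function in kernel32.dll):

```csharp
using System.Runtime.InteropServices;

class ManagedMeetsUnmanaged
{
    // Managed C# declaring the unmanaged Win32 Beep function.
    // The CLR marshals the call across the managed/unmanaged boundary.
    [DllImport("kernel32.dll")]
    static extern bool Beep(uint frequency, uint duration);

    static void Main()
    {
        // Everything up to here is JIT-compiled managed code;
        // this call executes unmanaged code inside kernel32.dll.
        Beep(440, 500); // 440 Hz for half a second
    }
}
```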

The term “managed” is usually applied to applications that use .NET, specifically; however, I’ve also heard people use the term when referring to Java. While the term was coined by Microsoft to distinguish .NET code, I don’t see any harm in using it to describe Java, which uses similar concepts in its underbelly.

Post a comment on which you use most frequently. Be sure to list the advantages that made you make this decision.

Attaching to WndProc in WPF

WPF, like any other UI program, has an inner loop that continually runs in order to update the state of the application and render the UI.  One part of this loop is a call to the function WndProc, which is the function through which Windows communicates the messages your window is receiving (be it input or system notifications).

WPF hides this function from you (presumably to make things easier) and instead just fires events off for anything [it thinks] you’ll ever need.  Sometimes, however, it is useful to attach to this loop in order to address messages that don’t have a related WPF event, such as messages sent by other applications.

Here’s how you do it.

In your window’s SourceInitialized event, create an HwndSource object from your window’s handle. Use the AddHook method to attach a handler that will be called for all of your window’s messages.

private void Window_SourceInitialized(object sender, EventArgs e)
{
    IntPtr windowHandle = (new WindowInteropHelper(this)).Handle;
    HwndSource src = HwndSource.FromHwnd(windowHandle);
    src.AddHook(new HwndSourceHook(WndProc));
}

As it’s always good practice to define the methods you reference, be sure to define WndProc and, hopefully, do something useful with it. Its parameters describe the message by giving you its ID, as well as the parameters sent along with it. For some more on Windows messages, check out my earlier post regarding emulating those messages. If your desired message has been captured and handled, be sure to set handled to true.

private IntPtr WndProc(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
    // address the messages you are receiving using msg, wParam, lParam
    if (msg == WM_LOOK_FOR_DROIDS)
    {
        if (wParam == DROIDS_IM_LOOKING_FOR)
        {
            CaptureDroids(lParam);
            handled = true;
        }
        else
        {
            AskToMoveAlong(lParam);
        }
    }
    return IntPtr.Zero;
}

Lastly, be sure to remove the hook when your window is closing.

private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    IntPtr windowHandle = (new WindowInteropHelper(this)).Handle;
    HwndSource src = HwndSource.FromHwnd(windowHandle);
    src.RemoveHook(new HwndSourceHook(this.WndProc));
}

Easy as pie. If you’re feeling nostalgic and don’t want to use events, you can implement your entire application in the WndProc method (other than the rendering, which WPF also hides from you). I wouldn’t recommend it, though…

The New HP TouchSmart PCs

For the past year and a half, I’ve been working on software for HP’s TouchSmart all-in-one PCs. I develop and manage the deliverables for some of the tiles found within the TouchSmart software suite.  Allow me to take a moment and give you a TouchSmart commercial, explain my role in its creation, and how you can develop on the platform.

Two New TouchSmarts

HP released (in October – yes, this post is a tad late) two new TouchSmarts to coincide with the release of Windows 7: the 300 and 600 series. The 600 is the larger and more performant of the two, but both share the same touch technology and form factor. The 600 sports a Core 2 Duo, while its smaller brother uses a 64-bit AMD processor. You can pick and choose various components for the 300 and 600 at HP’s shopping site, so be sure to check out the specs there if you’re interested in more details.

The TouchSmarts use optical touch solutions, using two infrared sensors to triangulate the positions of up to two touches. This 2-camera system presents some inaccuracies when two touches are involved, and it certainly created some challenges for my own development. These restrictions are found in all of the multi-touch all-in-ones currently in the market, as they are all based on the same technology, but our mathemagicians perform some voodoo on the data to more accurately approximate the location of the user’s fingers. It is a much more cost-effective solution when compared to capacitive touch screens, such as the iPhone’s, as the cost for that type of screen exponentially increases with surface area.

The TouchSmart software suite has seen quite a few changes itself. In case you are not familiar with the TouchSmart software for the 500 and 800 series (TouchSmart 2.0), it is a collection of touch-based applications displayed in two scrollable rows of “tiles”. These tiles are not interactive in this view, but the user can view information in each tile. To interact with the tile, the user taps on it and enters the full view of the application. You can find a video showing the framework here.

TouchSmart 2.0

The big change in TouchSmart 3.0 is interactive tiles. The top row’s tiles have been widened and the user can now interact within each tile. Not only that, but the list of TouchSmart applications has grown beyond 20, and each of the existing applications have seen major enhancements.  A video for the new framework can be viewed here.

TouchSmart 3.0

My Role

The four applications for which I am responsible are:

Canvas
Create collages and tag your photos using multitouch gestures or voice commands. I was the developer on this application, which sprang from a sample application that I wrote to show our vendors how to calculate the touch gestures. You may have seen some of the results of that sample early if you’ve been traveling through Chicago recently, as the application being demoed uses a library I wrote. Here, you can find a tutorial video (without voice commands but surely some bugs, since the video was shot far before we shipped).

Hulu
Watch videos on Hulu through the touch-friendly interface of Hulu Desktop, residing in the TouchSmart framework. This application was developed by Hulu, but I manage its deliverables, provide technical consultation for its integration, and ensure it passes through our qualification process.

Twitter
Twitter client for TouchSmart. If you’re into Twitter, you’ll know what’s here (it’s the standard fare for Twitter apps). If you’re not, you don’t care anyway. My responsibilities for this app are similar to my responsibilities for Hulu.

Clock
Clock application for TouchSmart. It’s identical to the TouchSmart 2.0 version, but has been updated to work with 3.0 and ported to Windows 7.

As I mentioned above, there are over 20 applications for the TouchSmart, so what I’ve done barely scratches the surface. The two teams working on the suite worked incredibly hard and pulled off some amazing stuff, with apps ranging from photo editing to recipe management. You can find a full description of all of the applications here.

How You Can Get Involved

You are free to create your own TouchSmart tiles if you have an idea for a touch application that would fit well within the TouchSmart framework. It is rather simple – in fact, an existing Windows application can be running nicely in TouchSmart after just a couple hours of development.

You can find the TouchSmart 3.0 SDK at the TouchSmart Dev Zone, a community based around TouchSmart application development.  Be sure to get involved here, as there are plenty of people willing to aid you in your development, and you can submit your completed application to this site for distribution.

The biggest highlight of this new SDK is that there is now a library for WPF to help you through some of the requirements for TouchSmart applications. In it, you’ll find a Window class that will define your window with the necessary properties for a tile (no chrome, layout notifications, off-screen launching, etc). There are also helper classes for common functions (loading localized language files, creating notifications in TouchSmart, sending requests to TouchSmart, etc). Check the SDK for details and feel free to send any questions my way.

If you aren’t developing in WPF, the SDK includes all of the information you need to create an application without the library.

Sound off in the comments any ideas you have for apps, as well as your interest in TouchSmart development.

Multithreading in WPF

If you’re unfamiliar with multithreading, be sure to check out my previous entries on the topic.

In WPF, creating a thread is as easy as it is in any C# application. You can find an example of that here. Alternatively, you can use the BackgroundWorker, which creates a thread for you and provides a generalized, simplified interface for a common threading task: doing extra work in the background (such as downloading a file or updating a progress bar).
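As a quick refresher, here is a minimal sketch of raw thread creation (the names here are my own, not from any particular sample): the main thread starts a worker, stays free to do other things, and joins it when the result is needed.

```csharp
using System;
using System.Threading;

class ThreadSketch
{
    static int result;   // written by the worker, read by main after Join

    static void DoWork()
    {
        // simulate a CPU-bound task: sum the integers 0..999
        int sum = 0;
        for (int i = 0; i < 1000; i++)
            sum += i;
        result = sum;
    }

    static void Main()
    {
        // create a thread that runs DoWork and start it
        Thread worker = new Thread(DoWork);
        worker.Start();

        // ...the main thread is free to do other work here...

        worker.Join();                // block until the worker completes
        Console.WriteLine(result);    // 499500
    }
}
```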

In an earlier post, I used a mysterious method to enable responsiveness in the UI while loading a bunch of content (in that case, images).

This mystical object is called The Dispatcher.

THE DISPATCHER

No, this isn’t an edge-of-your-seat thrill ride movie that smacks explosions, swords, and alien guts into your M&M-filled mouth. It is an object used to manage the work for threads within WPF.  It maintains a queue of work items that are requested of any given thread, based on their order and priority.  This is the object you want to get to know if you’re going to be playing with your UI on a separate thread.

As mentioned previously, UI objects can’t be accessed outside of the threads that created them. You can, however, use a separate thread to determine what changes you’ll be making and to what objects you will make them, then use the UI thread to actually apply those changes. In order to do this, use the object’s dispatcher to schedule the work on its queue.

For example, take a look at the code for loading images mentioned above:

private void LoadImage(string fname)
{
	// instantiate and initialize the image source
	BitmapImage bmi = new BitmapImage();
	bmi.BeginInit();
	bmi.UriSource = new Uri(fname, UriKind.Relative);
	bmi.EndInit();
 
	bmi.Freeze();		// freeze the image source, used to move it across the thread
 
	// ask the UI thread (which owns TheImage) to run the following delegate
	// the (ThreadStart)delegate(){ } notation is shorthand for creating an anonymous method and a delegate for it
	TheImage.Dispatcher.BeginInvoke(System.Windows.Threading.DispatcherPriority.Normal, (ThreadStart)delegate ()
	{
		TheImage.Source = bmi;
	});
}

This method creates the BitmapImage object on a separate thread, leaving the main thread free for user input, and freezes it so that it can be used on another thread. It then uses TheImage’s Dispatcher to modify TheImage on its own thread by calling the dispatcher’s BeginInvoke method.

There are two ways to invoke using the dispatcher: BeginInvoke() and Invoke(). BeginInvoke() will queue the work for the dispatcher and continue the separate thread’s execution. It puts in the request for the UI thread to execute the delegate, then continues on its merry way with its own execution. This is useful when your separate thread does not rely on what it is requesting the UI thread to do.

The Invoke() method will block until the delegate has executed and returned. If your separate thread depends on the result, or the modification must be completed before the separate thread continues, this is the one to use.

The Dispatcher is something you’ll get pretty cozy with if you plan on changing your UI elements from a separate thread. If you’re just doing a progress bar or something else that is rather predictable, you can skip it by using the BackgroundWorker’s ReportProgress method and ProgressChanged event. Just be sure to give the UI thread some breathing room if you are calling the dispatcher often.
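For those predictable cases, the BackgroundWorker pattern might look like the following sketch. This is a minimal console version of the idea (the names are mine); in a real WPF app, the ProgressChanged handler would update your ProgressBar, and the event would be marshaled to the UI thread for you.

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class WorkerSketch
{
    static void Main()
    {
        var done = new ManualResetEvent(false);
        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            // runs on a background thread
            for (int i = 1; i <= 4; i++)
            {
                Thread.Sleep(50);               // simulate a chunk of work
                worker.ReportProgress(i * 25);  // raises ProgressChanged
            }
            e.Result = "finished";
        };

        // in WPF, this is where you'd set ProgressBar.Value
        worker.ProgressChanged += (s, e) =>
            Console.WriteLine("progress: " + e.ProgressPercentage + "%");

        worker.RunWorkerCompleted += (s, e) =>
        {
            Console.WriteLine((string)e.Result);
            done.Set();
        };

        worker.RunWorkerAsync();
        done.WaitOne();          // keep the console process alive until completion
        Thread.Sleep(100);       // let any in-flight progress events drain
    }
}
```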

In case you didn’t notice all of my linking to previous posts, you may want to check out the rest of my posts on multithreading.

Threading Complexities

As I explained previously, threads are like workers with separate to-do lists that share the same tools and materials and perform tasks at the same time. There are a couple of tough situations that these coworkers will often find themselves in, however, and you need to make sure that their employer has the proper processes to provide solutions.  Yes, I am going to run this metaphor straight into the ground.

First, you need to know what “concurrent” means to a computer. Dictionary.com defines it as “occurring or existing simultaneously or side by side”. Your computer, however, defines it as “switching between the tasks fast enough so that nobody notices that they aren’t occurring simultaneously”. So, when I say that your employees are working at the same time, I actually mean that they are working one at a time but switching between who is working fast enough that the boss doesn’t realize they’re taking breaks. The CPU is constantly juggling which thread gets to execute, based on priority and (usually) the order in which they come. This same kind of exercise is happening with all of the processes that are currently running.

If you don’t notice, why should you care? Well, there is a delay when switching between threads (or processes) called a context switch. During a context switch, the CPU must save the state of the currently active thread, choose the next thread to give time to, restore that thread’s state, and continue its execution. What this boils down to is that there is a cost associated with multithreading. You need to be aware of this cost; otherwise, you may find your application running slower with multiple threads. The reason is that your threads are unbalanced – they are switching back and forth so much that the time spent on all of the context switching is greater than the time you save running two tasks simultaneously!

I ran into an example of this recently when attempting to improve the responsiveness of an application. I was trying to do things on a separate thread to keep things smooth in the UI for the user; however, I had to perform a task on a set of elements on the original thread (more on that later). This forced me to call back to the original thread so often that the experience ended up being even worse. I’ll explain in more detail how that works in WPF in a future post, but here’s the basic idea in pseudocode:

create a new thread to perform a CPU-intensive task; run:
   foreach object in somelist
      call back to original thread with the following task:
         perform an action on object

Because I went back to the original thread so often and so quickly, the CPU was spending most of its time context switching.  In order to fix it, I added one line:

create a new thread to perform a CPU-intensive task; run:
   foreach object in somelist
      call back to original thread with the following task:
         perform an action on object
      sleep for x amount of time

I added a sleep command in the separate thread.  This gave the main thread time to perform the task on the object, redraw, and settle in a little before I gave it another task.  This added a visual delay to the action on-screen (since there is x amount of time between each object being acted upon), but that was acceptable in this case to give the user a smooth experience.
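Stripped of the WPF specifics, that fix looks something like the following sketch. Here a plain Action stands in for the dispatcher call back to the UI thread, and the names are mine; in the real application, the callback would be a Dispatcher.BeginInvoke.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ThrottleSketch
{
    static void Main()
    {
        var somelist = new[] { 1, 2, 3, 4, 5 };
        var results = new List<int>();

        // stand-in for Dispatcher.BeginInvoke: in WPF this would queue
        // the work onto the UI thread instead of running it directly
        Action<int> callBackToOriginalThread = n =>
        {
            lock (results) results.Add(n * n);  // "perform an action on object"
        };

        var worker = new Thread(() =>
        {
            foreach (int n in somelist)
            {
                callBackToOriginalThread(n);
                Thread.Sleep(20);   // the one added line: let the other thread breathe
            }
        });
        worker.Start();
        worker.Join();

        lock (results) Console.WriteLine(string.Join(",", results));  // 1,4,9,16,25
    }
}
```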

Secondly, there is a key part of this metaphor that you have to consider: each worker is sharing resources.  This is good – it means that each thread can access the data it needs while executing; however, it comes with a caveat: you have to ensure no thread is changing that data while another thread is trying to access it.

Imagine if we have two workers sharing a drill. Worker One is going to use it to screw a shelf to a wall, while Worker Two is going to drill a hole for the next shelf. Now, imagine that Worker One has placed the screw where he wants it and is about to pull the trigger on the drill when a context switch occurs. Worker One freezes, and Worker Two grabs the drill. He pulls out the Phillips head bit that Worker One was using and replaces it with a drill bit. Like an episode of Seinfeld, the worst thing happens at the worst possible time: another context switch. Worker One takes the drill back and uses the drill bit on his screw, damaging the screw and quite possibly his hand. This is called a race condition: multiple independent algorithms are dependent on a single shared resource, thus making the timing of each access of that resource critical to the success of each algorithm.

This means you have to be wary of using your global variables or the members of your class in a thread. You must be certain that you aren’t changing something that your other thread is depending on. The common way to handle this is by using mutual exclusion (or mutex) algorithms. The basic concept is that, when using a variable that is common to other threads, you must ensure that the variable is not currently in use by another thread, often via queues or access flags. Take a look at the previous link for a list of well-known algorithms with examples. There is a wealth of knowledge related to solving race conditions, and I’m not even going to attempt to address it all.
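In C#, the everyday tool for this is the lock statement (mutual exclusion built on Monitor). Here’s a minimal sketch, in the spirit of our workers: only the thread holding the “drill” may touch the shared counter.

```csharp
using System;
using System.Threading;

class LockSketch
{
    static int counter = 0;                        // the shared resource
    static readonly object drill = new object();   // only one worker may hold the "drill" at a time

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (drill)     // acquire exclusive access before touching counter
            {
                counter++;   // the read-modify-write is now safe
            }
        }
    }

    static void Main()
    {
        var workerOne = new Thread(Increment);
        var workerTwo = new Thread(Increment);
        workerOne.Start(); workerTwo.Start();
        workerOne.Join(); workerTwo.Join();

        // without the lock, lost updates would often make this less than 200000
        Console.WriteLine(counter);   // 200000
    }
}
```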

If you take a close look at the pseudocode above, you’ll notice that I’m using a single thread to perform all actions on the set of objects, thus avoiding race conditions (as only one thread accesses them at a time). This isn’t by my own design, however; this is a restriction imposed on UI elements in most, if not all, languages. Because of their nature, UI elements aren’t thread safe. They can be accessed by you, the graphics engine, or even the user. Because of the amount of overhead required to allow UI elements to work across threads, they are restricted to being accessed only by the thread that created them. Above, since I cannot access the UI elements in my worker thread, I have to call back to the thread that created the object and tell it to do the modifications I need. This makes moving work to separate threads in a UI-heavy application complicated at times, but you get used to it pretty quickly.

So, it doesn’t sound nearly as simple as it did in my first post; however, don’t let this scare you away from using multithreading. Once you get the hang of it, it is really quite simple to use. Besides, you’ll need to be familiar with it before digging your hands into any serious user-oriented application.