
Thursday, May 29, 2008

Windows 7 multi-touch SDK being readied for PDC in October

As details continue to emerge about Microsoft's evidently well-made plans for its next operating system, we learn that full documentation for how multi-touch capabilities will work in Windows will be ready for demonstration by this fall.

For Microsoft's next Professional Developers' Conference, currently scheduled for late October in Los Angeles, the company plans to demonstrate the use of a software development kit for producing multi-touch applications for Windows 7. Such applications would follow the model unveiled yesterday by executives Bill Gates and Steve Ballmer at a Wall Street Journal technology conference in Carlsbad, California.

For the session tentatively entitled "Windows 7: Touch Computing," the PDC Web site -- which went live just this morning -- describes, "In Windows 7, innovative touch and gesture support will enable more direct and natural interaction in your applications. This session will highlight the new multi-touch gesture APIs and explain how you can leverage them in your applications."

We were surprised to find the PDC site reads better when viewed in Internet Explorer.

The early suggestions from Microsoft's developers -- some of whom have been openly hinting that multi-touch was coming to Windows 7 since last December -- are that the next version of Windows will be endowed with technology that emerged from the company's Surface project, its first to implement such controls. Surface is actually an extension of the Windows Vista platform -- specifically, it's the Windows Presentation Foundation extended so that it sees a surface display device as essentially just another container control, with an expanded list of supported graphics devices.

What is not known at this stage is how much today's Windows Vista will have to be extended to enable multi-touch in Windows 7, especially for the sake of backward compatibility with existing and earlier applications.

Prior to the advent of Windows XP, when applications were largely compiled using Microsoft Foundation Classes (MFC), application windows were very generic containers with standardized window gadgets and menu bars. When a developer used the standard MFC library, he could be assured that scroll bars could respond to mouse events and that contents that spilled off the edge of the visible area would not, as a result, descend into some invisible twilight zone.

Holding that MFC fabric together was the concept that graphic elements responded to individual events, often called "mouse events." And the basic premise of a mouse event was that it had to do with a single element positioned at one spot, or one set of coordinates, on the screen. A keyboard event could alternatively trigger the same response (pressing Enter while the highlight was over "OK," for example), but the developer only had to write one event handler for managing what happened after clicking on OK.
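
To make that single-point model concrete, here is a rough sketch in C++ -- hypothetical names of our own, not actual MFC code -- of an event that carries exactly one set of coordinates, with one handler serving both the mouse click and the Enter key:

```cpp
// Hypothetical sketch of the classic single-point event model -- not actual
// MFC code. Every event carries exactly one coordinate pair, and one handler
// serves both the mouse click and the Enter key.
#include <iostream>

struct MouseEvent {          // hypothetical: one event, one (x, y) position
    int x;
    int y;
};

class OkButton {             // hypothetical stand-in for a standard gadget
public:
    bool Contains(int x, int y) const {
        return x >= 10 && x <= 90 && y >= 10 && y <= 40;
    }
    void OnClick() {         // the single handler the developer writes
        std::cout << "OK confirmed\n";
    }
};

int main() {
    OkButton ok;

    // A mouse event at one spot on the screen...
    MouseEvent click{42, 25};
    if (ok.Contains(click.x, click.y))
        ok.OnClick();

    // ...and a keyboard event (Enter while "OK" is highlighted) funnels
    // into the very same handler.
    bool enterPressedWhileOkHighlighted = true;
    if (enterPressedWhileOkHighlighted)
        ok.OnClick();
    return 0;
}
```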

The first touch sensitivity in Windows came by way of Tablet PC, which was a platform extension to Windows, coupled with a series of drivers. Adding a stylus as a new input device could indeed change the way applications worked internally; they could add all kinds of new gadgets that would have been pointless under mouse control alone.

In addition, Microsoft opened up a wide array of so-called semantic gestures: a library of simple things one could do with a stylus that could potentially mean something within an application. For example, scratching on top of a word could be taken to mean, "Delete this word." Drawing a long arrow beside a graphic object could mean, "Please move this object over here." It all depended on how the application developer wanted the user to see things; and there were certainly some good suggestions, but nothing approaching the level of standardization prescribed by IBM's Common User Access model of the early 1990s.
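
As a rough illustration of the semantic gesture idea -- hypothetical names of our own, not the actual Tablet PC API -- an application might map a small vocabulary of recognized strokes onto its own commands like this:

```cpp
// Hypothetical sketch (not the Tablet PC API) of the "semantic gesture"
// idea: the platform recognizes a small vocabulary of pen strokes, and the
// application decides what each one means in its own context.
#include <iostream>
#include <string>

enum class PenGesture {      // hypothetical names for recognized strokes
    ScratchOut,              // scribbling over something
    Arrow,                   // drawing a long arrow
    Circle
};

// Each application assigns its own meaning to a recognized gesture.
void HandleGesture(PenGesture g, const std::string& target) {
    switch (g) {
    case PenGesture::ScratchOut:
        std::cout << "Delete \"" << target << "\"\n";
        break;
    case PenGesture::Arrow:
        std::cout << "Move \"" << target << "\" to where the arrow points\n";
        break;
    case PenGesture::Circle:
        std::cout << "Select \"" << target << "\"\n";
        break;
    }
}

int main() {
    HandleGesture(PenGesture::ScratchOut, "misspelled word");
    HandleGesture(PenGesture::Arrow, "chart");
    return 0;
}
```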

However, outside of the application's native context, whatever a stylus could do in the Windows workspace was relegated to substituting for a mouse event. In other words, the Windows desktop was not supposed to know or care whether the user was operating a mouse, a keyboard, or a stylus, just as long as the same events were triggered.

For instance, a tap of the stylus on the surface will send an event whose message constant in the Windows API is WM_LBUTTONDOWN, followed immediately by WM_LBUTTONUP, as though the user had pressed and released the left mouse button (the "L" in these constants). By comparison, holding the pen down on the surface will trigger the WM_RBUTTONDOWN event shortly after the pen touches the surface, followed by WM_RBUTTONUP when the user lifts it. However Windows would normally respond to a left or right button click, respectively, is how the Tablet PC developer would expect Windows to respond to a stylus tap or a press-and-hold.
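
Here is a minimal Win32 sketch of that substitution. The message constants are the real ones; the program itself is merely our illustration of how a window procedure sees a single coordinate pair regardless of whether a mouse or a stylus produced it:

```cpp
// Minimal Win32 illustration (ours, not Microsoft sample code): whether the
// input came from a mouse or a stylus, the window procedure receives the
// same single-coordinate button messages.
#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:    // left mouse click -- or a stylus tap
    case WM_RBUTTONDOWN:    // right mouse click -- or a stylus press-and-hold
    {
        // Each message carries exactly one pair of coordinates.
        int x = GET_X_LPARAM(lParam);
        int y = GET_Y_LPARAM(lParam);
        TCHAR caption[64];
        wsprintf(caption, TEXT("Single point: (%d, %d)"), x, y);
        SetWindowText(hwnd, caption);
        return 0;
    }
    case WM_LBUTTONUP:      // tap (or click) finished
    case WM_RBUTTONUP:      // press-and-hold (or right click) finished
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("PenOrMouseDemo");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("PenOrMouseDemo"), TEXT("Tap or click me"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             400, 300, NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```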

Here, because standard Windows functions must be capable of working reasonably within a Tablet PC environment, the interface between the general functions and the outside world is standardized.

Since that time, we've seen the advent of Windows Presentation Foundation, a little piece of which is distributed with every copy of Silverlight. An application built to support WPF operates under a new set of rules.

As we saw last year with the first demonstrations of Surface development, a gadget that can be used in a Surface application can essentially be the same gadget used in everyday Windows, just wrapped within a new and more versatile container. That container can then be assigned to the Surface container, which is an alternate space that doesn't have to abide by all the rules of the Windows desktop. There, most importantly, a gadget can be sensitive to more than one thing happening at a time; it can register something that takes place on multiple sets of screen coordinates (generally two) as a single event -- something which MFC could never do.
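
A rough sketch of that difference, using hypothetical names of our own rather than the Surface or WPF API: a single "touch frame" event can carry every active contact point at once, and the gadget handles them together.

```cpp
// Hypothetical sketch of the multi-touch container idea -- illustrative
// names only, not Microsoft's Surface or WPF API. The key difference from
// the single-point model: one event can carry several simultaneous contact
// points, and the gadget sees them all at once.
#include <iostream>
#include <vector>

struct Contact {             // one finger (or stylus) touching the display
    int id;                  // which contact this is
    double x, y;             // its screen coordinates
};

struct TouchFrame {          // a single event describing every active contact
    std::vector<Contact> contacts;
};

class TouchGadget {          // hypothetical wrapper around an ordinary gadget
public:
    // The whole frame arrives as one event -- something the old
    // one-event-per-coordinate mouse model could not express.
    void OnTouch(const TouchFrame& frame) {
        std::cout << frame.contacts.size() << " simultaneous contact(s):";
        for (const Contact& c : frame.contacts)
            std::cout << "  #" << c.id << " (" << c.x << ", " << c.y << ")";
        std::cout << "\n";
    }
};

int main() {
    TouchGadget gadget;
    TouchFrame frame;
    frame.contacts = { {1, 100.0, 220.0}, {2, 260.0, 180.0} };  // two fingers, one event
    gadget.OnTouch(frame);
    return 0;
}
```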

In the Surface world, as Microsoft's first demos showed, a gadget can be stretched and shrunk using two-handed or two-fingered gestures. It can be tossed around and spun, and depending on the level of physics in play at the time, gadgets can pretend to adhere to laws of gravity. This way a Surface display hanging on a wall, for instance, can contain gadgets which, when pinned, descend toward the floor rather than float as if in space.
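
The arithmetic behind a two-fingered stretch-and-spin is simple enough to sketch (again, our own illustration, not Surface code): compare where the two contacts were with where they are now, and derive one scale factor and one rotation angle.

```cpp
// A small sketch of the math behind a two-fingered stretch-and-spin gesture
// (our illustration; the names are hypothetical). Given where the two
// contacts were and where they are now, the gadget derives a scale factor
// and a rotation angle.
#include <cmath>
#include <iostream>

struct Point { double x, y; };

double Distance(Point a, Point b) {
    return std::hypot(b.x - a.x, b.y - a.y);
}

double Angle(Point a, Point b) {              // angle of the line a->b, in radians
    return std::atan2(b.y - a.y, b.x - a.x);
}

int main() {
    // Previous frame: two fingers 100 pixels apart, held horizontally.
    Point prevA{200, 300}, prevB{300, 300};
    // Current frame: the fingers have spread apart and twisted.
    Point curA{180, 320},  curB{330, 260};

    double scale    = Distance(curA, curB) / Distance(prevA, prevB);
    double rotation = Angle(curA, curB) - Angle(prevA, prevB);

    std::cout << "Stretch the gadget by " << scale << "x and spin it by "
              << rotation * 180.0 / 3.14159265358979 << " degrees\n";
    return 0;
}
```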

These are the types of extensions made possible by WPF, and many of these same types of extensions were seen in the videos released yesterday by Microsoft, including windows that spin around -- something typical application windows in Windows have never done before.

But as the Surface demo showed, the world inside Surface works essentially by registering itself within the underlying Windows kernel as a world within a world. It is an application, as far as Windows knows; and like a Tablet PC app that enables semantic gestures where the rest of Windows won't, a Surface demo is a world of enhanced physics, the likes of which have never been attempted on a Windows desktop.

So the question becomes this: What type of world is Windows 7? Will it adopt a Tablet PC-like model, where the real gist of the enhancements is available only to applications that are "multi-touch-aware?" Or can it open existing Windows applications to the realm of touch sensitivity? Put another way: Could today's Office 2007, running in Windows 7, allow its main application window to be stretched by two hands? Or will the types of functions we saw yesterday only be feasible for developers using the new Windows 7 multi-touch SDK, the existence of which was first confirmed this morning?

We may not know the answer next month, when Microsoft throws its TechEd conference in Orlando. But we know that we will know the answer by October; and we can infer from that news that low-level Windows 7 software development kits will be distributable to developers this fall.

