(Multi) touch gestures on existing OSes

My research question is heading in the right direction –it’s not definitive yet– and I have defined some sub questions. I had a chat with my teacher last Monday. It went pretty well, but he thinks I need to shift my focus more towards the practical research. I do agree with him to some extent, but I still want to have a solid understanding of what I am working with. I just need to find the right balance.

Since I am doing two things at the same time –building the system and writing the supportive narrative– I have decided to work on one subject per week. So this week it was researching for the supportive narrative, and next week it is building / coding the system.

What I first want to do is settle on some gestures to work with for the rest of the project. I am planning on using four gestures and working them out in depth.


Before I can talk about the context in which multi touch operates, I need to explain the current way of computing. Although a GUI and WIMP are by far the most common ways of computing, there are still a lot of systems that use a command line or other methods –all kinds of servers, for instance. It’s however out of the scope of this article to talk about those. Note that the term WIMP is often incorrectly used as a synonym for GUI.

GUI stands for Graphical User Interface. It is an interface based on graphics rather than a command line.

WIMP stands for Windows, Icons, Menus, Pointer. Different sources give different interpretations of the acronym, but the general concept stays the same. It’s an interface where applications run independently of each other, at the same time, each in their own window; objects like files are represented by icons; menus give access to tasks; and a pointing device is used to perform certain tasks in certain circumstances. Note however that the pointer is most of the time a mouse, but it does not have to be –it can be a pen tablet just as easily. And –generally speaking– there is only one pointer.

The difference between GUI and WIMP is that, because a WIMP interface relies on icons and a pointer –which are graphical by their nature– WIMP is considered a GUI. But not every GUI has to rely on all –if any– of the WIMP elements.

The context of multi touch

A system based on multi touch is always a GUI and uses a lot of the WIMP elements, but not all. There is no pointer, for instance. On mobile phones the applications –most of the time– run one at a time and don’t have their own window. Applications can –if the OS supports it– run in the background, but they don’t have a window.

Because the commonly used OSes of today (Windows, OS X and Linux) are GUIs based on WIMP, the multi touch we experience today on desktops and laptops –mobile touch devices have their own OS, which actually is developed around touch input– takes over some of the pointer’s tasks.

Windows 7 gestures

Windows 7 natively supports multi touch gestures. The gestures make actions that were previously difficult or not well known easier to perform.

  • Zoom – the zoom gesture is performed by placing two fingers on the surface and moving them away from or towards each other: away to zoom in, towards each other to zoom out. This is useful for viewing photos, for instance, although any self-respecting photo viewing software has zoom buttons available as well.
  • Single finger and two finger pan – Panning over an object larger than the viewport. This too can be a great extension for viewing photos, as for any kind of object with a large amount of data, like a map. Again, one could use scrollbars to do so, but it’s impossible to scroll diagonally with scrollbars. And while there was a mouse action to click and drag with certain buttons, that is not a well known / often used action.
  • Rotate – Rotating objects –again, like photos– was previously only doable with a button. The limitation of that button is that it only rotates the object as far as the developer decided it to. With two fingers, one can –with the sensitivity of the device in mind– rotate the object to any degree possible.
  • Two finger tap – Because the concept of only one pointer is out of the window, a two finger tap is different from a one finger tap, and as such can be treated differently.
  • Press and tap – Just like the two finger tap, a press and tap is possible because of the multiple registration points. While one finger presses on the surface, a second finger can tap next to it.

These are not the only multi touch gestures that Windows 7 registers, but it gives an idea of how Windows 7 uses multi touch as an addition to its pointer concept. It takes actions that are less known or hard to do, and makes them more accessible.
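To make the zoom and rotate gestures above concrete: both can be derived from the positions of the two touch points. A minimal sketch –in TypeScript, with a hypothetical `Point` type, not Windows 7’s actual gesture API– could look like this:

```typescript
// A touch point in surface coordinates (hypothetical type, not a platform API).
interface Point {
  x: number;
  y: number;
}

// Euclidean distance between two touch points.
function distance(a: Point, b: Point): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Zoom: the ratio of the current finger distance to the starting distance.
// > 1 means the fingers moved apart (zoom in), < 1 means towards each other (zoom out).
function zoomFactor(start: [Point, Point], current: [Point, Point]): number {
  return distance(current[0], current[1]) / distance(start[0], start[1]);
}

// Rotate: the change in angle of the line through both fingers, in radians.
function rotationAngle(start: [Point, Point], current: [Point, Point]): number {
  const angle = (p: [Point, Point]) =>
    Math.atan2(p[1].y - p[0].y, p[1].x - p[0].x);
  return angle(current) - angle(start);
}
```

The sign conventions here are my own assumptions: fingers moving apart gives a factor greater than 1, and a positive angle means the line through both fingers turned counter-clockwise in the coordinate system used.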

OS X gestures

OS X treats multi touch differently than Windows 7. Where the focus of Windows 7 lies on bringing ‘new’ or not well known actions to the user’s attention, OS X uses multi touch to map gestures to existing actions. It maps gestures to new actions as well –like the photo manipulation in Windows 7– but the focus lies on the existing ones. I give a few examples.

  • Scrolling, drag two fingers down/up – a substitute for the scrollbars.
  • Pinching, two fingers moving away from or towards each other – same as the Windows 7 zoom gesture.
  • Rotating, two fingers rotating around each other – same as the Windows 7 rotate gesture.
  • Two finger panning – basically the same as scrolling, but in every direction. This one can only be used if the program supports it.
  • Three finger swipe, move three fingers across the surface – go to the next or previous tab/window, depending on the direction of the swipe.
  • Four finger swipe up/down – enter or close Exposé.
  • Four finger swipe left/right – switch applications, like the cmd+tab keystroke.
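A gesture recognizer behind swipes like these has to decide on a direction from several moving fingers at once. A minimal sketch –hypothetical, in TypeScript, not OS X’s actual implementation– that classifies an N finger swipe by the average movement of all fingers:

```typescript
// A touch point in surface coordinates (hypothetical type, not a platform API).
interface Point {
  x: number;
  y: number;
}

type SwipeDirection = "left" | "right" | "up" | "down";

// Classify a multi finger swipe by the average movement of all fingers.
// `starts` and `ends` hold one point per finger (3 for tab/window switching,
// 4 for Exposé or application switching in the examples above).
// Assumes y grows downward, as is common for screen coordinates.
function classifySwipe(starts: Point[], ends: Point[]): SwipeDirection {
  const avg = (ps: Point[]) => ({
    x: ps.reduce((sum, p) => sum + p.x, 0) / ps.length,
    y: ps.reduce((sum, p) => sum + p.y, 0) / ps.length,
  });
  const dx = avg(ends).x - avg(starts).x;
  const dy = avg(ends).y - avg(starts).y;
  // The dominant axis decides the direction.
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx >= 0 ? "right" : "left";
  }
  return dy >= 0 ? "down" : "up";
}
```

Averaging over all fingers makes the recognizer tolerant of individual fingers drifting a bit; a real implementation would also check a minimum travel distance before committing to a swipe.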

The difference between Windows 7 and OS X

OS X really tries to interact with the existing OS actions in a different way, whereas Windows 7 seems to want to push new functionality. So they each take a different approach, although they share some similarities as well. What they both do is keep the original OS intact, which is not surprising, because multi touch is not broadly supported yet. Compare that to mobile operating systems, which take full advantage of touch interaction. That is not a fair comparison, because a mobile OS does not have the performance of a PC –nor does it need it– and the intentions of a mobile system are different. But it illustrates the contrast between the two.

When I think of an OS for a PC or laptop, I think it can borrow some of the WIMP principles, but it really needs to be different in a lot of things. Naturally the single pointer concept needs to go out of the window, and in its place should come gestures that feel natural. Maybe the whole concept of on-screen menus can go as well, and we summon menus by performing a gesture. That leaves room for the actual content in the application. Which way it will go, I don’t know yet, but I think the single pointer concept has had its time.

Let me know what you think about it!

Author Peter Goes

I am an Amsterdam (The Netherlands) based digital media developer. I develop and/or create concepts for websites, web apps and mobile apps. On the side I do some VJing, but what really makes my heart beat a little bit faster is creative coding and physical computing. On this website my work is divided into these three areas, namely Development, VJing and Creative coding.

