Interaction with most traditional computer applications comes down to using a mouse and keyboard. The popularity of mobile devices, especially smartphones and tablets, has increased the demand for applications where interaction happens directly where the information is displayed.
The combination of gestures and interface elements that can be operated through these gestures is currently known by the acronym NUI (Natural User Interface). Although this new way of interacting with electronic devices is increasingly present in everyday life, few professionals know how to develop interfaces and applications that extract the true potential of this interaction model.
Based on this context, this article presents how to work with multi-touch interfaces in Java through the use of a framework. It first presents the applications, hardware, and software involved, and the possibilities that multi-touch interaction, combined with a suitable interface, can provide. It also shows how to install, configure, and work with the MT4J framework for developing platform-independent multi-touch applications. From this material the reader will gain an overview of how to implement interfaces that recognize gestures, and will be able to adapt what was seen here to their own projects.
This first part discusses the kind of interaction that gestural interfaces provide. It also examines interface controls, contrasting them with the interaction model of traditional applications.
The way a user interacts with a system or application is one of the main aspects that can determine whether it will be a success or a failure. While more traditional applications such as management systems, points of sale, inventory, access control, and others focus on functionality rather than usability, developers are increasingly directing their efforts toward interface design, aligning the user's interaction with the functionality the application provides. As an analogy, imagine building a house: it must have functional and comfortable areas such as bedrooms, living rooms, bathrooms, a kitchen, a garage, and so on. But the decoration of these environments is equally important: besides providing functionality (sleeping, bathing, cooking, parking the car, etc.), they must also have elements that please the people who use them, being comfortable, visually pleasant, mildly scented, and so on.
Natural interaction makes computer applications more pleasant, simple, intuitive, and friendly to use, which can increase their adoption and win the user's preference, sometimes to the point of devotion to the application and its creator. Although many kinds of actions can be used to interact with applications, the rest of this article focuses only on interactions performed with the fingertips directly on the screen that displays the information.
From the developer's standpoint, it is necessary to understand that natural interaction requires a paradigm shift in how the interface is seen. One must not think in terms of the traditional mouse-and-keyboard interaction model, because the very nature of the application no longer fits that model. This means the developer needs to analyze, study, and think about how the user will interact with and manipulate the elements of the interface in order to use the application's functionality. To some developers this may sound complex, or like a design task, but it is important that both designers and programmers keep in mind that the keyboard/mouse model is no longer suitable for certain applications. It is also worth mentioning that, apart from the interface, the rest of the development, such as connecting to the database and organizing components, classes, methods, variables, etc., continues to be done in the same way as before.
While the market for applications that follow the traditional mouse/keyboard interaction model is large, well established, and commonplace, new opportunities are emerging for applications with natural interfaces. These opportunities are interesting because they involve not only users who already have specialized knowledge of traditional computer interaction but anyone who can interact with an application using gestures. For instance, it is not uncommon to find people who cannot type on a keyboard, use a mouse, or understand the controls provided by an operating system. Yet these same people can easily understand, use, and take advantage of simple gestural interfaces, such as those found on mobile phones, tablets, and ATMs, which allow direct interaction with the elements on the screen.
Examples of applications that can benefit from natural interaction include art installations, interactive kiosks, digital murals, educational boards, electronic games, musical instruments, virtual tutors, point-of-sale terminals, interactive books and films, vending machines, and many others. In this context, such applications suit the general public, because the learning curve is short and no training or hard-to-master skills are needed to use the system.
Like traditional keyboard/mouse interaction, gestural interaction has details that should be considered carefully and that can harm the user when not properly addressed. For example, excessive keyboard/mouse work can cause problems related to fatigue and physical stress (tendonitis, blisters on the fingers, etc.). For this reason, it is important that the developer match the type of work to the interface and the interactions needed to use it. Applications such as data entry systems and other tasks that require a lot of typing are clearly better suited to the keyboard, just as applications that require moving many elements around the screen are perfect candidates for the mouse. On the other hand, quick, short, and simple manipulations such as moving pictures (zoom, rotation, translation, etc.) are typically easier to carry out with the fingertips.
Developers and designers are usually accustomed to building interfaces from the elements already provided by the operating system, following the WIMP (Windows, Icons, Menus, Pointers) style of a GUI (Graphical User Interface): placing buttons to represent actions the user can take, inserting fields for entering text, adding scroll bars to navigate between panels, and so on. Although these elements make sense in the mouse/keyboard interaction model, they often do not work well with gestural interfaces. For example, the traditional menu displayed by the operating system and most applications works as follows: the user activates the menu by its name (with a mouse click or a hotkey), and the menu opens from the top down. In an application where interaction happens with the fingertips, if a menu opens below the point where the touch was made, its items will be hidden by the user's hand, forcing the user to move the hand away to see them. In this case it may be more appropriate to use a pie menu that opens sideways, making its items easy to see without forcing the user to move the hand.
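To make the pie-menu idea concrete, the sketch below (plain Java, not MT4J code; all names and parameters are hypothetical) computes positions for menu items arranged in a semicircle that opens to the side of the touch point, so the user's hand does not cover them:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: lays out pie-menu items in a semicircle beside
 *  the touch point so the hand does not occlude them. */
public class PieMenuLayout {

    /** Simple value holder for a 2D position. */
    public static class Point {
        public final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    /**
     * Distributes n items over a half circle that opens to the right
     * of the touch point (from -90 to +90 degrees) at the given radius.
     */
    public static List<Point> layout(double touchX, double touchY,
                                     double radius, int n) {
        List<Point> positions = new ArrayList<>();
        if (n == 1) {
            positions.add(new Point(touchX + radius, touchY));
            return positions;
        }
        double start = -Math.PI / 2;        // top of the semicircle
        double step  = Math.PI / (n - 1);   // spread items evenly
        for (int i = 0; i < n; i++) {
            double angle = start + i * step;
            positions.add(new Point(touchX + radius * Math.cos(angle),
                                    touchY + radius * Math.sin(angle)));
        }
        return positions;
    }

    public static void main(String[] args) {
        // Three items around a touch at (100, 100), radius 80 px:
        for (Point p : layout(100, 100, 80, 3)) {
            System.out.printf(java.util.Locale.US,
                              "(%.1f, %.1f)%n", p.x, p.y);
        }
    }
}
```

The same layout routine could feed any widget toolkit: only the final positions change, not the rest of the application logic.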
Another example involves the manipulation of elements in a free work area, which should respect the laws of physics (such as speed, collision, and inertia), because users intuitively expect to handle the elements in these areas as if they were physical objects placed on a table. There is also the classic case of the mouse pointer: in applications that use gestural interaction there is no pointer showing where the user's focus is at all times, since the user does not keep a finger pressed on the screen continuously. With a mouse in a WIMP interface, by contrast, the cursor must always be visible so the user can quickly locate the pointer.
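The inertia effect mentioned above can be sketched in a few lines. The example below (plain Java, not MT4J code; names and constants are hypothetical) decays the velocity of a "flicked" element by a friction factor on each animation frame, so the element glides and slows down like an object sliding on a table:

```java
/** Illustrative sketch: simple inertia for a flicked interface element. */
public class InertiaSim {
    double x, y;                    // element position in pixels
    double vx, vy;                  // velocity in pixels per frame
    final double friction = 0.95;   // 5% speed loss per frame

    InertiaSim(double x, double y, double vx, double vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    /** Advances one animation frame; returns false when the motion
     *  has become too small to notice and the element should stop. */
    boolean step() {
        x += vx;
        y += vy;
        vx *= friction;
        vy *= friction;
        return Math.hypot(vx, vy) > 0.1;  // stop threshold
    }

    public static void main(String[] args) {
        // Element flicked from (0, 0) with velocity (12, 4) px/frame:
        InertiaSim sim = new InertiaSim(0, 0, 12, 4);
        int frames = 0;
        while (sim.step()) frames++;
        System.out.printf(java.util.Locale.US,
                          "stopped after %d frames at (%.1f, %.1f)%n",
                          frames, sim.x, sim.y);
    }
}
```

Tuning the friction factor and the stop threshold changes how "heavy" the element feels to the user.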
As classic examples of fingertip gestural interaction we can mention the gestures shown in Figure 1. These examples are simple, easily understood, and easily implemented, and allow, respectively, tapping, dragging, rotating, zooming out (decreasing the size), and zooming in (increasing the size) of an element of the interactive interface. Because these gestures are becoming increasingly common in natural interfaces, many users already expect to perform them as soon as they discover that an interface is multi-touch capable. This is a good indicator for designers and programmers when planning the interaction of their interfaces. For more examples of gestural interactions, refer to the links in the references section of this article.
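The rotate and pinch-zoom gestures reduce to simple geometry. The sketch below (plain Java, independent of any framework; names are hypothetical) derives the zoom factor and rotation angle from the previous and current positions of two touch points:

```java
/** Illustrative sketch: deriving zoom and rotation from the previous
 *  and current positions of two touch points (each point is {x, y}). */
public class TwoFingerGesture {

    /** Scale factor: ratio between the current and the previous
     *  distance separating the two fingers (> 1 means zoom in). */
    public static double scale(double[] prevA, double[] prevB,
                               double[] curA,  double[] curB) {
        double before = Math.hypot(prevB[0] - prevA[0], prevB[1] - prevA[1]);
        double after  = Math.hypot(curB[0]  - curA[0],  curB[1]  - curA[1]);
        return after / before;
    }

    /** Rotation: change in the angle of the line joining the fingers,
     *  in radians (positive = counter-clockwise in math coordinates). */
    public static double rotation(double[] prevA, double[] prevB,
                                  double[] curA,  double[] curB) {
        double before = Math.atan2(prevB[1] - prevA[1], prevB[0] - prevA[0]);
        double after  = Math.atan2(curB[1]  - curA[1],  curB[0]  - curA[0]);
        return after - before;
    }

    public static void main(String[] args) {
        // Fingers move apart from 100 px to 200 px: zoom in by 2x.
        double s = scale(new double[]{0, 0}, new double[]{100, 0},
                         new double[]{0, 0}, new double[]{200, 0});
        System.out.println("scale = " + s);   // → 2.0
    }
}
```

Frameworks such as MT4J expose these values through ready-made gesture processors, but the underlying computation is essentially this.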
Figure 1. Typical gestures in interfaces with multi-touch capability.
Interaction with an application through gestures, combined with an interface suited to this interaction, has been increasingly exploited in devices such as smartphones, tablets, kiosks, tables, and other multi-touch hardware. Given the demand for applications with this type of interaction, this article focuses on developing multi-touch applications in Java using the MT4J framework. It first presented the applications, hardware, and software possibilities that multi-touch interaction can provide, and it also describes how to install, configure, and work with the MT4J framework for developing platform-independent multi-touch applications.
In this first part of the article we saw how the multi-touch interface differs from the traditional keyboard/mouse interaction model provided by most desktop applications.
In part two, we will cover devices, hardware, and development alternatives.
Specification and implementations of the TUIO protocol.
Information on the TouchSmart line of computers from HP.
Touch Gesture Reference Guide.
Information about the Microsoft Sphere.
Information about the Gesture Cube.
Information about the Microsoft Touch Mouse.
Information about the BendDesk.
Information about the multi-touch floor.
Information about the SixthSense project.
A guide to building a low-cost multi-touch table.
Information about the Microsoft Surface.