The touch represents the heart of iOS interaction; it provides the core way that users communicate their intent to an application. Touches are not limited to button presses and keyboard interaction. You can design and build applications that work directly with users’ gestures in meaningful ways. This chapter introduces direct manipulation interfaces that go far beyond prebuilt controls. You’ll see how to create views that users can drag around the screen. You’ll also discover how to distinguish and interpret gestures, a high-level touch abstraction, and how to work with gesture recognizer classes, which automatically detect common interaction styles such as taps, swipes, and drags. By the time you finish this chapter, you’ll have learned many different ways to implement gesture control in your own applications.
Touches
Cocoa Touch implements direct manipulation in the simplest way possible. It sends touch events to the view you’re interacting with. As an iOS developer, you tell the view how to respond. Before jumping into gestures and gesture recognizers, you should gain a solid foundation in this underlying touch technology. It provides the essential components of all touch-based interaction.
Each touch conveys information: where the touch took place (both the current and previous location), what phase of the touch was used (essentially mouse down, mouse moved, mouse up in the desktop application world, corresponding to finger or touch down, moved, and up in the direct manipulation world), a tap count (for example, single-tap/double-tap), and when the touch took place (through a time stamp).
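To make that information concrete, here is a minimal sketch of a touch handler that logs the data carried by a single touch. It assumes the code lives in a UIView subclass; the accessors shown (locationInView:, previousLocationInView:, phase, tapCount, and timestamp) are standard UITouch members.

// A minimal sketch, assuming this code sits in a UIView subclass.
// It logs the information carried by a single UITouch.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self];          // where the touch is now
    CGPoint previous = [touch previousLocationInView:self]; // where it was at the last event
    NSLog(@"Phase: %d, taps: %lu, time: %f",
          (int)touch.phase,
          (unsigned long)touch.tapCount,
          touch.timestamp);
    NSLog(@"Moved from %@ to %@",
          NSStringFromCGPoint(previous),
          NSStringFromCGPoint(current));
}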
iOS uses what is called a responder chain to decide which objects should process touches. As their name suggests, responders are objects that respond to events, and they act as a chain of possible managers for those events. When the user touches the screen, the application looks for an object to handle this interaction. The touch is passed along, from view to view, until some object takes charge and responds to that event.
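As an informal illustration of the chain (this helper is not part of the touch-handling API; the method name is invented for the sketch), you can walk it by following each responder’s nextResponder link from inside a UIView subclass:

// A rough sketch: walk the responder chain upward from this view
// and log each responder along the way. Assumes a UIView subclass.
- (void)logResponderChain
{
    UIResponder *responder = self;
    while (responder != nil)
    {
        NSLog(@"%@", [responder class]);
        responder = [responder nextResponder];
    }
}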
At the most basic level, touches and their information are stored in UITouch objects, which are passed as groups in UIEvent objects. Each UIEvent object represents a single touch event, containing single or multiple touches. This depends both on how you’ve set up your application to respond (that is, whether you’ve enabled Multi-Touch interaction) and how the user touches the screen (that is, the physical number of touch points).
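For example, a view opts in to Multi-Touch through its multipleTouchEnabled property, and an event’s touches arrive as a set. The following sketch assumes a plain UIView subclass and simply reports how many fingers are involved:

// A minimal sketch, assuming a UIView subclass. The view opts in to
// Multi-Touch and reports how many fingers the current event involves.
- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        self.multipleTouchEnabled = YES; // allow more than one touch point
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // touches holds the new touches delivered to this view;
    // allTouches holds every touch that belongs to the event.
    NSLog(@"%lu new touches, %lu in the event overall",
          (unsigned long)touches.count,
          (unsigned long)[event allTouches].count);
}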
Your application receives touches in view or view controller classes; both implement touch handlers via inheritance from the UIResponder class. You decide where you process and respond to touches. Trying to implement low-level gesture control in non-responder classes has tripped up many new iOS developers.
Handling touches in views may seem counterintuitive. You probably expect to separate the way an interface looks (its view) from the way it responds to touches (its controller). Further, using views for direct touch interaction may seem to contradict Model-View-Controller design orthogonality, but it can be necessary and can help promote encapsulation.
Consider the case of working with multiple touch-responsive subviews such as game pieces on a board. Building interaction behavior directly into view classes allows you to send meaningful semantically rich feedback to your core application code while hiding implementation minutia. For example, you can inform your model that a pawn has moved to Queen’s Bishop 5 at the end of an interaction sequence rather than transmit a meaningless series of vector changes. By hiding the way the game pieces move in response to touches, your model code can focus on game semantics instead of view position updates.
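Here is a hypothetical sketch of that idea. The BoardPieceDelegate protocol, the class name, and the square naming are invented for illustration only: the view tracks raw touches internally and reports just the semantic result to its delegate.

#import <UIKit/UIKit.h>

// Hypothetical sketch, assuming ARC. The view hides the raw vector
// changes and sends only a meaningful, semantic move to its delegate.
@protocol BoardPieceDelegate <NSObject>
- (void)piece:(id)piece didMoveToSquare:(NSString *)squareName;
@end

@interface BoardPieceView : UIView
@property (nonatomic, weak) id <BoardPieceDelegate> delegate;
@end

@implementation BoardPieceView
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Follow the finger; these position updates stay inside the view.
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Report only the meaningful outcome, e.g. "QB5", to the model layer.
    NSString *squareName = @"QB5"; // in practice, derived from self.center
    [self.delegate piece:self didMoveToSquare:squareName];
}
@end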
Drawing presents another reason to work in the UIView class. When your application handles any kind of drawing operation in response to user touches, you need to implement touch handlers in views. Unlike views, view controllers don’t implement the all-important drawRect: method needed for providing custom presentations.
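As a sketch of the drawing case, the following view (the class name, points array, and dot size are illustrative choices, assuming ARC) collects points in its touch handler and renders them in drawRect::

#import <UIKit/UIKit.h>

// A minimal sketch of a touch-driven drawing view.
@interface DotView : UIView
@property (nonatomic, strong) NSMutableArray *points;
@end

@implementation DotView
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (!self.points) self.points = [NSMutableArray array];
    UITouch *touch = [touches anyObject];
    [self.points addObject:[NSValue valueWithCGPoint:[touch locationInView:self]]];
    [self setNeedsDisplay]; // request a redraw that includes the new point
}

- (void)drawRect:(CGRect)rect
{
    [[UIColor blackColor] setFill];
    for (NSValue *value in self.points)
    {
        CGPoint p = [value CGPointValue];
        UIRectFill(CGRectMake(p.x - 2.0f, p.y - 2.0f, 4.0f, 4.0f)); // a small dot per point
    }
}
@end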
Working at the UIViewController class level also has its perks. Instead of pulling out primary handling behavior into a secondary class implementation, adding touch management directly to the view controller allows you to interpret standard gestures, such as tap-and-hold or swipes, where those gestures have meaning. This better centralizes your code and helps tie controller interactions directly to your application model.
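For instance, a view controller can interpret a horizontal swipe by comparing where a touch began with where it ended. This sketch is illustrative only; the class name and the 75-point threshold are arbitrary choices, not values taken from this chapter.

#import <UIKit/UIKit.h>

// A minimal sketch, assuming a UIViewController subclass.
@interface SwipeAwareViewController : UIViewController
@property (nonatomic) CGPoint touchStartPoint;
@end

@implementation SwipeAwareViewController
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.touchStartPoint = [touch locationInView:self.view];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint endPoint = [touch locationInView:self.view];
    CGFloat dx = endPoint.x - self.touchStartPoint.x;
    if (fabs(dx) > 75.0f) // arbitrary swipe threshold for illustration
        NSLog(@"Detected a horizontal swipe (%@)", (dx > 0) ? @"right" : @"left");
}
@end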
In the following sections and recipes, you’ll discover how touches work, how you can respond to them in your apps, and how to connect what a user sees with how that user interacts with the screen.
Phases
Touches have life cycles. Each touch can pass through any of five phases that represent the progress of the touch within an interface. These phases are as follows:
- UITouchPhaseBegan - Starts when the user touches the screen.
- UITouchPhaseMoved - Means a touch has moved on the screen.
- UITouchPhaseStationary - Indicates that a touch remains on the screen surface but that there has not been any movement since the previous event.
- UITouchPhaseEnded - Gets triggered when the touch is pulled away from the screen.
- UITouchPhaseCancelled - Occurs when the iOS system stops tracking a particular touch. This usually happens due to a system interruption, such as when the application is no longer active or the view is removed from the window.
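As a rough sketch of how these constants appear in code (the method name is illustrative, and the code assumes a UIView or other UIResponder subclass), you can inspect each touch’s phase property directly:

// A rough sketch: inspect the phase of every touch in an event.
- (void)logPhasesForEvent:(UIEvent *)event
{
    for (UITouch *touch in [event allTouches])
    {
        switch (touch.phase)
        {
            case UITouchPhaseBegan:      NSLog(@"Touch began");      break;
            case UITouchPhaseMoved:      NSLog(@"Touch moved");      break;
            case UITouchPhaseStationary: NSLog(@"Touch stationary"); break;
            case UITouchPhaseEnded:      NSLog(@"Touch ended");      break;
            case UITouchPhaseCancelled:  NSLog(@"Touch cancelled");  break;
            default: break;
        }
    }
}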
Taken as a whole, these five phases form the interaction language for a touch event. They describe all the possible ways that a touch can progress or fail to progress within an interface and provide the basis for control for that interface. It’s up to you as the developer to interpret those phases and provide reactions to them. You do that by implementing a series of responder methods.
Touches and Responder Methods
All subclasses of the UIResponder class, including UIView and UIViewController, respond to touches. Each class decides whether and how to respond. When choosing to do so, they implement customized behavior when a user touches one or more fingers down in a view or window.
Predefined callback methods handle the start, movement, and release of touches from the screen. Corresponding to the phases you’ve already seen, the methods involved are as follows:
- touchesBegan:withEvent: - Gets called at the starting phase of the event, as the user starts touching the screen.
- touchesMoved:withEvent: - Handles the movement of the fingers over time.
- touchesEnded:withEvent: - Concludes the touch process, where the finger or fingers are released. It provides an opportune time to clean up any work that was handled during the movement sequence.
- touchesCancelled:withEvent: - Called when Cocoa Touch must respond to a system interruption of the ongoing touch event.
Each of these is a UIResponder method, often implemented in a UIView or UIViewController subclass. All views inherit basic nonfunctional versions of the methods. When you want to add touch behavior to your application, you override these methods and add a custom version that provides the responses your application needs. Notice that UITouchPhaseStationary does not generate a callback.
Your classes can implement all or just some of these methods. For real-world deployment, you will always want to implement a touches-cancelled handler to cover the case of a user dragging a finger offscreen or of an incoming phone call, both of which cancel an ongoing touch sequence. As a rule, you can generally redirect a cancelled touch to your touchesEnded:withEvent: method. This allows your code to complete the touch sequence cleanly, even though the user’s finger never properly left the screen. Apple recommends overriding all four methods as a best practice when working with touches.
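Putting the four handlers together, here is a minimal sketch of a draggable view, assuming ARC and a plain UIView subclass; the class name is illustrative. Note how touchesCancelled:withEvent: simply hands off to touchesEnded:withEvent:.

#import <UIKit/UIKit.h>

// A minimal sketch of a draggable view implementing all four handlers.
@interface DraggableView : UIView
@end

@implementation DraggableView
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Bring the touched view to the front so it drags above its siblings.
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Follow the finger by moving the view's center.
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Any cleanup or model notification for a completed drag goes here.
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Treat a cancelled touch like a finished one.
    [self touchesEnded:touches withEvent:event];
}
@end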
NOTE: Views have a mode called exclusive touch. When enabled, this property prevents touches from being delivered to other views in the same window while the exclusive-touch view is tracking a touch; that view handles its touch events exclusively.
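Enabling it is a single property assignment, as in this illustrative helper (the function name is invented for the sketch):

// Illustrative sketch: mark a view as exclusive-touch so it will not
// share touch events with other views in its window while tracking.
static void MakeViewExclusive(UIView *view)
{
    view.userInteractionEnabled = YES; // make sure the view accepts touches at all
    view.exclusiveTouch = YES;
}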