iPhone Bootcamp Summary

Posted on December 05, 2008 by Scott Leberknight

So, after having actually written a blog entry covering each day of the iPhone bootcamp at Big Nerd Ranch, I thought a more broad summary would be in order. (That, and I'm sitting in the airport waiting for my flight this evening.) Anyway, the iPhone bootcamp was my second BNR class (I took the Cocoa bootcamp last April and wrote a summary blog about it here.)

As with the Cocoa bootcamp, I had a great time and learned a ton about iPhone development. I met a lot of really cool and interesting people with a wide range of backgrounds and experiences. This seems to be a trend at BNR: the people who attend have a variety of knowledge and experience and bring totally different perspectives to the class. The students are also highly motivated people in general, which, when combined with excellent instruction and great lab coding exercises all week, makes for a great learning environment.

Another interesting thing that happens at BNR is that in this environment, you somehow don't burn out and can basically write code all day every day and many people keep at it into the night hours. I think this is due to the way the BNR classes combine short, targeted lecture with lots and lots and lots of hands-on coding. In addition, taking an afternoon hike through untouched nature really helps to refresh you and keep energy levels up. (Maybe if more companies, and the USA for that matter, encouraged this kind of thing people would actually be more productive rather than less.) And because of the diversity of the students, every meal combines good food with interesting conversation.

So, thanks to our instructors Joe and Brian for a great week of learning and to all the students for making it a great experience. Can't wait to take the OpenGL bootcamp sometime in the future.

iPhone Bootcamp Day 5

Posted on December 05, 2008 by Scott Leberknight

Today is Day 5 — the final day — of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway. The last day of a Big Nerd Ranch bootcamp is a half-day: you have breakfast, have class until lunch, have lunch, and then shuttle back to the airport. Unfortunately for me my flight isn't until 7pm so I have a lot of time to write this entry!

See here for a list of each day's blog entries.

Preferences

First up on this last day of iPhone bootcamp is preferences, which allow an app to keep user settings in a centralized location. While you can store and manage user settings yourself using a custom UI, the easiest way is to use the iPhone's built-in Settings app. The Settings app is the central location where apps can store user settings, and it's also where all the other common iPhone settings live, which is convenient.

You work with the NSUserDefaults object to store app settings keyed by the app's bundle ID, for example com.acme.MyKillerApp. You'll also want to provide "factory default" settings using the registerDefaults: method; these values are used if the user has not changed anything. NSUserDefaults is basically a dictionary of key-value pairs, and you obtain the standard user defaults via the standardUserDefaults method. From there you write settings using methods like setObject:forKey: and setInteger:forKey:, remove them using removeObjectForKey:, and read them back using methods like objectForKey: and integerForKey:.
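
To make this concrete, here's a minimal sketch of what working with NSUserDefaults looks like; the "minimumNumber" key and the factory default value are made up for illustration:

    // Register "factory defaults" early, e.g. in applicationDidFinishLaunching:
    NSDictionary *factoryDefaults =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:1]
                                    forKey:@"minimumNumber"];
    [[NSUserDefaults standardUserDefaults] registerDefaults:factoryDefaults];

    // Later, read and write settings anywhere in the app
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    NSInteger min = [defaults integerForKey:@"minimumNumber"];
    [defaults setInteger:min + 1 forKey:@"minimumNumber"];
    [defaults removeObjectForKey:@"minimumNumber"];   // falls back to the factory default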

As mentioned earlier, the Settings app is a uniform way of setting preferences for all apps, and users know to look there to change an app's settings. In addition, managing the settings separately from your application saves screen real estate, which is important since the screen isn't all that big. You define the settings for your application using a Settings bundle, which is a standard property list. You use special property keys, such as PSToggleSwitchSpecifier, to define the settings for your application. The Settings app reads the Settings bundle and property list to create the settings UI for your app. For example, if you define settings with a PSToggleSwitchSpecifier, a PSTextFieldSpecifier, and a PSMultiValueSpecifier, the user will see a toggle switch, a text field, and a multi-value list when she goes to edit your app's settings.

For the lab exercise, we modified our Random Number app (from Day 4 during the Web Services section) to have user settings controlling how the random numbers are generated — for example the minimum number and the range of numbers. It is quite easy to add user settings to your app on the iPhone using the Settings app and bundle.

Instruments

Joe briefly talked about using the Instruments app to do powerful debugging, for example detecting memory leaks in your app. He then demonstrated using the Leaks tool within Instruments to track a memory leak — which he had previously introduced intentionally — in the Tile Game app. Leaks was able to pinpoint the exact line of code where a leaked object (i.e. one that was never released) was allocated. This is pretty cool.

Joe also showed what leaks look like when they come from C code versus Objective-C code. C allocation leaks (i.e. memory allocated using malloc) show up in Instruments as just 'General Block' and, from what I gather, are probably not much fun to track down. Leaks from Objective-C code are easier and, as I just mentioned, Leaks can show you the exact line of code where a leaked object was allocated.

Networking

By this time, it is getting closer to lunch, and the end of the class. Sad, I know. Probably one of the hardest and broadest topics was saved for last, given that you could spend an entire course on nothing but network programming. On the iPhone a key concept when doing network programming is "reachability." Reachability does not mean that a network resource is definitely available; rather it means the resource is "not impossible" to reach. Another important thing on the iPhone is figuring out what type of connection you have, for example EDGE or 3G or Wi-Fi. You might choose to do different things based on the type of connection.

With the SystemConfiguration framework you can determine if the outside world is reachable, again with the caveat that "reachable" means "not impossible to reach." When coding you work with "reachability references" and "reachability flags" which describe the state and type of the connection, for example reachable, connection required, is wide-area network, and so on.
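
Here's a rough sketch of checking reachability with the SystemConfiguration framework. The host name is just an example, and a real app would probably use the asynchronous callback-based variant rather than this simple synchronous check:

    #import <SystemConfiguration/SystemConfiguration.h>

    SCNetworkReachabilityRef reachability =
        SCNetworkReachabilityCreateWithName(NULL, "www.bignerdranch.com");
    SCNetworkReachabilityFlags flags;
    if (SCNetworkReachabilityGetFlags(reachability, &flags)) {
        BOOL reachable       = (flags & kSCNetworkReachabilityFlagsReachable) != 0;
        BOOL needsConnection = (flags & kSCNetworkReachabilityFlagsConnectionRequired) != 0;
        BOOL onWWAN          = (flags & kSCNetworkReachabilityFlagsIsWWAN) != 0;   // EDGE/3G vs. Wi-Fi
        NSLog(@"reachable=%d needsConnection=%d wwan=%d", reachable, needsConnection, onWWAN);
    }
    CFRelease(reachability);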

We then learned a bit about Bonjour, which is basically a zero-configuration network service discovery mechanism that works on all platforms, has implementations in many languages, and is easy to use. For example, Bonjour makes it ridiculously easy to find printers on a network, as opposed to the hoops you have to jump through in other operating systems. You can use Bonjour on the iPhone to discover and resolve Bonjour services. You use NSNetService and NSNetServiceBrowser to accomplish this, and you specify a delegate to receive notifications when services appear and disappear.
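
A minimal sketch of browsing for services; the service type (_http._tcp.) is just an example:

    // e.g. in viewDidLoad: start browsing for HTTP services on the local network
    NSNetServiceBrowser *browser = [[NSNetServiceBrowser alloc] init];
    [browser setDelegate:self];
    [browser searchForServicesOfType:@"_http._tcp." inDomain:@"local."];

    // Delegate callback fired as services appear
    - (void)netServiceBrowser:(NSNetServiceBrowser *)netServiceBrowser
               didFindService:(NSNetService *)service
                   moreComing:(BOOL)moreComing
    {
        NSLog(@"Found service: %@", [service name]);
        [service setDelegate:self];
        [service resolveWithTimeout:5.0];   // resolve to get the host and port
    }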

In addition to Bonjour, you can also send and receive data over the network as you would expect. To send and receive data you can either use a C-based Core Foundation interface or use a stream-based interface with CFStream. You register read and write callbacks with the main iPhone run loop, which calls your functions when network events occur.
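
As a sketch of the stream-based approach (the host, port, and callback name here are all made up), you create a read stream to a host, register a client callback for the events you care about, and schedule the stream on the run loop:

    static void MyReadCallback(CFReadStreamRef stream, CFStreamEventType event, void *info) {
        if (event == kCFStreamEventHasBytesAvailable) {
            UInt8 buffer[1024];
            CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
            NSLog(@"read %ld bytes", (long)bytesRead);
        }
    }

    // Somewhere in your setup code:
    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, CFSTR("example.com"), 80,
                                       &readStream, &writeStream);
    CFStreamClientContext context = { 0, self, NULL, NULL, NULL };
    CFReadStreamSetClient(readStream,
                          kCFStreamEventHasBytesAvailable | kCFStreamEventEndEncountered,
                          MyReadCallback, &context);
    CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
    CFReadStreamOpen(readStream);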

Network programming is really hard, since there are so many things that can go wrong and you need to deal with many if not all of them. Joe suggested that before jumping right into hardcore networking, developers should become familiar with the concepts, and he recommended Beej's Guide to Network Programming as a good primer. Given that I've really only done simple socket programming with higher-level APIs like Java's, and have not really done much of it, I'll need to check that out. And so this concludes this episode of the iPhone bootcamp blog, since it is now time for lunch and, sadly, the course is over!

Random Thoughts

This week has been a really intensive introduction to iPhone programming. We've gone from a very simple app on Day 1 and covered all kinds of things like form-based UIs and text manipulation; Core Location; Core Graphics; view transitions and Core Animation; using the iPhone camera and accelerometer; retrieving web-based data; manipulating Address Book data; setting user preferences, and finally networking and debugging using Instruments. We've basically covered an entire book's worth of content and written about 10 iPhone applications ranging from simple to not-so-simple. If you are interested in programming the iPhone, I definitely recommend checking out the iPhone bootcamp at Big Nerd Ranch. Because the OpenGL stuff we did on the iPhone was so cool, I'm thinking my next course to take might be the OpenGL bootcamp but that probably won't happen until next year sometime! With my new knowledge I plan to go home and try my hand at creating a snowflake or snowstorm application that my daughters can play with. If I actually get it working maybe I'll write something about it.

iPhone Bootcamp Day 4

Posted on December 04, 2008 by Scott Leberknight

Today is Day 4 of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway. Unfortunately that means we are closer to the end than to the beginning.

See here for a list of each day's blog entries.

View Transitions

Since we had not completely finished up the Core Graphics lab from Day 3, we spent the first hour completing the lab exercise, which was to build a "Tile Game" application where you take an image, split it into tiles, and the user can move the tiles around and try to arrange them in the correct way to produce the full image. Actually you don't really split the image; you use a grid of custom objects — which are subclasses of UIControl — and translate the portion of the image being "split" to the location in the grid where the tile is located. When a user moves the tiles around, each tile still displays its original portion of the image. So, you are using the same image but only displaying a portion of it in each tile.

After completing the lab, Joe went over view transitions. View transitions allow you to easily add common animations (like the peel and flip animations you see on many iPhone apps) to your applications to create nice visual effects and a good user experience. For example, in our Tile Game app, when you move a tile it abruptly jumps from the old location to the new location with no transition of any kind. Also, when you touch the "info" button to select a new tiled-image there is no animation. It would be better to make the tiles slide from square to square, and to flip the tile over to the image selection screen. View transitions let you do this pretty easily. (Joe mentioned that later we'll be covering Core Animation which is more powerful and lets you perform all kinds of advanced animations.)

So, to reiterate, view transitions are meant for simple transitions like flipping, peeling, and so on. There are several "inherently animatable" properties of a UIView: the view frame, the view bounds, the center of the view, the transform (orientation) of the view, and the alpha (opacity) of the view. For example, to create a pulsing effect you could perform two alpha animations one after the other: the first would transition the view from completely opaque to completely transparent, and the second would transition back from completely transparent to completely opaque. For the View Transition lab we used a "flip" animation to flip between views.

You use class methods of UIView to begin animations, set up view transitions, and commit the transitions. You use beginAnimations:context: to begin animations. Then you define the animations using methods like setAnimationDuration: and setAnimationTransition:forView:cache:, which set the length of time over which the animation occurs and the type of animation (such as peel or flip), respectively. Then, inside this "animation block," you perform actions like adding and removing subviews or changing any of the "inherently animatable" properties. To start the animations on screen you call commitAnimations after the animation block. This seems similar to how transactions work in a relational database, in that you begin a transaction, define what you want done, and then when satisfied you commit those "changes." Joe mentioned that the animation system essentially uses a hidden or offscreen view to store the actions and only shows the animations and actions when they are committed.
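
Here's roughly what an animation block looks like; the duration, the flip transition, and the view names are just examples:

    [UIView beginAnimations:@"flipToImagePicker" context:NULL];
    [UIView setAnimationDuration:0.75];
    [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight
                           forView:containerView
                             cache:YES];
    // Changes to "inherently animatable" properties (and subview swaps) go here
    [frontView removeFromSuperview];
    [containerView addSubview:backView];
    [UIView commitAnimations];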

Core Animation

Next up: Core Animation. While view transitions are for simple animations, with Core Animation you can animate almost anything you want. The basic idea here is that when using Core Animation, the animation takes place on a CALayer rather than the UIView. There is one CALayer per UIView, and the CALayer is a cached copy of the content in the UIView. As soon as an animation begins, the UIView and CALayer are interchanged, and all drawing operations during the animation take place on the CALayer; in other words when an animation begins the CALayer becomes visible and handles drawing operations. After the animation ends the UIView is made visible again and the CALayer is no longer visible, at which point the UIView resumes responsibility for drawing.

You create animations by instantiating subclasses of CAAnimation and adding them to the CALayer. CABasicAnimation is the most basic form of animation: you animate a specific property, known as a key path, performing a linear transition from an original value to a final value. For example, using a key path of "opacity" you can transition an object from opaque to translucent, or vice versa.
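
A sketch of a basic opacity fade with CABasicAnimation (the values, duration, and someView are arbitrary):

    #import <QuartzCore/QuartzCore.h>

    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = [NSNumber numberWithFloat:1.0];
    fade.toValue   = [NSNumber numberWithFloat:0.0];
    fade.duration  = 0.5;
    [someView.layer addAnimation:fade forKey:@"fade"];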

You can also combine multiple animations using a CAAnimationGroup, which is itself a subclass of CAAnimation (hey, that's the Composite design pattern for anyone who cares anymore). You can use CAKeyframeAnimation to perform nonlinear animations, in which you have one or more "waypoints" during the animation. In other words, with CAKeyframeAnimation you define transitions for a specific attribute — such as opacity, position, and size — that have different values at the various waypoints. For example, in the Core Animation lab exercise we defined a rotation animation to "jiggle" an element in the Tile Game we created earlier to indicate that a tile cannot be moved. The "jiggle" animation uses the CAKeyframeAnimation to rotate a tile left, then right, then left, then right, and finally back to its original position to give the effect of it jiggling back and forth several times.
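
And a rough sketch of a jiggle using CAKeyframeAnimation; the rotation values (in radians), the duration, and tileView are made up:

    CAKeyframeAnimation *jiggle =
        [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.z"];
    jiggle.values = [NSArray arrayWithObjects:
        [NSNumber numberWithFloat:0.0],
        [NSNumber numberWithFloat:0.1],    // rotate a little one way...
        [NSNumber numberWithFloat:-0.1],   // ...then the other...
        [NSNumber numberWithFloat:0.1],
        [NSNumber numberWithFloat:0.0],    // ...and back where it started
        nil];
    jiggle.duration = 0.4;
    [tileView.layer addAnimation:jiggle forKey:@"jiggle"];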

To finish up Core Animation, Joe covered how to use media timing functions to create various effects during animations. For example, we used the kCAMediaTimingFunctionEaseIn timing function while sliding tiles in the Tile Game. Ease-in causes the tile animation to start slowly and speed up as it approaches the final location. Finally, I should mention that, as with most iPhone APIs, you can set up a delegate to respond when an animation ends using animationDidStop:finished:. For example, when one animation stops you could start another one and chain several animations together.

Camera

After a cheeseburger and fries lunch, we learned how to use the iPhone's camera. The bad news about the camera is that you can't do much with it: you can take pictures (obviously) and you can choose photos from the photo library. The good news is that using the camera in code is really simple. You use a UIImagePickerController, which is a subclass of UINavigationController, and present it modally from the current view controller using the presentModalViewController:animated: method. Since it is modal, it takes over the application until the user cancels or finishes the operation. Once a user takes a picture or selects one from the photo library, UIImagePickerController hands the resulting image to its delegate.

There are two delegate methods you can implement. One is called when the user finishes picking an image and the other is called if the user cancels the operation. The delegate method called when a user selects a picture receives the chosen image and some "editing info" which allows you to move and scale the image, among other things. The lab exercise for the camera involved modifying the Tile Game to allow the user to take a photo which then becomes the game's tiled image.
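
A sketch of what the camera code might look like (the setTileImage: helper is made up):

    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;   // or ...PhotoLibrary
    picker.delegate = self;
    [self presentModalViewController:picker animated:YES];

    - (void)imagePickerController:(UIImagePickerController *)picker
            didFinishPickingImage:(UIImage *)image
                      editingInfo:(NSDictionary *)editingInfo
    {
        [self setTileImage:image];   // do something with the chosen photo
        [self dismissModalViewControllerAnimated:YES];
        [picker release];
    }

    - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
    {
        [self dismissModalViewControllerAnimated:YES];
        [picker release];
    }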

Accelerometer

The accelerometer on the iPhone is cool. It measures the G-forces on the iPhone. You use a simple, delegate-based API to handle accelerations. While the API itself is pretty simple, the math involved and thinking about the orientation of the iPhone and figuring out how to get the information you want is the hard part. But you'd kind of expect that transforming acceleration information in 3D space into information your application can use isn't exactly the easiest thing. (Or, maybe it's just that I've been doing web applications too long and don't know how to do math anymore.)

There is one shared instance of the accelerometer, the UIAccelerometer object. As you might have guessed, you use a delegate to respond to acceleration events, specifically the accelerometer:didAccelerate: method, which provides you the UIAccelerometer and a UIAcceleration object. You need to specify an update interval for the accelerometer, which determines how often the accelerometer:didAccelerate: delegate method gets called.

The UIAcceleration object contains the measured G-force along the x, y, and z axes and a time stamp when the measurements were taken. One thing Joe mentioned is that you should not assume a base orientation of the iPhone whenever you receive acceleration events. For example, when your application starts you have no idea what the iPhone's orientation is. To do something like determine if the iPhone is being shaken along the z-axis (which is perpendicular to the screen) you can take an average of the acceleration values over a sample period and look at the standard deviation; if the value exceeds some threshold your application can decide the iPhone is being shaken and you can respond accordingly. For the lab exercise, we modified the Tile Game to use the accelerometer to slide the tiles around as the user tilts the iPhone and to randomize the tiles if the user shakes the iPhone. Pretty cool stuff!
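
Here's roughly what the accelerometer API looks like in code; the update interval and the crude z-axis check are just examples, and randomizeTiles is a made-up method from our Tile Game:

    UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
    accelerometer.updateInterval = 1.0 / 30.0;   // 30 updates per second
    accelerometer.delegate = self;

    - (void)accelerometer:(UIAccelerometer *)accelerometer
            didAccelerate:(UIAcceleration *)acceleration
    {
        // G-forces along each axis; z is perpendicular to the screen
        UIAccelerationValue z = acceleration.z;
        if (z > 1.5 || z < -1.5) {
            // crude "shake" check; a real app would average over a sample period
            [self randomizeTiles];
        }
    }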

Web Services

Well, I guess the fun had to end at some point. That point was when we covered Web Services, mainly because it reminded me that, no, I'm not going to be programming the iPhone next week for the project I'm on, and instead I'm going to be doing "enterprisey" and "business" stuff. Oh well, if we must, then we must.

Fortunately, Joe is defining web services as "XML over HTTP" and not as WS-Death-*, though of course if you really want to reduce the fun-factor go ahead Darth. To retrieve web resources you can use NSURL to create a URL, NSMutableURLRequest to create a new request, and finally NSURLConnection to make the connection, send the request, and get the response. You could also use NSURLConnection with a delegate to do asynchronous requests, which might be better to prevent your application's UI from locking up until the request completes or times out.
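
A minimal synchronous fetch looks something like this (the URL is abbreviated and illustrative; a real app would more likely use the asynchronous, delegate-based API so the UI doesn't block):

    NSURL *url = [NSURL URLWithString:@"http://www.random.org/integers/?num=10"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"GET"];

    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *data = [NSURLConnection sendSynchronousRequest:request
                                         returningResponse:&response
                                                     error:&error];
    if (data == nil) {
        NSLog(@"Request failed: %@", [error localizedDescription]);
    }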

If you have to deal with an XML response, you can use NSXMLParser, which is an event-based (SAX) parser. (By default there is no tree-based (DOM) parser on the iPhone, but apparently you can use libxml to parse XML documents and get back a doubly-linked list which you can use to traverse the nodes of the document.) You give NSXMLParser a delegate which receives callbacks during the parse, for example parserDidStartDocument: and parser:didStartElement:namespaceURI:qualifiedName:attributes:. Then it's time to kick it old school, handling the SAX events as they come in, maintaining the stack of elements you've seen, and writing logic to respond to each type of element you care about. For our Web Services lab we wrote a simple application that connected to random.org, which generates random numbers based on atmospheric noise, made a request for 10 random numbers, received the response as XHTML, extracted the numbers, and finally displayed them to the user.
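
Kicking off the parse and handling a couple of the delegate callbacks looks roughly like this:

    NSXMLParser *parser = [[NSXMLParser alloc] initWithData:data];
    [parser setDelegate:self];
    [parser parse];
    [parser release];

    // A couple of the delegate callbacks
    - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
        attributes:(NSDictionary *)attributeDict
    {
        // e.g. push elementName onto a stack you maintain yourself
    }

    - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
    {
        // accumulate character data for the element you're currently inside
    }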

Address Book

Last up for today was using the iPhone Address Book. There are two ways to use the address book. The first way leverages the standard iPhone Address Book UI. The second uses a lower-level C API in the AddressBook framework.

When you use the Address Book UI, you use the standard iPhone address book interface in your application. It allows the user to select a person or individual items like an email address or phone number. You must have a UIViewController and define a delegate conforming to the ABPeoplePickerNavigationControllerDelegate protocol. You then present a modal view controller, passing it an ABPeoplePickerNavigationController which takes over and displays the standard iPhone address book UI. You receive callbacks via delegate methods such as peoplePickerNavigationController:shouldContinueAfterSelectingPerson: to determine what action(s) to take once a person has been chosen. You, as the caller, are responsible for removing the people picker by calling the dismissModalViewControllerAnimated: method. We created a simple app that uses the Address Book UI during the first part of the lab exercise.
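
A sketch of presenting the people picker and handling the selection:

    #import <AddressBookUI/AddressBookUI.h>

    ABPeoplePickerNavigationController *picker =
        [[ABPeoplePickerNavigationController alloc] init];
    picker.peoplePickerDelegate = self;
    [self presentModalViewController:picker animated:YES];

    - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker
          shouldContinueAfterSelectingPerson:(ABRecordRef)person
    {
        // do something with the chosen person, then put the picker away
        [self dismissModalViewControllerAnimated:YES];
        return NO;   // NO means "don't drill into the person's details"
    }

    - (void)peoplePickerNavigationControllerDidCancel:(ABPeoplePickerNavigationController *)peoplePicker
    {
        [self dismissModalViewControllerAnimated:YES];
    }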

Next we covered the AddressBook framework, which is bare-bones, pedal-to-the-metal C code and a low-level C API. However, you get "toll-free bridging" of certain objects, meaning you can treat them as Objective-C objects; for example, certain objects can be cast directly to an Objective-C NSString. Another thing to remember is that with the AddressBook framework there is no autorelease pool, and you must call CFRelease on objects returned by functions that create or copy values. Why would you ever want to use the AddressBook framework over the Address Book UI? Mainly because it provides more functionality and allows you to directly access, and potentially manipulate, iPhone address book data via the API. For the second part of the Address Book lab, we used the AddressBook API to retrieve specific information, such as the mobile phone number, for selected contacts.
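
And a rough sketch of pulling a mobile number out of a selected person with the C API; note all the CFRelease calls, since there's no autorelease pool helping you here:

    #import <AddressBook/AddressBook.h>

    // 'person' is the ABRecordRef handed to you by the people picker
    ABMultiValueRef phones =
        (ABMultiValueRef)ABRecordCopyValue(person, kABPersonPhoneProperty);
    for (CFIndex i = 0; i < ABMultiValueGetCount(phones); i++) {
        CFStringRef label  = ABMultiValueCopyLabelAtIndex(phones, i);
        CFStringRef number = (CFStringRef)ABMultiValueCopyValueAtIndex(phones, i);
        if (label != NULL && CFStringCompare(label, kABPersonPhoneMobileLabel, 0) == kCFCompareEqualTo) {
            // toll-free bridging: a CFStringRef can be treated as an NSString
            NSLog(@"Mobile: %@", (NSString *)number);
        }
        if (label != NULL) CFRelease(label);
        if (number != NULL) CFRelease(number);
    }
    CFRelease(phones);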

Random Thoughts

Using view transitions and Core Animation can really spice up your apps and create a really cool user experience. (Of course you could probably also create really bad UIs if they're overused.) The camera is pretty cool, but the most fun today was using the accelerometer to respond to shaking and moving the iPhone. Web services. (Ok, that's enough on the enterprise front.) Last, the various address book frameworks can certainly be useful in applications where you need to select contacts.

Something in one of the labs we did today was pretty handy, namely that messages to nil objects are ignored in Objective-C. There are a lot of times this feature would be hugely useful, though I can also see how it might cause debugging certain problems to be really difficult! There's even a design pattern named the Null Object Pattern since in languages like Java you get nasty things like null-pointer exceptions if you try to invoke methods on a null object.

Man, we really covered a lot today, and now I am sad that tomorrow is the last day of iPhone bootcamp. I should become a professional Big Nerd Ranch attendee; I think that would be a great job!

iPhone Bootcamp Day 3

Posted on December 03, 2008 by Scott Leberknight

Today is Day 3 of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway.

See here for a list of each day's blog entries.

Media

Today we started off learning how to play audio and video files by creating a simple application that allows you to play a system sound, an audio file, and a movie. If all you want to do is play .caf, .wav, or .aiff audio files that are less than 30 seconds in length, you're in luck, because you can simply use AudioServicesCreateSystemSoundID to register a sound with the system and then use AudioServicesPlaySystemSound to play the sound. On the other hand, if you want to play almost any type of audio file, you can use AVAudioPlayer, which really isn't all that much more complicated. You create an AVAudioPlayer and then implement AVAudioPlayerDelegate methods like audioPlayerDidFinishPlaying:successfully: to respond to audio player events. You simply call play and stop to control playback and you can use isPlaying to check playback status. Recording audio is apparently more difficult, and we didn't really cover it in lecture or lab, though the exercise book has a whole appendix devoted to creating your own voice recorder application.

For movie playback you can use MPMoviePlayerController. It is also pretty easy to use. But, one caveat is that it completely takes over your iPhone application when you call play, and the user has no control until the movie ends or until the user exits your application.
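
Here are bare-bones sketches of the three kinds of playback (the file names are made up; this needs the AudioToolbox, AVFoundation, and MediaPlayer frameworks):

    // 1. A short system sound (.caf, .wav, or .aiff under ~30 seconds)
    NSURL *dingURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:@"ding" ofType:@"caf"]];
    SystemSoundID dingSound;
    AudioServicesCreateSystemSoundID((CFURLRef)dingURL, &dingSound);
    AudioServicesPlaySystemSound(dingSound);

    // 2. Arbitrary audio with AVAudioPlayer
    NSURL *songURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:@"song" ofType:@"mp3"]];
    AVAudioPlayer *player =
        [[AVAudioPlayer alloc] initWithContentsOfURL:songURL error:NULL];
    player.delegate = self;
    [player play];

    // 3. Movie playback; this takes over the screen until the movie ends
    NSURL *movieURL = [NSURL fileURLWithPath:
        [[NSBundle mainBundle] pathForResource:@"clip" ofType:@"m4v"]];
    MPMoviePlayerController *moviePlayer =
        [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
    [moviePlayer play];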

Low Memory Warnings

We next took a (very) short detour talking about low memory warnings. When your application is using too much memory, the iPhone sends your application delegate the applicationDidReceiveMemoryWarning: message. In that method you are supposed to release as much memory as you possibly can. However, according to Joe, the iPhone does not really provide much information about how much memory you need to release, or how much memory will trigger a low memory warning in the first place! Joe says to just release as much memory as you possibly can immediately, or else the iPhone can simply terminate your application. All your UIViewControllers are also sent didReceiveMemoryWarning. The default implementation checks if you have implemented loadView, and if so releases its view instance; the next time the view needs to be shown on screen, it gets reconstructed. One last thing: the iPhone Simulator lets you simulate a low memory warning via the "Simulate Memory Warning" option on the Hardware menu.
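
In a view controller, responding to the warning looks something like this (the thumbnailCache ivar is made up):

    - (void)didReceiveMemoryWarning
    {
        // The default implementation may release the view if it can be recreated later
        [super didReceiveMemoryWarning];

        // Release anything expensive that you can re-create on demand
        [thumbnailCache release];
        thumbnailCache = nil;
    }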

OpenGL

OpenGL ES is the implementation of OpenGL on the iPhone. Basically it is a low-level C API that allows you to draw 2D and 3D graphics on the iPhone in various colors and textures. Triangles, lines, and points are the basic geometric primitives you use to compose graphics. When coding OpenGL ES you basically need to define all the vertices in two- or three-dimensional space, then define the color of each vertex, and then, via the EAGLContext, render the graphics. The color of a vertex is defined in RGBA8 format, which lets you specify red, green, and blue color channels plus an alpha transparency channel, using 8 bits per channel. The EAGLContext is the bridge between Cocoa Touch and OpenGL ES, and is what allows you to use OpenGL on the iPhone.

When coding OpenGL ES, you use various buffers. The frame buffer describes one frame of drawing. The render buffer contains the pixels produced by drawing commands sent to the frame buffer. When you draw to the screen, you really draw to a CAEAGLLayer object, and you use a timer to request drawing updates; for example, scheduling a timer to update the drawing 60 times per second gives you 60-frame-per-second rendering. Another important thing is that you must call setCurrentContext: on EAGLContext before performing any drawing commands. Last, according to Joe, OpenGL is not at all forgiving if you screw something up, for example if you have an empty buffer or mismatched vertex data in the buffer. When that occurs, your application simply crashes.
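
For flavor, here's roughly what drawing a single colored triangle looks like once your EAGLContext (the context variable below) and buffers are set up; the vertex and color values are arbitrary:

    #import <OpenGLES/ES1/gl.h>

    [EAGLContext setCurrentContext:context];

    const GLfloat vertices[] = {
         0.0f,  0.5f,    // x, y of each vertex
        -0.5f, -0.5f,
         0.5f, -0.5f,
    };
    const GLubyte colors[] = {    // one RGBA8 color per vertex
        255,   0,   0, 255,
          0, 255,   0, 255,
          0,   0, 255, 255,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
    glDrawArrays(GL_TRIANGLES, 0, 3);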

The lab exercise for OpenGL was to create a "Fireworks" application that randomly generates "fireworks" using OpenGL and simulates those fireworks exploding and then burning out. Pretty cool stuff, but man is it a lot of code just to create relatively simple things, because you must fully define all the geometry and colors and then use OpenGL functions to enable various drawing states and draw the geometry. You also of course need to implement logic to change the drawing over time; for example, once a firework explodes you need to define the logic to animate the particles, using lots of math and even some good old physics equations to compute position, velocity, and acceleration. I bet if they taught physics by having students implement particle engines on the iPhone more people would be into science!

Textures

After various types of pizza for lunch, including a really good barbecue pizza, we learned about using textures in OpenGL ES. You use per-pixel coloring to spread an image file across geometric primitives (triangles, points, and lines). Textures add depth to a scene, can be used to create shadow effects, and are also how you draw text using OpenGL. We extended our Fireworks application by adding a texture to each exploding particle. Essentially you pin an image file to your geometry. Since adding textures is still OpenGL coding, it is low level and requires a fair amount of code to define the mapping coordinates for the texture in order to pin it to the scene geometry. But the end result is pretty cool!

Multi-touch Events

Before getting into touch events, we took an afternoon hike. It was really nice, and I wished we had done the zip lines at Banning Mills today, since it is supposed to rain tomorrow. When we got back from the hike, we plowed into how to handle touch events on the iPhone. First up, Joe told us all about UIResponder, which is the base class for UIView, UIViewController, UIWindow, and UIApplication. Because of this inheritance relationship, subclasses automatically gain the ability to handle the different touch phases defined by UIResponder: touchesBegan, touchesMoved, and touchesEnded. These phases allow you to handle all kinds of touch events, including up to five simultaneous touches.

The UITouch object is what you work with when handling touches. By default, multi-touch is not enabled, so you have to enable it either in code using setMultipleTouchEnabled:YES or by setting the property in Interface Builder. Each of the touch callback methods, for example touchesBegan:withEvent:, sends you the touches as a set of UITouch objects along with a UIEvent object. You can use the event object to determine the number of touches and respond appropriately. In the lab exercise we extended the Fireworks application to respond to single touches, touch-and-drag, and multiple touches. When handling multiple touches you need to use math and geometry to figure out things like how far apart the touches are; in the Fireworks application we made it so the farther apart the touches, the faster the firework particles disperse.
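
Handling touches boils down to overriding the UIResponder phase methods in your view; something like this (the distance math is just an example):

    #include <math.h>

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        CGPoint where = [touch locationInView:self];
        NSLog(@"touch began at %f, %f (%d touches on screen)",
              where.x, where.y, (int)[[event allTouches] count]);
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        // With two touches, measure how far apart they are
        if ([[event allTouches] count] == 2) {
            NSArray *both = [[event allTouches] allObjects];
            CGPoint a = [[both objectAtIndex:0] locationInView:self];
            CGPoint b = [[both objectAtIndex:1] locationInView:self];
            CGFloat dx = b.x - a.x;
            CGFloat dy = b.y - a.y;
            NSLog(@"touches are %f points apart", sqrt(dx * dx + dy * dy));
        }
    }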

Core Graphics

Last for today was Core Graphics, which allows drawing in Cocoa Touch. Core Graphics is much simpler to use than OpenGL but is not as powerful. It is not designed to draw super fast animations and games like OpenGL is, and according to Joe should mostly be used for UIs where you need to do drawing.

You implement the drawRect: method to define your drawing commands. You draw into a CGContext graphics context. The main iPhone run loop, not you, is responsible for creating the graphics context and calling drawRect:; you simply define what to draw, not when to draw it. The basic drawing process is: first, get a reference to the CGContext graphics context. Second, create a CGMutablePathRef path object and draw points, lines, curves, and shapes into the path. Next, set the color on the graphics context, and finally stroke or fill the path.
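
A bare-bones drawRect: following that recipe (the coordinates and color are arbitrary):

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, 10.0, 10.0);
        CGPathAddLineToPoint(path, NULL, 100.0, 10.0);
        CGPathAddLineToPoint(path, NULL, 100.0, 100.0);
        CGPathCloseSubpath(path);

        CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);   // opaque red
        CGContextAddPath(context, path);
        CGContextStrokePath(context);

        CGPathRelease(path);
    }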

During drawing you can save and restore the state of a CGContext. For example, you might perform some drawing commands, then save the context state, change a few drawing attributes and perform several more drawing operations, restore the context state, and continue drawing using the original state. According to Joe, you should not save/restore state a lot or it can really slow down your application.

Core Graphics uses the "Painter's Method" when drawing objects on screen, meaning objects are drawn from back to front. Objects drawn later are drawn over top objects that were drawn earlier, effectively replacing the existing pixels. One other thing to mention is that, using Core Graphics, you can do 2D transforms applied to CGContext to perform rotation, translation, scaling, and skewing of the shapes on screen.

Random Thoughts

My brain is starting to hurt after three days of hardcore learning and coding. Several of us started watching the Bourne Ultimatum after dinner on the nice big flat-screen TV in the Banning Mills lodge to relax a bit.

OpenGL, while powerful, is not what I'd call the most fun API I've ever worked with, and it would probably take a while to learn the ins and outs and become an expert in it. Adding textures using OpenGL can really spice up your application and make it look better. Our Fireworks application went from exploding popcorn before adding texture to looking like real, exploding firework particles after the texture was added to the particles.

Being able to handle multi-touch events is just plain cool.

iPhone Bootcamp Day 2

Posted on December 02, 2008 by Scott Leberknight

Today is Day 2 of the iPhone bootcamp at Big Nerd Ranch.

See here for a list of each day's blog entries.

Localization

After a nice french toast breakfast — which my dumbass CEO Chris couldn't eat because he is allergic to eggs and complained a lot and made them give him a separate breakfast — we headed down to the classroom and started off learning about localizing iPhone apps. (UPDATE: The "dumbass CEO" comment was made totally as a joke, since he was sitting right next to me in class as I wrote it. Plus, I've known him for over 12 years so it's all good. So lest you think I don't like him or something, it's just a joke and is not serious!)

As with Cocoa, you basically have string tables, localized resources (e.g. images), and a separate XIB file for each locale you are supporting. (Interface Builder stores the UI layout in an XML format in a XIB file, e.g. MainWindow.xib.) This means that, unlike Java Swing for example, you literally define a separate UI for each locale. We localized the DistanceTracker application we built on day one for English and German locales. To start, you use the genstrings command line utility to generate a localizable string resource and then make it localizable in Xcode; this creates separate string tables for each language, which a translator can then edit to do the actual translation. You also need to make the XIB files localized and then redo the UI layout for each locale. Sometimes this might not be too bad, but if the locale uses, say, a right-to-left language then you'd need to reverse the position of all the UI controls. While having to create essentially a separate UI for each locale seems a bit onerous, it makes a certain amount of sense in that for certain locales the UI might be laid out completely differently, e.g. think about the right-to-left language example.

Finally, in code you use NSLocalizedString, which takes a string key and a comment intended for the translator that is put into the string tables for each locale. At runtime the value corresponding to the specified key is looked up based on the user's locale and displayed. If a value isn't found, the key is displayed as-is, which might be useful if you have a situation where all locales use the same string or something like that.
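
In code the lookup is a one-liner. Here's a made-up example, where "WelcomeMessage" is a key that genstrings would pull out into Localizable.strings and a translator would then fill in for each locale:

    // In your code: a key plus a comment for the translator
    label.text = NSLocalizedString(@"WelcomeMessage",
                                   @"Greeting shown on the main screen");

    // In the German Localizable.strings the translator would supply:
    /* Greeting shown on the main screen */
    "WelcomeMessage" = "Willkommen!";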

View Controllers

After localization we tackled view controllers. View controllers control a single view, and are used with UITabBarController and UINavigationController. We created a "navigation-based application" which sets up an application containing a UINavigationController. The difference between UITabBarController and UINavigationController is that the tab bar controller stores its views in a list and allows the user to toggle back and forth between views. For example, the "Phone" application on the iPhone has a tab bar at the bottom (which is where it is displayed in apps) containing Favorites, Recents, Contacts, Keypad, and Voicemail tab bar items. Tab bar items are always visible no matter what view is currently visible. As you click the tab bar items, the view switches. So, a UITabBarController is a controller for switching between views in a "horizontal" manner somewhat similar to the cover flow view in Finder. On the other hand you use UINavigationController for stack-based "vertical" navigation through a series of views. For example, the "Contacts" iPhone application lists your contacts; when you touch a contact, a detail view is pushed onto the UINavigationController stack and becomes the visible and "top" view. You can of course create applications that combine these two types of view controllers. In class we created a "To Do List" application having a tab bar with "To Do List" and "Date & Time" tabs. If you touch "To Do List" you are taken to a list of To Do items. Touching one of the To Do items pushes the To Do detail view onto the stack, which allows you to edit the item. Touching "Date & Time" on the tab bar displays the current date and time in a new view.
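
In code, the navigation part boils down to pushing view controllers onto the stack, and the tab bar controller just holds an array of view controllers. A sketch, with the class and variable names made up based on our lab:

    // When a To Do item is touched, push its detail view onto the navigation stack
    ToDoItemDetailViewController *detailController =
        [[ToDoItemDetailViewController alloc] init];
    [self.navigationController pushViewController:detailController animated:YES];
    [detailController release];

    // The tab bar controller simply holds the top-level view controllers
    tabBarController.viewControllers =
        [NSArray arrayWithObjects:toDoListNavController, dateTimeViewController, nil];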

It took me a while to get my head wrapped around combining the different types of view controllers in the same application and how to connect everything together. Since I am used to web app development, this style of development requires a different way of thinking. I actually like it better since it has a better separation of concerns and is more true to the MVC pattern than web applications are, but I'm sure I'll have to try to build more apps using the various view controllers to get more comfortable.

Table Views

We had a good lunch and after that headed back down to cover table views. Table views are views that contain cells. Each cell contains some data that is loaded from some data source. For example, the "Contacts" application uses a UITableView to display all your contacts. The "To Do List" application we created earlier also uses a UITableView to list the To Do items. Basically, UITableView presents data in a list style.

Table views must have some data source, such as an array, a SQLite database, or a web service. You are responsible for implementing data retrieval methods so that when the table view asks for the contents of a specific cell, you can offer up a cell containing the data appropriate for that row. Since the iPhone has limited real estate, UITableView re-uses cells that have been moved off screen, for example when they are scrolled out of view. This way, rather than creating new cells every single time UITableView asks for one, you first ask if there are any cells that can be re-used; if so, you populate the cell with new data and return it.
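
The cell re-use dance looks like this in the data source methods (the "ToDoCell" identifier and the toDoItems array are made up):

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
    {
        return [toDoItems count];
    }

    - (UITableViewCell *)tableView:(UITableView *)tableView
             cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        UITableViewCell *cell =
            [tableView dequeueReusableCellWithIdentifier:@"ToDoCell"];
        if (cell == nil) {
            // No reusable cell available, so create one
            cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                           reuseIdentifier:@"ToDoCell"] autorelease];
        }
        cell.text = [[toDoItems objectAtIndex:indexPath.row] title];
        return cell;
    }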

UITableView also provides some basic functionality, like the ability to drag cells, delete them, etc. You only need to implement the logic needed when events, like move or delete, occur. Another thing you typically do with table views is respond when a user touches a cell. For example, in the "To Do List" application a new "detail" view is displayed when you touch a cell in the table view. This is probably the most common usage; table views display aggregate or summary data, and you implement "drill down" logic when a user touches a cell. For example, when a user touches a To Do item cell, a new view controller is pushed onto the stack of a UINavigationController showing the "detail" view and allowing you to edit the To Do item. Another cool thing you can do is subclass UITableViewCell in order to lay out subviews in each cell. In the "To Do List" app, we added subviews to each cell to show the To Do item title, part of the longer item description in a smaller font, and an image to the left of the title if the item was designated as a "permanent" item. The "Photo Albums" iPhone application also uses subviews in its cells; it shows an image representing the album and the album title in each cell.

Saving and Loading Data Using SQLite

By this time, it was already late afternoon, and we hiked out to the old paper mill and back. It started getting colder on the way back as the sun began to set. When we got back, we learned all about saving and loading data using SQLite. First, we learned about the "Application Sandbox," which can only be read and written by your application, for security reasons. There are several locations where each iPhone application can store data. Within your application's sandbox (alongside the <AppName>.app bundle) there are Documents, Library/Preferences, Library/Caches, and tmp folders. Documents contains persistent data, such as a SQLite database, that gets backed up when you sync your iPhone. Library/Preferences contains application preference data that also gets backed up when you sync. Library/Caches, which Joe and Brian (the other instructor who is helping Joe this week) just found out about before our hike, is new in version 2.2 of the iPhone software, and stores data cached between launches of your application. The tmp directory is, as you'd expect, used for temporary files. You as the developer are responsible for cleaning up your mess in the tmp folder, however!

After discussing the sandbox restrictions, we learned how to locate the Documents directory using the NSSearchPathForDirectoriesInDomains function. Finally, we learned how to add persistence to iPhone applications using SQLite, which I continually misspell with two "L"s even though there is really only one! SQLite supports basic SQL commands like select, insert, update, and delete. The reason you need to know how to find the Documents directory is that you'll need to copy a default SQLite database shipped with your application into Documents. Basically, you supply an empty database in your application's Resources — this is read-only to your app. In order to actually write new data, the database must reside in a location (such as Documents) where you have write access. So, the first thing to do is copy the default database from Resources to the Documents directory, where you can then read and write it, and where it will automatically be backed up when the user syncs her iPhone.
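
The locate-and-copy step looks roughly like this (the database file name is made up):

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES);
    NSString *documentsDir = [paths objectAtIndex:0];
    NSString *dbPath = [documentsDir stringByAppendingPathComponent:@"todo.sqlite"];

    NSFileManager *fileManager = [NSFileManager defaultManager];
    if (![fileManager fileExistsAtPath:dbPath]) {
        // First launch: copy the read-only default database out of the app bundle
        NSString *defaultPath =
            [[NSBundle mainBundle] pathForResource:@"todo" ofType:@"sqlite"];
        [fileManager copyItemAtPath:defaultPath toPath:dbPath error:NULL];
    }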

We then added SQLite persistence to the "To Do List" application. While I think SQLite is cool in theory, actually using it to query, insert, update, and delete data is painful, as you have to handle all the details of connecting, writing and issuing basic CRUD queries, stepping through the results, closing the statements, cleaning up resources, etc. It feels like writing raw JDBC code in Java but possibly worse, if that's possible. Someone told me tonight there is supposedly some object-relational mapping library which makes working with SQLite more palatable, though I don't remember what it was called or if it is even an object-relational mapper in the same sense as say, Hibernate. Regardless, I persisted (ha, ha) and got my "To Do List" application persisting data via SQLite.
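
Just to give a sense of it, the raw SQLite calls feel roughly like this, using the dbPath computed in the sketch above (the table and column names are made up):

    #import <sqlite3.h>

    sqlite3 *database;
    if (sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK) {
        const char *sql = "SELECT title FROM todo_items";
        sqlite3_stmt *statement;
        if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                NSString *title = [NSString stringWithUTF8String:
                    (const char *)sqlite3_column_text(statement, 0)];
                NSLog(@"To Do: %@", title);
            }
            sqlite3_finalize(statement);
        }
        sqlite3_close(database);
    }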

WebKit

Joe apparently gave a short lecture on using WebKit in iPhone applications at this point. Unfortunately I decided to go for a run and missed the lecture. In any case, WebKit is the open source rendering engine used in Safari (both desktop and mobile versions) to display web content. You use UIWebView in your application and get all kinds of functionality out-of-the-box for hardly any work at all. UIWebView takes care of all the rendering stuff including HTML, CSS, and JavaScript. You can also hook into events, such as when the web view started and finished loading, via the UIWebViewDelegate protocol. In our lab exercises after dinner, we implemented a simple web browser using UIWebView and used an activity progress indicator to indicate when a page is loading.

In addition to loading HTML content in UIWebView, you can also load things like images and audio content. It is ridiculously easy to add a UIWebView to your application in order to display web content.
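
Loading a page and driving an activity indicator off the delegate looks roughly like this (webView and activityIndicator would be outlets or ivars in your controller):

    webView.delegate = self;
    [webView loadRequest:[NSURLRequest requestWithURL:
        [NSURL URLWithString:@"http://www.bignerdranch.com"]]];

    - (void)webViewDidStartLoad:(UIWebView *)webView
    {
        [activityIndicator startAnimating];
    }

    - (void)webViewDidFinishLoad:(UIWebView *)webView
    {
        [activityIndicator stopAnimating];
    }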

Random Thoughts

Another long day has come and gone. It is amazing how much energy everyone has to basically learn all day and keep going well into the night hours.

Today we learned about localization on the iPhone. The most important thing is that you need to create a separate UI for each locale you are supporting. This means you should not work on localizing until right before you are ready to ship; otherwise you'll spend all your time continually tweaking all the localized UI after every little change you make.

Tab and navigation view controllers are powerful ways to implement application navigation using a tab paradigm or a stack-based, guided navigation scheme, respectively. Combined with table views, you can accomplish a lot with just these three things.

While I think having a relational database available for persistence in your iPhone apps is nice, I really do not want to write the low-level code required to interact with SQLite; once you get used to using an ORM tool like Hibernate or ActiveRecord you really don't want to go back to hand-writing basic CRUD statements, marshaling result sets into objects and vice versa, and managing database resources manually. Guess I'll need to check into that SQLite library someone mentioned.

It is surprisingly easy to integrate web content directly into an iPhone application using UIWebView!

Tomorrow looks to be really cool, covering things like media and OpenGL. Until then, ciao!

iPhone Bootcamp Day 1

Posted on December 01, 2008 by Scott Leberknight

Today is the first day of the iPhone bootcamp at Big Nerd Ranch at Historic Banning Mills B&B in Whitesburg, GA. It is being taught by Joe Conway. My goal is to write a blog entry for each day of the class, so we'll see how that goes.

See here for a list of each day's blog entries.

Simple iPhone Application

We started off the course by creating a simple iPhone application using a bunch of UI controls that come out of the box. We dragged and dropped controls onto an iPhone Window application, hooked up some events in Interface Builder, and wrote a bit of event handling code. We ran the application on the simulator (i.e. not on our phones) and played around a bit with it. Cool to get something up and running in the first hour of a five day course!

App Icon and Default Image

After getting the initial iPhone app up and running, we added an icon for the application, which is what displays on the iPhone home screen, and added a default image, the purpose of which is to "fool" the user into thinking the application launched immediately without delay. In other words, when an iPhone app launches, the first thing that happens is the Default.png image is displayed. Joe mentioned some people use this for so-called "splash" screens, perhaps to display a company logo or some advertising. While this is nice in theory, Joe mentioned it is about the worst thing you can do: as Apple makes the iPhone faster over time, instead of a two or three second splash screen, you might end up with something that flickers into and out of existence in a split second, and users won't know what's going on. The best thing to do is use a screenshot of your application as the default image. By the time the user gets around to actually tapping something, the Default.png image will have been replaced by the actual application, and your app has the appearance of an immediate startup, which users always like.

Objective-C

Then it was on to a short introduction to the Objective-C language. Having attended the Cocoa bootcamp last April and read through Programming in Objective-C, I was already comfortable with Objective-C. You basically learn that Objective-C is a dynamically-typed language built on top of C, adding objects and messaging, i.e. you send messages to objects. We blew through the basics of creating objects, initializing them, and creating accessors. Thankfully Objective-C 2.0 added properties, which gets rid of the getter/setter method tedium. We briefly covered some of the basic classes like NSString, NSArray, and NSMutableArray. After all that, it was on to memory management. Although Apple introduced a garbage collector in Mac OS X Leopard, it is not available to iPhone applications, and you must manage object retain counts manually using retain, release, and the autorelease pool. This is tedious at best, but Joe provided several concrete and relatively simple rules to follow when writing iPhone apps, which I'm not going to delve into right now; suffice it to say that you have to pay a lot more attention to memory issues writing iPhone apps than, say, writing web apps in a garbage-collected language like Ruby, Java, or C#. We wrote several sample applications that demonstrated Objective-C basics, counting object references, and using the autorelease pool.
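
For reference, here's a tiny made-up class showing properties and the retain/release conventions:

    @interface Person : NSObject {
        NSString *name;
    }
    @property (nonatomic, retain) NSString *name;
    @end

    @implementation Person
    @synthesize name;

    - (void)dealloc
    {
        [name release];    // release what you retained
        [super dealloc];
    }
    @end

    // Usage: +alloc/-init hands you an object with a retain count of 1
    Person *person = [[Person alloc] init];
    person.name = @"Scott";      // the property setter retains the new value
    NSLog(@"Hello, %@", person.name);
    [person release];            // you own it, so you release it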

Using Text Controls

Next up was the chapter on "text" on the iPhone, which focused on using the UITextField and UITextView controls. For this section we created an application that lets you search a large block of text in a UITextView for specific text typed into a text field. During this section we learned how to deal with the (virtual) keyboard and how it automatically appears when a text UI control becomes the "first responder," such as when the user touches it. Joe showed how the responder chain passes an event along until some object handles it, or, if no object is interested, the event simply and silently drops off into nothingness. (For Gang of Four aficionados, this would be the Chain of Responsibility pattern. Does anyone actually care anymore?) This allows you to delegate events up a chain of objects. When a text-capable object becomes the first responder, the virtual keyboard automatically appears for the user to type something; to remove it, you write code to resign first responder status. Last in the text section was using notifications and the notification center to observe when the keyboard is about to show (be displayed on screen) and write a log message.
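
The keyboard bits in code: dismissing the keyboard is just resigning first responder status, and watching for the keyboard is an ordinary notification observer (searchField here is a made-up text field outlet):

    // Dismiss the keyboard when the user is done typing
    [searchField resignFirstResponder];

    // Get notified just before the keyboard slides onto the screen
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWillShow:)
                                                 name:UIKeyboardWillShowNotification
                                               object:nil];

    - (void)keyboardWillShow:(NSNotification *)note
    {
        NSLog(@"keyboard is about to appear");
    }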

Delegates

Even though it was cold outside (about 40 degrees Fahrenheit) we took a 30 minute hike around 3 o'clock and got refreshed. Then we continued to learn about using delegates and protocols. Basically a delegate handles certain functionality passed off to it by another object. The delegate essentially extends the functionality of an object without needing to resort to all kinds of subclassing everywhere; in other words it is a way to perform callbacks and extend functionality. For example, an XML parser knows how to parse the XML and it might send messages to a delegate object. The delegate object in this case knows what to do when it sees specific elements or attributes, while the parser remains completely generic. This enables re-use of the parser without it needing to know anything about how your application actually responds to various elements and attributes. We extended our text search application using delegates to perform the actual searching logic as well as resigning and becoming the first responder.

Core Location

The last, and coolest, topic today was Core Location, which is the framework that allows your iPhone to figure out where you are in the world. We wrote an application that uses your current location and tracks the distance you've traveled after you enable tracking. Since I have not actually enabled my iPhone device yet, I had to run this only on the simulator, which was not quite as cool since it only reports one location (which IIRC was the lat/long of Apple's headquarters, but I might have caught that wrong). Basically you use a CLLocationManager which sends location updates to a delegate (good thing I paid attention to the section on delegates earlier); the delegate does pretty much whatever it wants with them. Again, the delegate implements application-specific logic and the CLLocationManager just sends you the updated location information, resulting in a clean separation of concerns. You can configure the manager for the level of accuracy you'd like, for example ten meters, a hundred meters, or the "best" possible accuracy. The higher the accuracy, the faster your battery will drain, so setting this to best accuracy and leaving it on continuously might not be the best thing to do. Joe also mentioned that if you turn off the screen using the top button, the active application can still be doing things, so you want to make sure you listen for the "application will resign active" event and stop updating the location to prevent excessive battery drain! (Maybe that's what happened to me a few weeks ago when my full battery completely drained overnight.) You can also configure a distance filter if, for example, you only want to receive updates after the phone has moved a certain distance. Cool stuff!
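
A sketch of the Core Location setup and delegate callback; the accuracy and distance filter values are just examples:

    CLLocationManager *locationManager = [[CLLocationManager alloc] init];
    locationManager.delegate = self;
    locationManager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters;
    locationManager.distanceFilter = 50.0;    // only report moves of 50+ meters
    [locationManager startUpdatingLocation];

    - (void)locationManager:(CLLocationManager *)manager
        didUpdateToLocation:(CLLocation *)newLocation
               fromLocation:(CLLocation *)oldLocation
    {
        NSLog(@"now at %f, %f",
              newLocation.coordinate.latitude, newLocation.coordinate.longitude);
    }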

Provisioning Profiles

After dinner we returned to the classroom to set up our provisioning profiles, which is going to allow those of us who have not yet registered with Apple to actually run apps on our iPhones (known as "devices"). I am planning to buy my developer certificate but for now Big Nerd Ranch has provisioning profiles for students since apparently it is now taking a day or two to get your developer certs and other required stuff.

Random Thoughts

After a long day, I have a few random thoughts in no particular order. Interface Builder is really, really nice for building UIs. Of course I already knew this having taken the Cocoa bootcamp last April. Regardless, building UIs using a sophisticated and refined tool like Interface Builder is sooooooo much nicer than the current kludge of web technologies that splatter HTML, CSS, and JavaScript all over the place and various data exchange formats like XML, JSON, or whatever combined with eight different JavaScript frameworks and a potentially very different server-side programming model and finally trying to jam it all together. Can you tell I've been doing web development for a while?

Delegates and Core Location were probably the coolest things from today, and it is really nice that after only one day I can build iPhone apps, even if they are pretty simple. Then again, it is really cool how easy it is to integrate location into an iPhone application. Objective-C as a language is actually not all that bad, and is really easy to read since the methods (at least the ones from the Apple SDK) tend to be very well named. Of course I like the fact that Objective-C is dynamically typed and I don't have to be told by the compiler what I can and cannot do at every step of the way, e.g. I can send any message to any object and so long as it responds, no problem. Of course the code does still have to compile in the Xcode IDE, so it isn't a total dynamic-language free-for-all. The thing I don't like is having to manually manage memory. After doing malloc and free in C on a VAX for the first several years out of college, I was quite happy to not have had to do that for over 10 years. Oh well, I suppose the autorelease pool and simply sending retain and release messages is better than malloc and free.

Big Nerd Ranch is all about coding. While there is lecture, you code hands-on most of the time, and this is why you learn so much. The hikes really relieve the tendency to want to go to sleep after lunch! It's 10:25 and I've finally gotten my iPhone provisioned and the apps we developed today all working directly on the phone. I also finally got the SDK updated to the latest version and updated my phone's software to version 2.2. All in all a great first day!

iPhone Bootcamp Blogs

Posted on November 29, 2008 by Scott Leberknight

Check out my blog entries this week while I'm attending the iPhone bootcamp at Big Nerd Ranch.