How to Debug iOS Extensions using Xcode

This video is a tutorial showing how to debug an iOS share extension using Xcode. In the tutorial, the extension is created to accept documents from any application, and Microsoft Word for iOS is used as the document share source. The objective of the technique is to set breakpoints and view debug logs within the extension after Microsoft Word sends a document to the extension.

These techniques are applicable to many scenarios where you need to debug processes that are embedded within your main iOS application bundle but run outside the main process on the device.

The video demonstrates two techniques:

  1. Using the debugger to wait for the extension’s process to be started by the external application.
  2. Starting the debugger with the extension’s scheme.

Both techniques are useful, and the demo highlights some advantages of the second approach.

Using WebKit to call WKWebView Javascript from Swift and Swift from Javascript

Many mobile applications incorporate remote web pages, either as passive (static) content or, as in this case, as integral parts of the UI. Using the WebKit/WKWebView techniques presented here, your native apps can be better integrated with web content and provide a superior experience to end-users.

Two-way Integration between Swift and JavaScript

In this article we’ll build a full working example of a hybrid native/web application that uses two-way function calls between a native iOS app (written in Swift) and a mobile web page (written in HTML/JavaScript).

Leveraging these two features allows us to build a highly robust, hybrid application where the native and web components cooperate as equal partners in delivering a valuable customer solution.

Solution Overview

The finished solution consists of two components:

  1. A native iOS application, developed in Swift
  2. A static HTML/JavaScript web page hosted on a remote web server (in this case, Microsoft Azure).

The finished learning app implements three main features:

#1 — Loading the web page from the remote server. If you’ve used a WKWebView, you know all about this feature. As the UIViewController loads, a web page URL is set on the WKWebView, which uses an HTTP GET to fetch the HTML content.
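Feature #1 needs nothing beyond the standard WKWebView loading pattern. As a quick sketch (the URL and property names below are placeholders, not values from the article’s project):

```swift
import UIKit
import WebKit

class WebPageViewController: UIViewController {
    var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        webView = WKWebView(frame: view.bounds)
        view.addSubview(webView)

        // Placeholder URL; the article's page is hosted in Microsoft Azure
        if let url = URL(string: "https://example.azurewebsites.net/index.html") {
            webView.load(URLRequest(url: url))  // performs the HTTP GET described above
        }
    }
}
```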

#2 — Manipulate WebView appearance from Swift. Next we gently wade into the interop waters by sending a command to the WKWebView content page to change the page background color according to a user selection in a native Segment control.

#3 — Callback to Swift from HTML/JavaScript. Finally, we make the solution more complex and interesting by exposing a geolocation function in the native iOS application to the web page. When the user enters an address and presses a button on the web page, the following happens:

  1. The web page (using JavaScript) calls a Swift function, passing in the user-entered address as a JSON object.
  2. The Swift native app makes an asynchronous call to Apple using CLLocation, to determine the latitude & longitude of the user-entered address.
  3. When the latitude/longitude are returned from Apple, the Swift native app calls a JavaScript function in the web page to update the web page with the latitude/longitude for the entered address.

Solution Demo

Before walking through the code, let’s demo what the completed application looks like (animated GIF).

UI Storyboard Design

The learning application contains a single UIViewController named ViewController. ViewController has only two UI controls in the Storyboard:

  1. A UISegmentedControl which allows the user to change the WebView background color to one of five colors.
  2. A UIView, which is placed in the Storyboard to serve as a container view for the WKWebView control.

Changing Web Page Color

To wade into the hybrid solution water, let’s implement a simple call from Swift to the WKWebView.

ViewController has a member array of colors corresponding to the color choices in the Segment control at the top of the native view.

let colors = ["black", "red", "blue", "green", "purple"]

When the user taps a new segment in the Segment control, an event handler calls the JavaScript function changeBackgroundColor, passing the string corresponding to the user selection:

@IBAction func colorChoiceChanged(_ sender: UISegmentedControl) {
    webView.evaluateJavaScript("changeBackgroundColor('\(colors[sender.selectedSegmentIndex])')",
        completionHandler: nil)
}

The Swift code doesn’t really know that the web page has a JavaScript routine named changeBackgroundColor. Its job is to format a JavaScript fragment that will successfully run in the WebView.

The HTML content in the WKWebView has the matching JavaScript routine, which simply sets the background color of the page to the string passed to it from Swift:

function changeBackgroundColor(colorText) {
    document.body.style.backgroundColor = colorText;
}

Setting up a Message Handler

The next feature is to send a user-entered address from the HTML page to the native Swift app for geocoding. There are three steps to implement this feature:

  1. Add a message handler to the WKWebView’s WKUserContentController. This establishes a contract that promises that the Swift code can respond to the named message handler when it’s called from the HTML page via JavaScript.
  2. Implement the WKScriptMessageHandler delegate method didReceive message to receive the call from JavaScript.
  3. Call the message handler from the web content JavaScript.

Create a Message Handler (1)

// A
let contentController = WKUserContentController()
contentController.add(self, name: "geocodeAddress")
// B
let config = WKWebViewConfiguration()
config.userContentController = contentController
// C
webView = WKWebView(frame: webViewContainer.bounds, configuration: config)

A WKUserContentController is created at (A). The contentController holds the registration of the geocodeAddress message handler.

The WKUserContentController is added to a new WKWebViewConfiguration at (B).

Finally (C), as the WKWebView is instantiated, the configured WKWebViewConfiguration created in (B) is passed in to the initializer.

Implement the WKScriptMessageHandler delegate (2)

Now that the geocodeAddress handler is registered with the WKWebView, we need to implement a delegate method that is called whenever the web page posts a message to a registered handler.

In this solution, an extension is defined to implement the WKScriptMessageHandler protocol on the ViewController class.

extension ViewController: WKScriptMessageHandler {
    func userContentController(
            _ userContentController: WKUserContentController,
            didReceive message: WKScriptMessage) {
        if message.name == "geocodeAddress",
           let dict = message.body as? NSDictionary {
            geocodeAddress(dict: dict)
        }
    }
}
The didReceive handler checks whether the message name is as expected (geocodeAddress), and if so extracts the JSON object from the message body (as an NSDictionary), and calls the ViewController instance method geocodeAddress.

Note that the message handler is stringly typed, so be careful that the string comparison in didReceive properly matches the original message handler registration made with the WKUserContentController.
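One way to guard against such a mismatch (a suggested refactoring, not part of the article’s code) is to share a single constant between the registration and the delegate check:

```swift
import WebKit

// Hypothetical shared constant; the article's code uses string literals in both places
enum MessageNames {
    static let geocodeAddress = "geocodeAddress"
}

func makeContentController(handler: WKScriptMessageHandler) -> WKUserContentController {
    let contentController = WKUserContentController()
    // Registration and the delegate's comparison now use the same symbol,
    // so a typo becomes a compile-time error instead of a silent no-op
    contentController.add(handler, name: MessageNames.geocodeAddress)
    return contentController
}
```

In the delegate, the check then becomes `message.name == MessageNames.geocodeAddress`, keeping both sides of the stringly-typed contract in one place.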

Calling geocodeAddress from the HTML/JavaScript page (3)

In HTML, the form’s INPUT button calls a JavaScript function called geocodeAddress:

<input type="submit" value="Geocode Address" onclick="geocodeAddress();">

The body of the JavaScript geocodeAddress function responds by calling the Swift Message Handler of the same name, passing in address details as a JSON object.

function geocodeAddress() {
    try {
        window.webkit.messageHandlers.geocodeAddress.postMessage({
            street: document.getElementById("street").value,
            city: document.getElementById("city").value,
            state: document.getElementById("state").value,
            country: document.getElementById("country").value
        });
        document.querySelector('h1').style.color = "green";
    } catch(err) {
        document.querySelector('h1').style.color = "red";
    }
}
Note: In the JavaScript geocodeAddress() function, the H1 style changes are merely here for testing purposes and are not part of the actual solution.

Passing back Latitude/Longitude to the HTML page

So far, the HTML page has accepted an address entry from the user in a series of INPUT fields, and sent it to the native Swift application. Now let’s complete the final requirement — geocoding the address and returning it to the web page UI.

Recall that the Swift message handler calls a Swift function called geocodeAddress(dict:) to do the heavy-lifting of geocoding the address.

func geocodeAddress(dict: NSDictionary) {
    let geocoder = CLGeocoder()

    let street = dict["street"] as? String ?? ""
    let city = dict["city"] as? String ?? ""
    let state = dict["state"] as? String ?? ""
    let country = dict["country"] as? String ?? ""

    let addressString = "\(street), \(city), \(state), \(country)"
    geocoder.geocodeAddressString(addressString,
        completionHandler: geocodeComplete)
}

This part of the solution is straightforward CoreLocation. After the geocodeAddressString asynchronous function sends the address to Apple, the response is provided to the Swift method geocodeComplete:

func geocodeComplete(placemarks: [CLPlacemark]?, error: Error?) {
    if let placemarks = placemarks, placemarks.count > 0 {
        let lat = placemarks[0].location?.coordinate.latitude ?? 0.0
        let lon = placemarks[0].location?.coordinate.longitude ?? 0.0
        webView.evaluateJavaScript("setLatLon('\(lat)', '\(lon)')",
            completionHandler: nil)
    }
}

This method checks to make sure at least one placemark was found for the provided address, extracts the latitude and longitude from the first placemark, and then sends them back to the HTML page by calling its setLatLon JavaScript function.

Updating the HTML page

The process of sending the latitude/longitude back to the web page is functionally identical to the previous feature which set the background color.

The setLatLon JavaScript function is implemented as follows:

function setLatLon(lat, lon) {
    document.getElementById("latitude").value = lat;
    document.getElementById("longitude").value = lon;
}

As with the background color function, setLatLon simply sets the HTML form’s INPUT field values to the passed parameter values.


The most common use of WKWebView is to provide a simple display of web content within the context of a native iOS application — but it can do much more. In this article we’ve seen how to incorporate web and native components to build enhanced native applications, or even hybrid native/web applications.

Download the Code

The above fragments provide the core functionality for the learning solution. The full Xcode project can be downloaded from GitHub here.

Flexible and Easy Unit Testing of CoreData Persistence Code

Modern and high-quality iOS applications are expected to perform flawlessly. An important input to ensuring flawless, regression-resistant code is to add comprehensive unit and integration testing as part of the development process. This article steps through a methodology for building repeatable, automated database unit tests for iOS applications using CoreData as their persistence layer.

Intended Audience

This article assumes you know the basics of using CoreData in an iOS application, and have probably used it in your own work. However, the focus of this article is architectural, and even if you don’t know how to code with CoreData, the concepts here should still make sense if you understand the basics of data persistence and unit testing in iOS.

Code Samples

The code and concepts in this article were developed with Xcode 10 (beta) and Swift 4.2.

This article includes code excerpts to illustrate the concepts, but rather than embed all the code for this solution within the article text, a link to an example application in my GitHub repository is provided at the end of this article.

What is CoreData?

CoreData is the default local persistence choice for iOS (and macOS) applications. CoreData is fundamentally an object-relational mapping (ORM) layer over a persisted data store. While the physical storage of CoreData objects is abstracted from the developer, CoreData is almost always used with SQLite.

If you’re new to CoreData, or just need a refresher, there are many great resources out there, such as Apple’s own Core Data Programming Guide and the Getting Started with Core Data tutorial.

How CoreData Fits in an iOS Application

The following is a highly simplified diagram of how a typical application accesses CoreData. I’ll discuss each element of the architecture below.

AppDelegate. This object represents the entry point of an iOS application, and should already be familiar to all iOS developers. If you create a project with the Use CoreData option in Xcode 10, Xcode will create a basic CoreData stack for you. Within the AppDelegate object, you’ll find the following property declared.

This property is, essentially, the hook your application uses to access data managed by CoreData.

class AppDelegate: UIResponder, UIApplicationDelegate {
    lazy var persistentContainer: NSPersistentContainer = { ... }()
}

An NSPersistentContainer property has within it a setting that specifies whether its data should be saved to SQLite disk files (NSSQLiteStoreType), memory (NSInMemoryStoreType) or somewhere else (NSBinaryStoreType). The latter case is uncommon, and I won’t discuss it in this article. When no setting is specified (the default), NSSQLiteStoreType is used by CoreData to configure the container.

<projectname>.xcdatamodel. When creating a project with CoreData support, Xcode will automatically create a data model file, with a root name matching the new project name and the extension xcdatamodel. The Xcode data model editor stores your evolving design in this file, and uses this metadata file to generate low-level CoreData entity model classes for you. In Xcode 10, the generated model classes will automatically be available to your XCTest target (which was not the case in some older versions of Xcode, so yay!).

StorageManager. While it’s certainly possible and acceptable to access CoreData and the auto-generated entity model classes directly throughout your application, it’s quite common to encapsulate data operations in a service class, which is what I’ve done in this architecture. This approach simplifies data access code for the rest of the application, and provides some degree of encapsulation in case the underlying database physical layer changes in the future.

As the StorageManager object is initialized (the numbers in these bullets refer to the red circles in the diagram):

  • It uses the .xcdatamodel (1) generated model classes to perform underlying database access.
  • It will use the global persistentContainer object (2) instantiated in the AppDelegate class, which uses the default SQLite (3) backing for data storage.
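A StorageManager along these lines might look like the following sketch. The article’s actual implementation lives in the linked GitHub project, so the entity name, method bodies, and the “Place” model here are assumptions:

```swift
import UIKit
import CoreData

// Sketch of the StorageManager pattern described above; details are illustrative
class StorageManager {
    let container: NSPersistentContainer

    // Unit tests can pass a custom (e.g. in-memory) container here
    init(container: NSPersistentContainer) {
        self.container = container
    }

    // The production app uses this convenience initializer, which falls back to
    // the global persistentContainer created in AppDelegate (default SQLite store)
    convenience init() {
        let appDelegate = UIApplication.shared.delegate as! AppDelegate
        self.init(container: appDelegate.persistentContainer)
    }

    func insertPlace(city: String, country: String) {
        let context = container.viewContext
        // "Place" is a hypothetical entity name for this sketch
        let place = NSEntityDescription.insertNewObject(forEntityName: "Place", into: context)
        place.setValue(city, forKey: "city")
        place.setValue(country, forKey: "country")
        try? context.save()
    }
}
```

The key design point is that StorageManager never decides where data is stored; it simply uses whatever container it was handed.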

Production App Code (e.g. ViewController). This box in the diagram represents wherever data is fetched or saved within the app. This may be code within a View Controller, View Model, or other classes you write yourself. In this architecture, all such accesses are made by calling methods of the StorageManager object, rather than interacting directly with CoreData.

SQLite DB. In the production app, StorageManager fetches data from, and writes changes to, physical files stored in the app’s sandbox, indicated by (3) in the above diagram. These changes are not in RAM, and the database persists between runs of the program.

The main goal for this article is to create a hybrid architecture where the persistent SQLite database is used for the production app, while a volatile in-memory database is used for unit testing.

Repeatable Unit Tests vs Persistent Disk Storage

A basic requirement for unit tests is that the application state should be exactly the same at the beginning of each run of a unit test. Having a disk-based SQLite database presents a challenge to this requirement: the database files are by definition persistent, so each test run alters their state and fails to guarantee that every test starts from the same baseline.

That said, we could simply and easily add unit tests to the project, using the existing CoreData configuration. The resulting architecture would be as follows:

In this approach both the production app and the unit test target use the same StorageManager and xcdatamodel generated model classes. This is good because the data access objects and calling methods are unchanged.

The problem, though, is that both app and test targets will use the same Container type, which is configured with the default SQLite setting (1), resulting in the unit tests using a disk-based data store (2) that won’t start in the same state for all test runs — without writing additional pre-test initialization code.

Unit Testing with a SQLite-backed container

We could deal with this challenge by reinitializing the database, perhaps in one of the following ways:

  1. Truncate all tables
  2. Delete and recreate the disk file(s) associated with the database before each unit test

Either approach may be reasonable, and should ensure the state of all disk files would be the same before every unit test. But each of these approaches requires additional code to achieve, and may need additional maintenance as the database evolves over time. If only there was an easier way — and there is!
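For completeness, option 2 above might be sketched using NSPersistentStoreCoordinator’s destroy/add pair (storeURL is a placeholder for the app’s actual database file location):

```swift
import CoreData

// Sketch of option 2: destroy and recreate the SQLite store before each test.
// storeURL is a placeholder for the app's actual SQLite file location.
func resetSQLiteStore(for container: NSPersistentContainer, at storeURL: URL) throws {
    let coordinator = container.persistentStoreCoordinator
    try coordinator.destroyPersistentStore(at: storeURL,
                                           ofType: NSSQLiteStoreType,
                                           options: nil)
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: nil)
}
```

A setUp() override would need to call something like this before every test — exactly the kind of extra maintenance code this article sets out to avoid.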

By leveraging CoreData’s container abstraction, we have a third — and more elegant — approach that requires no physical disk file manipulation at all.

Using In Memory Persistence for Unit Tests

To give unit tests a clean, consistent environment before each test begins requires only a minor change to the existing code base. In fact, if you compare the architecture diagram below to the previous one, you’ll note that there are no additional code modules.

The coding change is to create a custom NSPersistentContainer within the Unit Test code — one which continues to use the xcdatamodel-generated CoreData model classes, but provides a PersistentContainer configured to use a volatile, in-memory persistent storage component. This is where CoreData’s abstraction between the programming model and physical storage model comes into play.

When the Unit Test is run, the custom, in-memory backed container is passed to the Storage Manager (1), which is configured for in-memory data storage.

By contrast, the production app initializes a StorageManager without passing a Container object. In this case, StorageManager uses the Container configured in AppDelegate (2), which uses the default SQLite container type.

CoreData will use SQLite or in-memory for database access automatically (3) depending on the container configuration.

NSPersistentContainer initialization in the production App

The key to making this strategy work is to initialize StorageManager differently depending on whether it’s being used from the main App target or the Unit Test target. The following are simplified versions of the initializations for each case.

When the production app target accesses the database, it always uses the persistentContainer created as a global property of AppDelegate, illustrated in the following abridged code excerpt.

Note that this initialization is very simple, and CoreData will use its default SQLite storage configuration.

Abridged AppDelegate excerpt

class AppDelegate: UIResponder, UIApplicationDelegate {
    lazy var persistentContainer: NSPersistentContainer = {
        let container = NSPersistentContainer(name: "CoreDataUnitTesting")
        container.loadPersistentStores(completionHandler: { (storeDescription, error) in
            if let error = error {
                fatalError("Unresolved error \(error)")
            }
        })
        return container
    }()
}
To use this default SQLite CoreData stack, application code needs only to create a StorageManager instance and call its methods. StorageManager will use the AppDelegate.persistentContainer whenever a custom container is not provided.

Abridged ViewController excerpt

class ViewController: UIViewController {
    @IBAction func saveButtonTapped(_ sender: Any) {
        let mgr = StorageManager()

        if let city = cityField.text, let country = countryField.text {
            mgr.insertPlace(city: city, country: country)
        }
    }
}
NSPersistentContainer initialization in a Unit Test

When data is accessed by a unit test, the unit test target creates its own custom Container, then passes it to the StorageManager class initializer.

StorageManager doesn’t know that the persistent layer will be in-memory (and it doesn’t care). It just passes the container it’s given to CoreData, which handles the underlying details.

The following is a simplified example of the Unit Test class.

CoreDataUnitTestingTests Excerpt

class CoreDataUnitTestingTests: XCTestCase {

    // this class instantiates its own custom storage manager, using an in-memory data backing
    var customStorageManager: StorageManager?

    // Using the in-memory container for unit testing requires the xcdatamodel to be loaded from the main bundle
    var managedObjectModel: NSManagedObjectModel = {
        let managedObjectModel = NSManagedObjectModel.mergedModel(from: [Bundle.main])!
        return managedObjectModel
    }()

    // The customStorageManager specifies in-memory storage by providing a custom NSPersistentContainer
    lazy var mockPersistentContainer: NSPersistentContainer = {
        let container = NSPersistentContainer(name: "CoreDataUnitTesting", managedObjectModel: self.managedObjectModel)
        let description = NSPersistentStoreDescription()
        description.type = NSInMemoryStoreType
        description.shouldAddStoreAsynchronously = false

        container.persistentStoreDescriptions = [description]
        container.loadPersistentStores { (description, error) in
            if let error = error {
                fatalError("Failed to create an in-memory store: \(error)")
            }
        }
        return container
    }()

    // Before each unit test, setUp is called, which creates a fresh, empty in-memory database for the test to use
    override func setUp() {
        super.setUp()
        customStorageManager = StorageManager(container: mockPersistentContainer)
    }

    // Example of how a unit test uses the customStorageManager
    func testCheckEmpty() {
        if let mgr = self.customStorageManager {
            let rows = mgr.fetchAll()
            XCTAssertEqual(rows.count, 0)
        } else {
            XCTFail("StorageManager was not initialized")
        }
    }
}
Note the following points in the preceding code sample:

  1. A key difference is the NSPersistentContainer definition vs. the AppDelegate version. This version overrides the default SQLite storage behavior with the optional in-memory storage.
  2. Since the xcdatamodel used for testing is part of the main app bundle, it’s necessary to reference it explicitly by initializing an NSManagedObjectModel. This was not necessary in AppDelegate, since the model and container exist in the same namespace.
  3. The initialization of StorageManager includes the in-memory container, whereas in the previous ViewController code, StorageManager’s convenience initializer that takes no parameters was used to initialize the CoreData stack with the default SQLite container.


While there’s always more than one way to achieve a solid testing architecture, and this isn’t the only good solution, this architectural approach has some distinct advantages:

  1. By using in-memory (rather than SQLite) for unit testing, we know for certain that there are never remnants of prior tests included in the database that we’re testing code against.
  2. Using in-memory eliminates the need to write and maintain code that clears data objects or deletes physical files before tests run. By definition, we get a fresh, new database for every run of every unit test.
  3. If we’re already using a StorageManager pattern to encapsulate CoreData calls (which is a good practice anyway), this pattern can be applied to existing projects merely by adding a convenience initializer to the StorageManager object!
  4. This approach can be achieved entirely using out-of-the-box Xcode and iOS SDK components.

Get the Code

The code for a full, runnable sample application that incorporates the above architecture is available in my GitHub account. Use this for further study of this technique, and/or as a boilerplate for your own projects.

GitHub CoreDataUnitTesting Repository

My Favorite WWDC 2018 Sessions

Every year I look forward to WWDC — it’s like Christmas morning for Apple developers, where we get to take the wrapping paper off the next version of Xcode and the various iOS, tvOS, macOS and watchOS SDKs.

This year is no different! The press focuses more on the operating systems themselves. But I’m a lot more interested in what SDK goodness is coming down the line to provide more tools and hooks to build even better software! 2018 hasn’t disappointed at all!

Here are the top five sessions I saw in terms of value to me personally this year:

Platform State of the Union

Always my first stop, to get the executive vision of where the platform is heading.

Practical Approaches to Greater App Performance

High-value session packed with practical techniques based on real-world experience. Excellent, immediately applicable knowledge!

Building Faster in Xcode

Lately I’ve been working on more complex projects, developing frameworks and working in larger codebases. This session was quite enlightening in terms of how to solve for dependencies and speed up the build process.

What’s New in Swift

As Swift continues to evolve, yet mercifully more slowly now, we have to keep up! Last year I developed instructional content for Packt Press where I had to really understand every nuance of Swift as part of that effort, and I’m always up to learn and start using the new language features.

Introduction to Siri Shortcuts

I did some work with Siri in the past, and have to admit being disappointed it was so limited to specific domains (none of which I work with!). I’m really excited to see Siri start to branch out, and found this session really informative.

Creating simple frame animations for Android using Kotlin

User Interface Animation is a technique that can really make any mobile application pop off the screen, making almost any app feel more fluid and engaging. This article is a walk-through for using Android’s AnimationDrawable to add simple frame animations.

What is AnimationDrawable?

AnimationDrawable is a built-in Android class (since API Level 1) used to create frame-by-frame animations with a list of Drawable objects as the source for each frame in the Drawable Animation.

While any Drawable resource can theoretically be used with AnimationDrawable, it’s most often used with raster images (e.g. png files) — which is what I’ll demonstrate in this walk-through.

What We’ll Build

In this walk-through, I’ll build an application that shows an animation of a robot walking. This is a simple frame animation that has a little fun with the UI while demonstrating how frame animations can still look fluid. Here’s the completed UI:

Note: this is an animated GIF; if using the Medium app on a mobile device, open this article in a browser to view the animation.

Robot Walker App Demo

To make following along easier, the source code for the completed application can be downloaded from my github account here.

The AnimationDrawable Class

AnimationDrawable was added in API version 1, so this is a technique that will work with virtually any Android application. Using AnimationDrawable is fairly simple. The overall process is as follows:

  1. Create a Drawable resource in your application, which contains a list of item elements, one per animation frame.
  2. Assign the Drawable in step 1 to the container element where it will appear — commonly this is the background of an ImageView.
  3. Call the start method on the AnimationDrawable to begin the frame animation.

Note: a common mistake when using AnimationDrawable is to attempt to start the animation in the onCreate method — before the AnimationDrawable is fully attached to the Window. When this is done, typically the first frame is displayed, but the image doesn’t animate. The Android documentation provides the following warning:

“Note: Do not call this in the onCreate(Bundle) method of your activity, because the AnimationDrawable is not yet fully attached to the window. If you want to play the animation immediately without requiring interaction, then you might want to call it from the onWindowFocusChanged(boolean) method in your activity, which will get called when Android brings your window into focus.”

In the walk-through app, I’ll be calling start from a button press handler, which is also a perfectly safe way to approach this.

Creating the Drawable Resource

Creating the resource is fairly straightforward. For the RobotWalker application, a resource is added to the Drawable folder containing a single animation-list element, which in turn contains one item per animation frame.

<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
    android:oneshot="false">
    <!-- Drawable names here are illustrative; the sample app has about 40 frames -->
    <item android:drawable="@drawable/robot_frame_01" android:duration="50" />
    <item android:drawable="@drawable/robot_frame_02" android:duration="50" />
    <!-- etc. -->
</animation-list>

Each item contains a drawable key that specifies a related Drawable object to use for that frame, and a duration (in milliseconds) for that frame to be displayed before the animation moves to the next frame.

The oneshot attribute at the animation-list level is set to true when the animation should play once and then stop, or false when the animation should repeat from the first frame after it reaches the end.

Using the Drawable Resource in Kotlin

With the Drawable created in the res/drawable folder, all that’s left is to use the resource in your program.

Within main_activity.xml of the source project, I’ve added an ImageView(#1) and two Button objects: one to start the animation from the beginning (#2), and the other to stop animating (#3). The design view of the main activity is as follows:

Setting the AnimationDrawable as the background for the ImageView is the simplest and most common approach — which is what I’ve done here.

The final step is to add a listener for each button, and then to call the start and stop methods within the listeners.

The final Kotlin code is as follows:


import android.graphics.drawable.AnimationDrawable
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
// Kotlin Android Extensions synthetic view references (assumed by this project)
import kotlinx.android.synthetic.main.main_activity.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.main_activity)

        startWalking.setOnClickListener {
            val bgImage = imageView.background as AnimationDrawable
            bgImage.start()
        }

        stopWalking.setOnClickListener {
            val bgImage = imageView.background as AnimationDrawable
            bgImage.stop()
        }
    }
}
I hope this how-to was helpful, and gets you started using simple frame animation that works with any version of Android! If this was helpful, please tap the clap button and let me know!

To download the referenced project source, click on this link to my GitHub account.

Understanding UI Testing using iOS, Xcode 9 and Swift

Xcode provides a fully-featured, scriptable UI Testing framework. A key to using the framework is understanding its architecture and how to best leverage its capabilities.

Understanding an Xcode UI Test

When you create a new project in Xcode, the new project wizard asks if you’d like to Include Unit Tests, and whether you’d like to Include UI Tests.

Xcode Test Target Selection

One might wonder — is a UI Test not a Unit test? If not, then what is it?

Actually, these checkboxes and their outcomes are primarily there to inform Xcode which targets to create within your project. Each checkbox, when checked, generates a different type of test target in your project.

The fundamental differences between an Xcode Unit Test and an Xcode UI Test:

  • Unit Tests are used to test that source code generates expected results. For example: ensuring that a function, when passed a specific parameter, generates some expected result.
  • UI Tests test that a user interface behaves in an expected way. For example: a UI Test might programmatically tap on a button which should segue to a new screen, and then programmatically inspect whether the expected screen did load, and contains the expected content.

Both Unit Tests and UI Tests support full automation, and enable regression testing of applications over their lifecycle.

Generally speaking, an Xcode Unit Test exercises and evaluates your Swift code, but does not inspect the impact it has on the User Interface, while an Xcode UI Test evaluates the UI behavior in response to actions, but does not inspect your code.

As always, these are generalized statements that have exceptions. It is certainly possible to get some insight into the UI from a (code) Unit Test, and to get some insight into the state of your code objects from a UI Test. But the tools are designed to be used according to this generalized statement.
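To make this concrete, here is a minimal sketch of what a test class in a UI Test target looks like (the class name and the "MyTable" accessibility identifier are placeholders — substitute your own):

```swift
import XCTest

class MyAppUITests: XCTestCase {

    override func setUp() {
        super.setUp()
        // A UI test rarely makes sense to continue after a mid-flow failure
        continueAfterFailure = false
        // Launch a fresh instance of the app before each test
        XCUIApplication().launch()
    }

    func testMainScreenLoads() {
        let app = XCUIApplication()
        // Verify a known element is on screen
        // (assumes a table with accessibilityIdentifier "MyTable")
        XCTAssert(app.tables["MyTable"].exists)
    }
}
```

Note that the test runs in a separate process from your app — everything it knows about your UI comes through the accessibility layer, which we'll discuss next.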

Example of a UI Test

Before examining the architecture and implementation of UI Test, let’s take a look at a finished test in operation. The user story for this test is as follows:

On the first screen, the user can select a cell within a table view, which opens a second form showing the selected value in a label. The user can then key in a new value, in a text box beneath the original label. When the user subsequently returns to the first form, the new value will be shown in the Table View.

If a QA tester were to manually check this process they would do the following sequence:

  1. Launch the app
  2. Tap on a row
  3. Observe the table view row text is on the second form when it loads
  4. Type in a new value in the text field
  5. Press the back button
  6. Observe the value they typed has replaced the original text in the table view

The manual testing process would look as follows (this is an animated .gif — if using the Medium app, you may need to open this page in a browser to view the animation).

UI Test Process

Wouldn’t it be nice if we could automate this process so our QA tester didn’t have to repeat this process manually before every release? That’s exactly what UI Testing is for — and we’ll walk through how to automate this test process!

UI Testing Architecture

Before digging into the code, it’s important to understand how the Xcode UI Test framework operates. By understanding how UI Tests access and manipulate your UI components, you’ll be able to make your UI easy to build tests for.

As with Unit Tests (the ones that exercise your source code), Xcode uses XCTest to run your UI Tests. But how does the XCTest code know how to inspect and manipulate the UI that you designed in Storyboards and/or Swift code?

To gain access to your UI at runtime, XCTest uses metadata exposed by iOS Accessibility. This is the same technology used to enable iOS to read your screen to blind and low vision users, for example. At runtime, XCTest iterates over your UI controls, looking for Accessibility properties such as accessibilityIdentifier and accessibilityLabel to find the UI components you’d like XCTest to tap on, change or inspect as part of your UI Test.

While it’s possible to design UI Tests without doing any preparation of Accessibility metadata in your app — and you’ll find many examples on the Internet that do this — you can maintain better control and predictability in UI Tests by planning for UI Tests in advance, and preparing Accessibility metadata in the UI. Similarly, if you’re retrofitting UI Tests to an existing application, you should consider retrofitting Accessibility metadata as part of the process.

UI Test Recording

Xcode’s UI Test suite provides an easy way to get started implementing a UI Test: the Record UI Test button.

To begin recording a UI Test:

  1. Create a new UI Test function in the UI Test target source .swift file (assuming you created a UI Test target when you created your project — or added it later)
  2. Place the editing cursor within the empty test function
  3. Press the Record UI Test button below the source code editing pane

Xcode will compile and run the application using the debug device (i.e. simulator). Then, just walk through the test sequence on the simulator (or other debug device). When you’re finished, stop the debug session. Xcode will have created a set of commands to re-create the UI experience during the recording. In the case of the test sequence outlined above, the following code would be generated:

func testChangeTableRowText() {
   let app = XCUIApplication()
   app.tables["MyTable"].staticTexts["Fourth Row"].tap()
   let newvalueTextField = app.textFields["newValue"]
   let app2 = app
   newvalueTextField.typeText("Some new value")
}

Great! Xcode has generated all the commands needed to re-run the same UI Test process we did by hand. This is a boon to our test design productivity, and gives us a great start. But it’s not perfect, and not a production-ready test yet. There are some deficiencies:

  1. There are some messy aspects, such as the line let app2 = app. We wouldn’t have written the code this way ourselves — the app object created on line 1 can obviously be used throughout the test function.
  2. The reference to staticTexts[“Fourth Row”] in line 2 of the function assumes that the contents of the UITableView cells will always be the same. What if they aren’t? This is a case where preparing the Accessibility metadata can help make a more robust test. I’ll cover this shortly.
  3. The auto-generated code causes the test to operate, but nothing here is evaluating whether the outcomes of the test were successful or not. Xcode can’t create this part of the test — we have to do this ourselves.

Preparing the Accessibility Metadata

In Line 2 of the auto-generated code, Xcode inserted this line:

app.tables["MyTable"].staticTexts["Fourth Row"].tap()

In English, this command means:

Within the array of UITableView objects within the current UIView, find a UITableView with the key MyTable. Then, search all the UILabel controls within that table and find a UILabel having the text value “Fourth Row”. Then tap on that UILabel.

There are two key references XCTest uses to find UI elements here:

  1. The “Fourth Row” UILabel — the UILabel text value displayed on the 4th UITableViewCell in the UITableView
  2. The UITableView with a key of “MyTable” — huh? Where did that key come from?

Let’s consider the second item. In this case, I had previously assigned the text “MyTable” as the accessibilityIdentifier for the UITableView on the first UIView. This was done in the viewDidLoad() function of that UIView’s UIViewController, like so:

override func viewDidLoad() {
   super.viewDidLoad()
   tableView.accessibilityIdentifier = "MyTable"
}

Every UIView can have an accessibilityIdentifier, as well as other Accessibility properties. For the purposes of UI Testing, you’ll be most interested in accessibilityIdentifier and accessibilityLabel.

Example of Accessibility Properties

When a UIView has either an accessibilityIdentifier or an accessibilityLabel, it can be queried within a UI Test by using that string as a key. For example, this table could be accessed within a UI Test in either of these ways:

let tableView = app.tables.containing(.table, identifier: "MyTable")
let tableView = app.tables["MyTable"]

By using Accessibility metadata in this way, you can create a more robust UI Test — one not dependent on the content of the text in controls. Instead, the controls can be accessed by dictionary key values you define and control. But you do need to make the effort to assign keys in order to use them!

Note: while UIView objects can be queried using either accessibilityIdentifier or accessibilityLabel, it’s usually better to use accessibilityIdentifier. accessibilityLabel is the property iOS Accessibility uses to access the text to be read to a blind or low vision user, and could change at runtime for controls that have updatable text properties.

How to Set accessibilityIdentifiers

Setting the accessibilityIdentifier for a UIView-based object can be done in several ways. The most common are as follows:

Using the Interface Builder Identity Inspector

Some UI elements support setting of Accessibility properties within IB Identity Inspector. For example, the UILabel on the first form of our test solution has its accessibilityIdentifier set to “labelIdentifier” directly within the predefined IB field.

Setting the accessibilityIdentifier for a UILabel

Using a User-Defined Runtime Attribute

For UI elements that wouldn’t normally be read to a blind or low vision end-user, Interface Builder won’t have predefined Accessibility property fields. But you can still add them at Interface Builder design time using the User Defined Runtime Attributes dictionary editor on the Identity Inspector.

In this case, I’ve moved the UITableView’s accessibilityIdentifier from the UIViewController’s viewDidLoad() method into the Interface Builder storyboard editor. The resulting UI Test works exactly the same way — but with less code to maintain.

Setting the accessibilityIdentifier using Runtime Attributes

Using Code

As mentioned earlier, every UIView-based class has accessibility properties, and those properties can be set at runtime.

override func viewDidLoad() {
   super.viewDidLoad()
   tableView.accessibilityIdentifier = "MyTable"
}

All three of these methods have the same effect. Which is best depends on the practices within your team. Some prefer to reduce code by configuring UI in Interface Builder, others prefer to do all UI design in code. UI Testing supports both scenarios equally well.

Inspecting UI Elements During the Test

Recall earlier that I recorded the steps for the test — but I didn’t actually test for anything! Let’s wrap this job up by adding the actual tests, and use accessibilityIdentifier properties where possible.

Searching for UIView elements

Recall that Xcode wrote the following statement to find the UITableView using its accessibilityIdentifier:

let tableView = app.tables["MyTable"]

This is the most concise shorthand method, but I want to point out there’s more than one right answer to finding the tableView in the view hierarchy.

Another method is to explicitly search for the accessibilityIdentifier:

let tableView = app.tables.containing(.table, identifier: "MyTable")

If we hadn’t assigned an accessibilityIdentifier, we could use this code to get the first UITableView within the top-level UIView:

let tableView = app.tables.element(boundBy: 0)

This isn’t as good, because if we should ever add a second UITableView to the screen, the UI Test may break if a new UITableView happens to be retrieved as the first UITableView! This is the reason I suggest using accessibilityIdentifiers when designing your UI Tests.

If we knew there were one and only one UITableView on the screen, we could shorten the previous technique even more:

let tableView = app.tables

Again, this has the risk of breaking the UI Test if a second UITableView is added. This would be a more serious break, since the tables property would return a collection rather than a single table as it does when only one UITableView is in the view hierarchy.
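A related robustness technique, not shown in the recorded code, is to wait for an element to appear before interacting with it, using XCUIElement’s waitForExistence(timeout:) (available since Xcode 9). This helps when a screen loads asynchronously — a sketch, again assuming the "MyTable" identifier:

```swift
let app = XCUIApplication()
let tableView = app.tables["MyTable"]

// Wait up to 5 seconds for the table to appear before asserting on it.
// waitForExistence returns false if the element never shows up in time.
XCTAssert(tableView.waitForExistence(timeout: 5))
```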

Final Test Script

We’ve covered the fundamentals of creating tests, accessing elements, and manipulating values (which Xcode showed us during the test recording), so we’re ready to wrap this up.

I’ve pasted the final test function below, with annotations following.

01: func testChangeTableRowText() {
02:     let app = XCUIApplication()
03:     let tableView = app.tables["MyTable"]
04:     XCTAssert(tableView.cells.count == 5)
06:     let cell = tableView.cells.containing(.cell, identifier: "3")
07:     let cellLabelText = cell.staticTexts.element(boundBy: 0).label
08:     XCTAssertEqual(cellLabelText, "Fourth Row")
10:     cell.staticTexts.element(boundBy: 0).tap()
12:     // The detail form is now visible
14:     XCTAssertEqual(app.staticTexts["labelIdentifier"].label, cellLabelText)
16:     let textField = app.otherElements.textFields["newValue"]
17:     textField.tap()
18:     textField.typeText("Some new value")
20:     XCTAssertEqual(textField.value as? String ?? "", "Some new value")
22:     app.navigationBars["UITestingDemo.DetailView"].buttons["Back"].tap()
24:     // The detail form is now visible
26:     let tableView2 = app.tables.containing(.table, identifier: "MyTable")
27:     let cell2 = tableView2.cells.containing(.cell, identifier: "3")
28:     let updatedText = cell2.staticTexts.element(boundBy: 0).label
30:     XCTAssertEqual(updatedText, "Some new value")
31: }
  • In lines 2–4, we find the UITableView with the accessibilityIdentifier “MyTable”, and then check that the number of rows is five (5). Remember that whenever an XCTAssert fails, the entire test fails.
  • On line 6, we search the UITableView for a UITableViewCell with an accessibilityIdentifier equal to “3”. This value was set in the cellForRowAt method in the UITableView DataSource delegate (review the code from GitHub for details)
  • On line 7, we get the first UILabel within the cell (this cell has only one label).
  • On line 8, the UILabel text property is checked against an expected value (this is not really a requirement for this test, but I added it as a further example).
  • Line 10 sends a tap event to the UILabel within the cell. The effect of this is to generate a tap event on the cell, which then triggers a segue to the detail form (see source on GitHub for details)
  • Line 14 finds the UILabel with accessibilityIdentifier “labelIdentifier” (we set this in Interface Builder earlier). When the form loaded, it should have set the UILabel text to the value tapped in the UITableView. This XCTAssertEqual checks that this was done.
  • Lines 16–20 tap on the UITextField, and type in new text.
  • Line 22 taps the Back button at the top-left of the detail form, which pops the view controller off the stack, returning to the first form.
  • Lines 26–30 again retrieve the value in the 4th cell, and compare the new value to the value that was typed on the detail form.

Note: when creating tests that type into fields using an iOS simulator, be sure to disconnect the hardware keyboard in the simulator. The typeText method will fail when a hardware keyboard is attached to the simulator.

Where to go Next

With this, we’ve created a complete, robust UI Test for this part of the application!

Since we used accessibilityIdentifier properties wherever possible, we’ve created a test that won’t easily break when the UI is enhanced with new controls, and the test is repeatable, automate-able, and easy to use for regression testing.

But this test can be improved even more:

  • We still have a few static data values in the test, e.g. “Fourth Row”. By refactoring all static value assumptions out of this test, we could set it up to work with dynamic data (for example, against a web service call)
  • This test is still bound to a developer or QA Engineer using Xcode at their desk. But with some additional work, we could incorporate this type of test into a fully automated test suite run by a daemon instead. Look for that in a future blog post!

Using Face ID to secure iOS Applications


Biometric security like Face ID and Touch ID help make iOS mobile devices more secure and convenient for users. These technologies can also be used by 3rd-party applications.

Touch ID Roots

In 2013 Apple introduced a new, biometric means to unlock its mobile devices using a fingerprint sensor incorporated in the home button — Touch ID. Prior to Touch ID, users who wanted to secure their iOS devices from unauthorized access could do so by entering a 4-digit PIN code (later extended to longer, 6-digit codes). While the data on iOS devices continued to be secured by encryption using underlying PIN code, Touch ID provided a convenient way for users to unlock devices and confirm their identity with only a touch of a finger.

Enter Face ID

With the launch of the iPhone X, Apple introduced a new biometric security mechanism — Face ID. The trademark name Face ID describes itself. Instead of using a scanned fingerprint to identify the user, Face ID uses a scan of the user’s face to match a stored profile on the device.

I’ll talk in terms of Face ID, but it’s worth noting that both Face ID and Touch ID are just different variants of biometric security. From an architecture and development perspective, both operate in the same way, and provide equivalent benefits to application architecture.

Touch ID and Face ID operate in the same way, and provide equivalent benefits to application architecture

Where Touch ID uses a map of fingerprint ridges as a means to recognize its user, Face ID uses a 3-dimensional map of contours and facial features. Face ID uses its signature TrueDepth infrared camera to project 30,000 dots onto the user’s face, then reads the pattern to create its facial contour map.

The Touch ID fingerprint signature and the Face ID facial contour map are stored in the Secure Enclave within the iOS device. This data is accessible only by the end user (encrypted with the PIN that only the user knows), and never leaves the device itself.

Phil Schiller introducing Face ID at the iPhone X launch presentation (2017)

Leveraging Face ID in 3rd-Party Applications

While most users think of Face ID only in terms of unlocking the iOS device at the home screen, we can also use Face ID to create more secure and convenient experiences for our 3rd-party applications.

While users are accustomed to being prompted to authenticate with Face ID (or Touch ID) when unlocking the device, we can ask iOS to prompt them to re-authenticate with biometric security at any time. Typically we would prompt a biometric authentication when our own iOS application is launched, just before reading security tokens or user credentials from the iOS Keychain.

As with other hardware features accessible to 3rd-party applications, users must authorize a custom application to use Face ID. We must design our application assuming that a user may not authorize us to use Face ID (or is using a device that doesn’t support biometric authentication). Apps must fail gracefully, and provide some other means to identify/authenticate the user when biometric security is not available or fails to recognize the user.
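The biometric prompt itself comes from the LocalAuthentication framework. Here’s a minimal sketch (the localized reason string is a placeholder — iOS displays it in the authentication dialog):

```swift
import LocalAuthentication

func authenticateWithBiometrics(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Check that biometrics are available and the user has authorized our app
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        // No Face ID/Touch ID (or not authorized) — caller should fall back
        completion(false)
        return
    }

    // Ask iOS to present the Face ID / Touch ID prompt
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm your identity to unlock your saved credentials") { success, _ in
        // The reply closure is not called on the main queue
        DispatchQueue.main.async {
            completion(success)
        }
    }
}
```

A `false` result covers all the failure modes mentioned above — unsupported hardware, declined permission, or an unrecognized user — so the fallback path can be handled in one place.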

Face ID Benefits

Incorporating Face ID in our iOS security architecture has some key benefits:

  • We can be certain the user who unlocked the device is the same person now accessing our application.
  • We can provide an extra layer of security, being sure of the user’s identity before reading highly sensitive data from the user’s Keychain (for example a JWT token or a password)
  • When users are prompted for Face ID (or Touch ID), they are reassured that we're taking the security of their sensitive information seriously.

Security Architecture

An application that would benefit from using Face ID/Touch ID on launch would typically have one or more of the following security design elements:

  • A password stored in the user’s Keychain
  • A web service token used for accessing remote APIs stored in the user’s Keychain
  • Certificates or other sensitive data stored in the user’s Keychain

While an application could prompt for biometric authentication even when it’s not to authorize access to sensitive information, this isn’t a typical approach. For the most part, application-level biometric authentication is employed as a secure substitute for a traditional username/password authentication.

Example App Launch Flow with Face ID

The following example illustrates how Face ID (or Touch ID) biometric authentication would be used to provide a confirmation of user identity prior to accessing security credentials.

Typically, applications that access secure information (for example, by making authenticated calls to web service APIs) will require either a username/password to begin a session, or an expiring token — for example, a JSON Web Token (JWT). While prompting users to re-enter username/password combinations on every application launch is secure, it’s also frustrating for users. Most mobile applications therefore store authentication tokens or passwords in the Keychain, and keeping this information secure is of utmost importance.

In the following flow:

  • A username/password combination (previously entered by the user), or a security token (previously obtained from a web service) are stored in the iOS keychain
  • The Keychain is the correct location for this sensitive data, since it is then encrypted and not accessible without a device PIN/Biometric unlock.
  • If sensitive authentication information has been stored in the keychain (user logged in, but didn’t log out), the user’s ID (but not password or token!) is stored in user preferences. The presence of User Id in preferences is the signal that the application should attempt biometric authentication — rather than proceeding directly to the username/password prompt.
  • If the device doesn’t support biometric authentication, or the user has declined to allow the application to use that feature, or the sensor simply doesn’t recognize the user, Face ID fails, and the application falls back to conventional username/password authentication.
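The launch decision described by these bullets can be sketched as follows. All of the helper functions here are hypothetical stand-ins for your own login UI, Keychain access, and session code — only the UserDefaults check is concrete:

```swift
import Foundation

// Hypothetical helpers — replace with your own implementations.
func showLoginForm() { /* present the username/password UI */ }
func startSession(withToken token: String) { /* begin an authenticated session */ }
func readTokenFromKeychain() -> String? { /* SecItemCopyMatching query */ return nil }
func promptBiometrics(completion: @escaping (Bool) -> Void) {
    /* LAContext.evaluatePolicy, as shown earlier */ completion(false)
}

func resumeSession() {
    // The stored user ID signals that credentials are waiting in the Keychain
    guard UserDefaults.standard.string(forKey: "lastUserId") != nil else {
        showLoginForm()   // no saved session — conventional login
        return
    }
    promptBiometrics { success in
        if success, let token = readTokenFromKeychain() {
            startSession(withToken: token)
        } else {
            // Biometrics unavailable, declined, or failed — fall back to password
            showLoginForm()
        }
    }
}
```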
iOS App launch flow, enhanced with Face ID

In the above login flow, Face ID (or Touch ID) are used to provide a way for the user to grant permission for the application to read from the Keychain.

Could the app read from the Keychain without prompting for biometric verification? The answer is “Yes”. Face ID isn’t required to access Keychain data — when the user unlocked the device with PIN (or Face ID/Touch ID), the Keychain was implicitly unlocked for the application.

But by using Face ID/Touch ID, we’re providing an extra layer of identity verification, and raising the level of security of our application to one that prompts for a password every time it’s launched — but without the user frustration associated with repeated password prompts.
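For reference, reading a stored token back out of the Keychain uses the Security framework. A minimal sketch, assuming a generic-password item stored under a hypothetical service name:

```swift
import Foundation
import Security

// Service and account names here are placeholders for illustration.
func readToken(account: String) -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp.token",
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    // errSecSuccess means the item was found and returned as Data
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}
```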

Configuring a UIScrollView in a Storyboard — with no code!


A delicious developer recipe for setting constraints on Storyboard views to serve up a proper scrolling view for iOS applications.


Configuring a UIViewController with a scrolling content view can be confusing, and — frustratingly — scrolling views usually don’t really work at all until all their constraints are perfect!

To understand how to setup a scrolling UIView within a UIScrollView, first we need to understand the views that will be involved, and how they relate to each other. Once we understand this, the concept of scrolling views becomes much simpler to understand and configure.

The Lay of the Land

First we need to understand the relationship between the views involved in the scrolling. Review the following diagram, which I’ll discuss just below.

Starting from the top of the view stack, these are the views we’ll be configuring:


At the top of this diagram — represented by the white boxes — are a set of controls that the user will see and interact with. There can be as many as you like, and they should be arranged as they make sense for the UI.

Presumably, the height will exceed the vertical size of the enclosing UIView — if it didn’t, we wouldn’t need to scroll the content, right?

Content View (UIView)

The Content we want to present to the user is then arranged within a Content View. This doesn’t have to be a UIView specifically, but should be something based on UIView. In the implementation walk-through below, I’ll actually use a UIStackView as the Content View, and just stack UIViews within it until the vertical space exceeds the size of my iPhone’s screen.

This technique isn’t limited to view hierarchies. Your content view could be a UIView that you draw content in yourself. Anything goes, really.

Scroll View (UIScrollView)

The Scroll View is the container that knows how to pan the Content View around so the user sees as much of the Content View as will fit in the Scroll View’s visible frame. The rest of the Content View is clipped off the top or bottom (or right/left, if the width exceeds the frame.width). Yes, I just defined scrolling. And managing the visible and clipped regions is all the UIScrollView does in this solution.

Users don’t typically see content placed on the Scroll View — although if it has a background color, they may see that color when the Content View is smaller than the Scroll View frame.

Top Level View (UIView)

The Scroll View has to live somewhere, and that somewhere is usually pinned to the edges of some containing view. When adding a Scroll View to an iPhone app, this will be the top-level UIView of a UIViewController. However, it could be any subview instead — for example in an iPad app, the Scroll View could be pinned to the edges of a Split View’s content view.

In this example, I’ll stick to the simple case of a scrolling view placed in an iPhone application’s main view frame.


Making the Scroll View work correctly is almost entirely dependent on getting the constraints created correctly in Xcode’s Interface Builder. By correctly, I mean connecting the edges of the controls appropriately and getting the size of the content view set correctly.

For the majority of iPhone applications, where a content view is scrolled vertically, the following simple checklist of constraints will work 90% of the time. This recipe requires creating one set of constraints on the Scroll View, and a second set of constraints on the Content View.

For the following constraints, I’m using the view names [Scroll View], [Content View] and [Top Level View]. Refer to the above diagram to recall the arrangement of these views.

Scroll View Constraints

The following constraints go on the Scroll View. Keep in mind that the [Safe Area] in the following constraints refer to the Superview of Scroll View, which is Top Level View.

  1. [Scroll View].[Trailing Space] = [Safe Area]
  2. [Scroll View].[Leading Space] = [Safe Area]
  3. [Scroll View].[Bottom Space] = [Safe Area]
  4. [Scroll View].[Top Space] = [Safe Area]

OK, these constraints are super simple! Basically, just pin the UIScrollView to the containing view. These don’t have to be precisely what I’ve listed.

I pinned to the Safe Area on an iPhone X here. If you have some other views on the same form (e.g. some navigation buttons), you might pin a Scroll View edge to those sibling views. Or, if you don’t want to stay within the Safe Area, you can pin to the edge of the containing view instead.

Position the Scroll View where your design suggests it should be — the point is that you want the scroll view size to be fixed in place relative to other elements on the screen.

Content View Constraints

Now that the Scroll View is set to a fixed position, we’ll setup the constraints for the content view.

Since the Content View is contained within the Scroll View, the Superview in the constraints below refers to the Scroll View (i.e. not the Top View).

  1. [Content View].[Trailing Space] = [Superview]
  2. [Content View].[Leading Space] = [Superview]
  3. [Content View].[Top Space] = [Superview]
  4. [Content View].[Bottom Space] = [Superview]
  5. Equal Width: [Content View] & [Top Level View]
  6. (maybe) Height: 1500

The first four of these constraints are super-simple: the Content View edges are pinned to the Scroll View edges. This is exactly what we did with the Scroll View — we pinned it to the Top Level View.

What’s less intuitive is that — at the same time — we have a width and height constraint applied to the Content View. Huh?

These last two constraints allow our content view to have a virtual size that exceeds the visible frame of the Scroll View.

In this case, I’m designing an iPhone app, and I want vertical scrolling, but not horizontal scrolling. To achieve this, I’ve set the Content View horizontal size equal to the Top View size. Because of this, the Scroll View will never perceive a need to scroll the content horizontally. The user won’t be able to scroll horizontally, and the horizontal scroll bar won’t be presented. This is a common technique for iPhone apps, which rarely scroll horizontally when in portrait orientation.

I do want vertical scrolling. There are essentially two ways to ensure the vertical height is known when Scroll View decides whether to present a scroll bar to the user:

  1. By setting a constraint such as #6 to set that height specifically. You may need to do this when, for example, you’ll be drawing content at runtime in Swift code. You can set the height of this view at runtime by creating an outlet to this constraint, or changing the Content View frame size in code at runtime.
  2. By using a Content View that has some intrinsic size. For example, if you use a UIStackView for the Content View, and the UIStackView has a vertical size known at design time, then there’s no need to worry about setting a Content View height constraint at all — it will be inferred by the content of the Stack View.
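If you go the route of option 1, the runtime adjustment is a one-liner against an outlet connected to the height constraint. A sketch — the class, outlet name, and the 1500-point height are placeholders:

```swift
import UIKit

class ScrollingViewController: UIViewController {
    // Outlet connected in Interface Builder to the Content View
    // height constraint (constraint #6 in the list above)
    @IBOutlet weak var contentHeightConstraint: NSLayoutConstraint!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the scrollable height once the real content height is known
        contentHeightConstraint.constant = 1500
    }
}
```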

A common mistake is to pin the Content View to the vertical size of the UIScrollView. If you do this, the content won’t scroll — even when there is actual content “off screen”.

Using The Recipe

Now that we have a recipe — in terms of a view hierarchy and a checklist of constraints, let’s put it into practice in a demo application (which you can download from GitHub using the link at the end of this article).

To keep this article shorter, I’ve summarized the steps, since I assume you know how to use Xcode to create apps and views already:

  1. Create a new Single View App
  2. In the default Main.Storyboard, select the View Controller Scene, then select the Size Inspector, then change the View Controller Scene’s Simulated size to Freeform, and the Height property to 1500
  3. Create seven views, one over the other, assigning the following colors of the rainbow: #9400D3, #4B0082, #0000FF, #00FF00, #FFFF00, #FF7F00, #FF0000
  4. Add a height constraint to each view, fixing each to 200 points
  5. Highlight all seven views in the Document Outline, and select from the Xcode menu: Editor / Embed In / Stack View.
  6. Highlight the new UIStackView in the Document Outline, and in the Attributes Inspector, set the following properties on the UIStackView:
    a. Axis = Vertical
    b. Distribution = Equal Spacing
    c. Spacing = 10
  7. Highlight the new UIStackView in the Document Outline, and select from the Xcode menu: Editor / Embed In / Scroll View
  8. Highlight the new Scroll View in the Document Outline, and create constraints 1–4 from the above Scroll View Constraints section list.
  9. Now in the Document Outline, click on the UIStackView. Now hold down the ⌘ key and click on the Top Level View. With both these views highlighted, click on the Add New Constraints button, select the Equal Widths checkbox, and press the Add Constraints button to save this constraint.

You don’t need constraint #6 because the Content View in this layout is a UIStackView that has an intrinsic height, since we fixed the height of all the rainbow UIView controls and set a spacing of 10 points. This gives the UIStackView a fixed height of (7 * 200) + (6 * 10) = 1460, which the UIScrollView will read at runtime to use to position and scroll the view.

Guess what? You’re done!

Your View Controller in the Storyboard should look similar to the following. Note that in the Document Outline I set the Xcode-specific Label property for each view to help you read through my outline more easily. Your version may not have labels such as “Violet, Indigo”.

View Layout for the Scrollable Rainbow UIStackView

Now add the following Swift code. No — just kidding! No code. This scrolling solution is complete with no code at all. Yay!

Run the application, and scroll the rainbow views in the Scroll View. Your application should look like the following. Note: if you’re using the Medium app, this image may be blank. If so, open this page in a web browser to see this animated GIF.

Download the Code

You can download the code for this tutorial here:

Creating an iOS Chat Bubble with Tails in Swift — the easy way


Virtually everyone who’s used an iOS device has used the iMessage application to send and receive text messages to other iOS users or non-iOS users via SMS. This tutorial will teach you how to create the familiar chat bubble with tail UI element used in the built-in Apple Message application.

This tutorial was created using Swift 4 and Xcode 9.1. Most of the concepts covered apply to previous versions of Swift and Xcode also.


Chat bubbles in action — in Apple Messages

The objective of this tutorial article isn’t to show how to build a fully-functional chat application. I’m going to focus specifically on how to easily create the dynamically sizing bubble.

Design Requirements

While the chat bubble with tail is familiar, it presents some challenges for development:

  1. The horizontal and vertical size of the bubbles must be expandable. The size of a bubble containing a single word will be much smaller than one containing an entire paragraph (or, for example, an image).
  2. The middle of the chat bubble should stretch to fit its content — but the four corners of the chat bubble should not be stretched, and must remain exactly as designed.
  3. The tail should point to opposite sides of the chat window to indicate whether the message has been sent or received.
  4. The color of the bubble should match app branding, and may provide a visual cue — for example, in Apple Messages, blue indicates messages sent to other iOS/macOS users, green indicates messages sent to conventional SMS users (e.g. Android users), and gray indicates received messages.

Implementation Approach

As in most development topics, there’s more than one way to implement chat bubbles. Some implementers draw bubbles manually using bezier curves, but using stretchable images is much simpler, and more common.

In this tutorial I’ll cover what I think is the simplest approach, and the basis for how bubbles with tails should probably be implemented in most applications:

  1. The bubble itself is based on a simple bitmap (I created mine as a vector using Sketch, and then exported them to scaled png files)
  2. The dynamic bubble size is accomplished using the standard UIImage resizableImage:withCapInsets:resizingMode method provided by Cocoa Touch.
  3. The chat bubble can be set to any color in code using the UIImageView tintColor property — so your app can use any color it needs, and indicate different types of messages with color just as Apple Messsages does.

OK, let’s walk through the implementation. A link to the source code can be found at the end of this article.

Creating the Resizable Images

First create two images: one with the tail on the left, the other with the tail on the right. The color you use doesn’t really matter, since you’ll use the UIImageView.tintColor to color the bubble at runtime later.

You can create images for this technique using whichever tools you’re most proficient with. I used Sketch, but you can use Photoshop or any other application that saves raster bitmaps. When finished, save the final images as png files.

The first key to this technique is taking note of how many points (pixels in the 1x image) should remain fixed at each of the four corners of the bubble image(s). In my image, I’ve highlighted the four corners: each is 21 pixels wide and 17 pixels high.

Chat bubble image with right tail

These fixed corners will have the following effect in the final application:

  1. The minimum bubble size is 42 x 34 points — so that these perfect corners are never distorted by stretching or compressing. This is fine for most applications, but if you need larger or smaller corners, just design the image at whatever size meets your needs.
  2. When the bubble needs to grow, the empty middle space (between the corners) will be stretched to match the necessary size.
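The relationship between the corner sizes and the minimum bubble size is just the sum of opposing cap insets; here is a quick sketch using the measurements noted above:

```swift
// Cap insets taken from the bubble image: corners are 21 points wide
// and 17 points high, so the insets are (top: 17, left: 21, bottom: 17, right: 21).
let top = 17.0, left = 21.0, bottom = 17.0, right = 21.0

// The smallest bubble that never distorts a corner:
let minBubbleWidth = left + right    // 42 points
let minBubbleHeight = top + bottom   // 34 points
print(minBubbleWidth, minBubbleHeight)  // 42.0 34.0
```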

The blue bubble I designed is the one used when the user sends a message. I could have made it any color, but for design purposes it’s blue.

I also designed a gray bubble with the tail on the left for received messages (actually, in Sketch, I just copied the first bubble and used the mirror button). Since the “received message” bubble is a mirror of the “sent” bubble, the corners will be identical. Again, the color doesn’t matter so long as it’s not white.

Chat bubble image with left tail

Add Images to the Xcode Project

Next, add the bubble images to your Xcode project. I exported 1x, 2x and 3x images from Sketch to png files, and created an Image asset for each type of bubble in Xcode. My sent bubble is named chat_bubble_sent, while the received bubble is called chat_bubble_received.

Bubble Image Assets in Xcode

Create a User Interface

Most applications that use chat bubbles are messaging applications. Chat apps can become quite complex to implement and understand, and would make this tutorial a much longer read! To keep things simple, I’m just going to focus on the bubble itself. The following user interface is designed to demonstrate the technique of creating, resizing and coloring the chat bubble.

Chat Bubble Tutorial User Interface
  1. First is a slider control that allows the bubble height to be changed, so we can move the slider and observe the bubble at any height from 34 to 400.
  2. Next are two buttons, which switch the display between the sent (right-tailed) bubble and the received (left-tailed) bubble.
  3. Several color options to choose from to change the color of the bubble dynamically.
  4. A UIImageView where the bubble will be displayed. In a chat or message application, this UIImageView will typically be part of the content of a UITableViewCell or UICollectionViewCell.

Setting the Bubble Image

In this tutorial application, the bubble image itself is assigned to the UIImageView’s .image property when either the Sent or Received (2 in the diagram above) buttons are tapped.

Each button calls the following helper function changeImage, passing the name of the image to use.

1: func changeImage(_ name: String) {
2:    guard let image = UIImage(named: name) else { return }
3:    bubbleImageView.image = image
4:       .resizableImage(withCapInsets:
5:                          UIEdgeInsetsMake(17, 21, 17, 21),
6:                          resizingMode: .stretch)
7:       .withRenderingMode(.alwaysTemplate)
8: }

changeImage does the following:

  • Line #2 loads the named asset from Assets.xcassets. When the Sent button is tapped, the asset name chat_bubble_sent is passed; when the Received button is tapped, the chat_bubble_received asset name is passed.
  • Line #4 calls the UIImage method resizableImage:withCapInsets:resizingMode to create a version of the image that can be stretched — except for the four corner regions (each 21 x 17) that we noted when we designed them.
  • Line #7 Calls the UIImage method .withRenderingMode(.alwaysTemplate) to create a version of the resizable image that ignores color information. This modification to the image allows us to use the tintColor to make the bubble any color we need it to be at runtime.

Changing the Bubble Color

Changing the bubble color is quite simple. Since we used the .withRenderingMode(.alwaysTemplate) method when assigning the image to the UIImageView, we can use the .tintColor property to set the color the image renders in.

In this case, I just grabbed the .backgroundColor property of the button the user taps in line #2 to set the color.

1: @IBAction func colorButtonTapped(_ sender: UIButton) {
2:    bubbleImageView.tintColor = sender.backgroundColor
3: }

Changing the Bubble Size

For the tutorial application, the height of the bubble is changed by moving the UISlider control. The control is set so that the bubble must be at least 34 points tall (2 x 17). This ensures that the corners will never be compressed vertically beyond the minimum defined at design time.

1: @IBAction func sliderChanged(_ sender: UISlider) {
2:    bubbleHeight.text = "\(sender.value)"
3:    bubbleHeightConstraint.constant = CGFloat(sender.value)
4: }

Line 3 simply changes the constant value of the UIImageView height constraint.

Run the Tutorial App


I hope this helps you understand how to approach this type of requirement when using Swift for iOS applications. While this isn’t the only way to approach the implementation, it is a simple and effective method. You can see in the code below that we only used about 40 lines of code to implement the entire solution!

Take this technique further by incorporating bubbles in your own app. The basis for this approach is a UIImageView displaying a resizable UIImage with cap insets. Anywhere in your solution where you can add a UIImageView to a UIView container, you can leverage this technique.

Full Swift Source Code

Following is the full source code for the tutorial.

class ViewController: UIViewController {

    @IBOutlet weak var slider: UISlider!
    @IBOutlet weak var bubbleImageView: UIImageView!
    @IBOutlet weak var bubbleHeight: UILabel!
    @IBOutlet weak var bubbleHeightConstraint: NSLayoutConstraint!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func sliderChanged(_ sender: UISlider) {
        bubbleHeight.text = "\(sender.value)"
        bubbleHeightConstraint.constant = CGFloat(sender.value)
    }

    @IBAction func sentButtonTapped(_ sender: UIButton) {
        changeImage("chat_bubble_sent")
        // The color asset name was truncated in the original listing;
        // substitute the name of your own color asset here.
        bubbleImageView.tintColor = UIColor(named: "BubbleSentColor")
    }

    @IBAction func receivedButtonTapped(_ sender: UIButton) {
        changeImage("chat_bubble_received")
        // The color asset name was truncated in the original listing;
        // substitute the name of your own color asset here.
        bubbleImageView.tintColor = UIColor(named: "BubbleReceivedColor")
    }

    func changeImage(_ name: String) {
        guard let image = UIImage(named: name) else { return }
        bubbleImageView.image = image
            .resizableImage(withCapInsets:
                                UIEdgeInsetsMake(17, 21, 17, 21),
                            resizingMode: .stretch)
            .withRenderingMode(.alwaysTemplate)
    }

    @IBAction func colorButtonTapped(_ sender: UIButton) {
        bubbleImageView.tintColor = sender.backgroundColor
    }
}

Download the Code

You can download the code for this tutorial here:

How to create a static UICollectionView

UICollectionView doesn’t support static content layouts the way its sibling UITableView does. There is a way to simulate them though, and this article will walk through how to do just that.


Tutorial objectives

When using a UITableView, we have a choice of either creating a table with static cells or creating cell templates that are used to build cells dynamically. The former approach can be used to quickly implement simple use cases, like a screen that allows users to change settings. The latter, dynamic approach is used when the number of cells isn’t known until runtime — such as a list of products in a product catalog stored on a web service.

In this tutorial, we’ll create a UICollectionView that will serve as a “main menu” for an application. We’ll provide the following functionality to the application:

  1. The collection view will display a fixed number of cells designed within an Xcode storyboard.
  2. The cells will adapt to the size of the display — for example using a two-column layout on an iPhone X in portrait mode, and a single column on an iPhone SE in portrait mode. In landscape mode, the tappable cells will fill the horizontal space before starting a new row.
  3. When the user taps on a cell, the application will intercept that event and respond appropriately (in a production app, this might be to fire the segue associated with each cell).

To keep the tutorial simple, this application will just display four cells of identical size and layout. But a real application could be much more sophisticated. Each cell could be a different Collection View Cell design, and present entirely different content from the others. But the basic architecture for this approach would be the same.

Step 1: Creating a static layout in Interface Builder

As with creating static UITableView layouts, the first step is to create a layout in Interface Builder.

Complete the following steps first:

  1. Open Xcode and create a new single view application
  2. Add a UICollectionView to the View Controller scene in Main.storyboard
  3. Set ViewController as the UICollectionView delegate and datasource
  4. Using the size inspector, customize the UICollectionView cell size to w=170, h=80

Customize the default UICollectionViewCell with the following changes:

  1. Add a UIView, and use constraints to pin it 4 points from the top, bottom, leading and trailing edges (clear the constrain to margins checkbox).
  2. Add a UILabel to the UIView in step 1, and center it vertically & horizontally in the UIView container.
  3. Change the background color of the UIView to Purple, and the UILabel text to “Purple Cell”.
  4. Using the Identity Inspector, change the UICollectionViewCell Collection Reusable View Identifier to “Purple Cell”.

Now copy & paste the Purple cell three times. Change the UIView color, the UILabel text and the UICollectionViewCell Reuse Identity to differentiate each of the four cells from each other. When finished your storyboard design should look something like this:

Step 2: Implement the view controller delegates

If you ran the application now, you’d see a screen with an empty UICollectionView. Why is that? It’s because all we really did was to design some templates of what a set of dynamic cells can look like. Even though the layout looks similar to what can be done using a static UITableView (except for multiple columns), it’s not really a static design. But by adding two data source methods, you can provide the missing information to create the layout design using Interface Builder at runtime.

Change the UIViewController class definition as follows:

class ViewController: UIViewController {

    let cellIds = ["Purple Cell", "Green Cell", "Blue Cell", "Red Cell"]
    let cellSizes = Array(repeatElement(CGSize(width: 170, height: 80), count: 4))

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}

The cellIds property contains a list of the Identity properties we assigned to each UICollectionViewCell designed in Interface builder. These Ids must exactly match the values we assigned to each cell in Interface Builder.

The cellSizes property stores the size of each cell. In this simple tutorial, all cells will be the same size–but they don’t have to be; each cell could have different content and a different size. By defining the sizes here, we’re giving ourselves the ability to control cell sizes at runtime via the UICollectionViewDelegateFlowLayout (we’ll do this in a moment).

Add data source delegate methods

Now add the data source methods to the end of the ViewController.swift file via a swift extension.

extension ViewController: UICollectionViewDataSource {

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return cellIds.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(
            withReuseIdentifier: cellIds[indexPath.item],
            for: indexPath)
    }
}

Each of these delegate methods has only one line of code; together they accomplish the following:

  1. numberOfItemsInSection returns the number of cells in the UICollectionView, which is inferred by the number of cell Ids we added to the cellIds property in the last step.
  2. For each of the cells, we create a cell with the corresponding Id. This is the workaround that allows us to make a dynamic UICollectionView behave like a static UITableView.

Add the layout delegate method

To give us control over the size of each cell at runtime, we can adopt the UICollectionViewDelegateFlowLayout protocol and provide an implementation of the sizeForItemAt method. The implementation simply returns the cellSizes element corresponding to the indexPath being laid out.

extension ViewController: UICollectionViewDelegateFlowLayout {

    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return cellSizes[indexPath.item]
    }
}

Add the didSelectItemAt delegate method

Since the objective of this tutorial was to create a type of menu, we need to intercept when users tap on items in the menu. To do this, implement a single UICollectionViewDelegate method. Add the following extension to the bottom of ViewController.swift.

extension ViewController: UICollectionViewDelegate {

    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        print("User tapped on \(cellIds[indexPath.item])")
    }
}

This delegate method is called when the user taps on any of the (four) cells in the UICollectionView. In response, the sample code just prints the Cell Id of the cell the user tapped on. In a production application, we might instead cast the UICollectionViewCell to a custom class, and read metadata from it to decide how to branch the application flow.
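As one sketch of that branching idea, you could key the flow off the same reuse identifiers instead of casting cells; the segue names below are hypothetical:

```swift
// Hypothetical mapping from reuse identifier to a segue identifier.
let segueForCellId: [String: String] = [
    "Purple Cell": "showPurpleScreen",
    "Green Cell": "showGreenScreen",
    "Blue Cell": "showBlueScreen",
    "Red Cell": "showRedScreen"
]

let cellIds = ["Purple Cell", "Green Cell", "Blue Cell", "Red Cell"]

// Inside didSelectItemAt you might then write:
//   if let segueId = segueForCellId[cellIds[indexPath.item]] {
//       performSegue(withIdentifier: segueId, sender: self)
//   }
print(segueForCellId[cellIds[0]] ?? "none")  // showPurpleScreen
```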

Now if you run the application, and press on each button, you should see the following output in the Xcode debug console.

User tapped on Purple Cell
User tapped on Green Cell
User tapped on Red Cell
User tapped on Blue Cell

The completed, flexible layout

If you now run the application on different devices, you can see that we’ve created an almost static UICollectionView that represents a menu. The advantage of this approach over using a UITableView is that we have a much more flexible layout that can be presented on different devices.

iPhone X & iPhone SE — Portrait

The widths of the iPhone X and iPhone SE are quite different, so our layout automatically adapts between two columns (X) and one column (SE).

iPhone X — Landscape

The iPhone X in landscape has plenty of horizontal space, so all four of the menu items fit on one row.

iPhone SE — Landscape

The iPhone SE in landscape is more constrained horizontally, so the layout automatically flows onto two rows:

Changing up the cell sizes

One of the advantages of UICollectionView is how flexible it is when cell sizes differ. By changing the array of cell sizes at the top of ViewController.swift to the following, we can observe this flexibility in action.

Change the cellSizes property in ViewController.swift to the following:

let cellSizes = [
    CGSize(width: 210, height: 60),
    CGSize(width: 180, height: 100),
    CGSize(width: 170, height: 80),
    CGSize(width: 150, height: 150)
]

Full source code

I hope this tutorial was helpful to you and gave you some ideas for your own applications. You can download the full source code for this tutorial here on my github account. Feel free to contact me on twitter via @rekerrsive.

Benchmarking Xcode Builds

I recently started working with a fairly large iOS/Swift code base — one which takes several minutes to complete a full build from a clean folder. Since I have several macOS workstations on-hand, I naturally was curious how they compare in my most common use case — developing Swift iOS applications. Is there one that would give me less time to fetch a cup of coffee during a full build?

The code base

The code base I’m working with for these tests has the following profile:

Swift files: 223

Objective-C files: 6

Storyboards: 35

Xcode version: 8.3.2

Arrangement: One primary target, with two embedded projects.

Test Methodology

The methodology I used was quite simple, and designed around “real life use” of my computers for Xcode development:

  1. Clean the project, manually delete the entire DerivedData folder.
  2. Shut down the computer
  3. Turn the computer back on
  4. Open the project
  5. Select the “Generic iOS Device” build destination, and build the project (measurement #1)
  6. Clean the project, close Xcode, manually delete DerivedData (again)
  7. Launch Xcode, build the project again (measurement #2)

I planned the two trials to record the experience of building a project that isn’t currently in a disk cache, then building it again after macOS had the opportunity to keep files in disk cache. As the numbers below bear out, this made virtually no difference for the two SSD-based computers, but made a huge difference for the iMac with the Fusion disk.

For all trials, I made sure the computers were on AC power and no other applications were running — in fact, I had done nothing since booting each machine except run Xcode to time the tests.
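The stopwatch portion of these trials could also be scripted. Here's a minimal Swift sketch of a timing harness; the xcodebuild arguments in the comment are illustrative, not the exact ones I used:

```swift
import Foundation

// Run a command and return its wall-clock duration in seconds.
// For a build measurement you might pass "/usr/bin/xcodebuild" with
// arguments like ["-scheme", "MyScheme", "build"]; adjust to your project.
func timeCommand(_ path: String, _ arguments: [String]) -> TimeInterval {
    let start = Date()
    let task = Process()
    task.launchPath = path
    task.arguments = arguments
    task.launch()
    task.waitUntilExit()
    return Date().timeIntervalSince(start)
}

let seconds = timeCommand("/bin/echo", ["warming up"])
print("took \(seconds) seconds")
```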

I ran the test with three different computers, described below.

2015 15-inch (Retina) MacBook Pro Core i7

Test subject #1 is a 2015 MacBook Pro with a Core i7 CPU, SSD and 16GB RAM. This is a pretty standard “developer workstation” that many serious developers would select. Indeed, my tests support that this is probably the best balance of power and portability for a developer (of the three test subjects).

2015 27-inch (5K) iMac Core i5

Test subject #2 is a 2015 iMac with a Core i5 CPU, Spinning Fusion drive and 32GB RAM. This is my “daily driver” development workstation. While a Core i5 machine with a spinning disk may not sound like a natural choice for a developer, the numbers below bear out that this is actually the most performant machine of the three by a decent margin — for this use case. Of course while it has the most RAM, largest screen and most storage, it has zero portability. Trade-offs…

2016 13-inch MacBook Pro (no touch bar) Core i7

I purchased this machine as a “travel laptop”. I love the portability of it, and it’s more than adequate for me as a mobile development workstation for short stints. It’s especially awesome for use on airplanes and giving presentations. However, I had wondered whether the 2-core i7 made any difference for Xcode builds vs. the 4-core i5 in my “daily driver” iMac, or the 4-core i7 in the 15″ MBP. Spoiler alert: it does. It really, really does.


Here’s a summary of the system configurations, configuration costs and build timings using each machine with an identical set of code and the same build process.

One note: the fourth row is a computer I didn’t test because I don’t have one. I included it because that’s the system I would need to buy if I wanted to trade in my current iMac+13″ MacBook pairing. I could make do with a base model upgraded to 1TB storage.

Note: the prices are what I paid for this equipment (or in the case of the 2015 15″ which I don’t own, its cost new from Apple as of this date).

My observations

  • Intuitively, I would have expected the 2015 MBP with the Core i7 to beat the 2015 iMac with the Core i5 — but the i5 iMac was the fastest. It’s only a difference of 12 seconds, but it’s still better performance for lower cost.
  • I didn’t necessarily expect the Fusion-drive iMac’s spinning disk to catch up with the SSD machines (after it had a chance to cache the project) — but it did. In my use case of repetitive compiles in Xcode, I perceive no benefit to an SSD-based iMac vs. the much less expensive Fusion drive.
  • It’s unfortunate that the 13″ MBP can only be ordered with 2 cores. Both the 4-Core iMac and 4-Core 15″ MBP blow right past the 2-core 13″ MacBook Pro.
  • The biggest determinant of performance seems to be how many CPU cores are available to the build process.
  • The i5/i7 architecture seems to make no difference — for this use case, at least.

What will I buy next time?

Since I primarily work in my office, and travel for business only occasionally, the iMac plus lower end MacBook for the road still works the best for me. A 5K iMac ($2,499) plus a basic 13″ MBP for travel ($1,699) costs less than a mid-range 2016 15″ MBP ($3,199) plus the 5K UltraFine display ($1,299). Your mileage may vary, but for my purposes the speed of the iMac and the lightest, most coach-class friendly travel machine is the best of both worlds. Plus my livelihood depends on a working computer every day, so having a backup to my primary machine gives me peace of mind.

Using the System Font Efficiently in iOS

Often an app designer specifies custom fonts out of a perceived need for uniqueness, but very often the built-in iOS system font is entirely appropriate. One advantage of the system-provided font is how simple it is to load from code. Here’s how to do it:

Call systemFontOfSize

When using the system font, call the static method systemFont(ofSize:weight:) to generate a font with a given size and weight.

UIFont.systemFont(ofSize: <fontSize>, weight: <predefined weight value>)

Specify a font size for the height of the font

The fontSize parameter works as usual — for example, 10.0 for a standard reading font.

Specify a font weight using predefined constants

For the predefined weight value, pass in one of the following constants, which lets iOS find the correct variant of the pre-installed System font on the device.

UIFontWeightUltraLight
UIFontWeightThin
UIFontWeightLight
UIFontWeightRegular
UIFontWeightMedium
UIFontWeightSemibold
UIFontWeightBold
UIFontWeightHeavy

For Example

Example valid calls:

let fontNormal = UIFont.systemFont(ofSize: 10.0, weight: UIFontWeightLight)
let fontHeading = UIFont.systemFont(ofSize: 12.0, weight: UIFontWeightBold)


There are a few caveats to be aware of:

  • The System font used on different versions of iOS is not necessarily the same. For example on iOS 9, the font is San Francisco. For iOS 8 the font is Helvetica Neue. This could be good or bad, depending on your point of view. It’s good that your app will automatically adopt whatever fresh font Apple introduces in the future (if it does). On the other hand, if you use System font, you don’t have control over the font rendered in your app.
  • The systemFont method (which was called systemFontOfSize before Swift 3.0) is available only on iOS 8.2 or later. This shortcut method won’t work if you’re supporting a codebase with a deployment target below that minimum. Be warned.

Originally published at on December 1, 2016.

Testing whether a view is currently visible

When manipulating iOS UI from background threads, or in response to NSNotification messages, you won’t always be sure that the view your controller is working with is on-screen. How can you check?

Relatively easy. Here’s a simple check from within the context of a View’s ViewController:

if self.isViewLoaded && self.view.window != nil {
    // do something
}

This is appropriate when the “something” shouldn’t be executed unless the view is currently visible to the user.

Facebook login using the iOS API with Swift

A common requirement for consumer mobile apps is to allow users to authenticate with their Facebook credentials. Let’s explore why, and then go ahead and build the integration with iOS and Swift.


Why use Facebook authentication?

Why would we want to use Facebook auth instead of providing our own authentication database? Really it boils down to two reasons:

  1. Easier user experience. If we let users use their Facebook accounts to authenticate to our application, they don’t need to remember a new user id and password, and are more likely to go ahead and use our app because it’s a frictionless experience.
  2. Easier and more secure for our app. Since we’re effectively “outsourcing” identity to Facebook, we don’t need to manage/secure user passwords, implement password reset processes and so on.

Overall Process

Here’s the overall process of integrating Facebook Identity:

  1. Register our app on Facebook. This lets Facebook know who we are when authentication requests come to their API. In the setup process we’ll also communicate which Facebook services we’re going to use. Some of the services we can integrate into our app require additional Facebook review, but the basic authentication in this post does not.
  2. Integrate the Facebook API into our Xcode project. Just like any integration, we’ll need to add some frameworks. In this project I’ll be using CocoaPods to download and install the Facebook frameworks.
  3. Add a button to the app UI to initiate the Facebook request. This can be done in code or in storyboard design. I’ll use the storyboard method.
  4. Capture the Facebook user ID, and insert it into our own web service database so we can associate the Facebook users with one of our users.

Register with Facebook

Facebook provides a Quick Start for iOS tool to create app IDs, which I’ll be using.

  1. On the Quick Start form, provide a display name for the new App, and then click on the Create New Facebook App ID button.
  2. The next step is to provide a contact e-mail and an application category. After providing this info, click the Create App ID button.
  3. From here, the Facebook Quick Start screens provide some great step-by-step instructions on what to do next. Capture the information about the info.plist contents, which we’ll add to our app later.
  4. Mid-way down this form is a switch to indicate whether the app we’re creating contains in-app purchases. The default is Yes, so if your app doesn’t include them (or if you don’t want to use FB to track them), slide this button to the No position.

Finally — and this may be the most important step! — provide the Bundle Identifier for the app to Facebook. When our app makes calls to the FB API, it will send the Bundle ID, which FB will use to match our app to the App ID record we just created within FB.

Adding the Facebook API to our Xcode Project

Next up, we need to add the Facebook API to our Xcode project. This can be done by downloading the Facebook frameworks and integrating them manually, but I always find CocoaPods more productive and a better way to keep 3rd-party modules up-to-date over time.

Install CocoaPods

If you’re not familiar with CocoaPods, it’s a Ruby-based package manager that modifies your Xcode project to include 3rd-party modules (like the Facebook API). CocoaPods will configure an Xcode workspace that includes your project, plus additional projects for the 3rd-party components.

If you don’t have CocoaPods installed on your development workstation, or don’t know what it is, browse over to the CocoaPods site to learn about it and install it before continuing.

Create the Podfile

After creating your base Xcode project, open a bash shell in the folder where your .xcodeproj is located, and then run the following command to create a new Podfile:

$ pod init

Next edit the Podfile (I’m using nano, but you can use any text editor). Update the new Podfile to look as follows:
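The Podfile contents appeared as an image in the original post. As a rough sketch, a Podfile for the Facebook SDK v4.x frameworks used in this article would look something like the following; the target name here is a placeholder:

```ruby
# Hypothetical Podfile sketch; replace 'MyFacebookApp' with your target name.
platform :ios, '9.0'

target 'MyFacebookApp' do
  use_frameworks!

  # Facebook SDK pods used for login in this tutorial
  pod 'FBSDKCoreKit'
  pod 'FBSDKLoginKit'
end
```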

At this point, your project folder should look something like this:

After updating the Podfile, make sure Xcode is closed, and then run the following command to download the Facebook frameworks and configure your Xcode workspace:

$ pod install

Once the frameworks are installed and the .xcworkspace is created, your folder should now look like this:

From this point, you should never have a reason to open your .xcodeproj project, and from now on always open the .xcworkspace file instead. The Facebook API frameworks will be new projects in the workspace, referenced by your original project while editing and building the project.

Back in step 1, Facebook gave us text to add to our .plist file, so open the .xcworkspace, and then open the .plist file as text (so we can paste in the entries FB gave us).

Add the entries from the Facebook Quick Start screen before the closing </dict> tag.
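The exact values come from your app’s Quick Start screen, but the entries generally take this shape — APP_ID and APP_NAME below are placeholders for the values Facebook generates for your app:

```xml
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>fbAPP_ID</string>
    </array>
  </dict>
</array>
<key>FacebookAppID</key>
<string>APP_ID</string>
<key>FacebookDisplayName</key>
<string>APP_NAME</string>
<key>LSApplicationQueriesSchemes</key>
<array>
  <string>fbapi</string>
  <string>fbauth2</string>
</array>
```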

Setup the AppDelegate

There are two methods to update in the AppDelegate class. The first is our old friend application(_:didFinishLaunchingWithOptions:). In this method, just add a call to pass on the launch parameters to the Facebook API:

FBSDKApplicationDelegate.sharedInstance().application(application, didFinishLaunchingWithOptions: launchOptions)
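For context, here is a sketch of the complete method, assuming the default app template (your method body may already contain other setup code):

```swift
func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // Let the Facebook SDK complete its initialization with the launch options
    FBSDKApplicationDelegate.sharedInstance().application(application,
                                                          didFinishLaunchingWithOptions: launchOptions)
    return true
}
```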

Next, add the delegate method application(_:open:options:), and fill it out with a similar call that passes the URL on to the Facebook API:

func application(_ app: UIApplication, open url: URL, options: [UIApplicationOpenURLOptionsKey : Any] = [:]) -> Bool {
    let sourceApplication: String? = options[UIApplicationOpenURLOptionsKey.sourceApplication] as? String
    return FBSDKApplicationDelegate.sharedInstance().application(app, open: url, sourceApplication: sourceApplication, annotation: nil)
}

With these changes, the AppDelegate will make calls to the Facebook API when initially launched (didFinishLaunchingWithOptions method), and when launched after the Facebook login is completed (open url method).

Configure the UI

Since this is a test application, I’ve created it as a simple single view app. In the default storyboard, I’ve added the following controls:

  • A UIView in the shape of a button, with the class name set to FBSDKLoginButton, connected to a class outlet named loginButton
  • A UIView in the shape of a square profile photo, with the class name set to FBSDKProfilePictureView. No class outlet is needed.
  • A UILabel, with a class outlet named userIdLabel
  • A UILabel, with a class outlet named userNameLabel

In the storyboard, the form looks like so:

Add code to the ViewController

There’s not much code involved in making this login button work. Its presence on the form is enough to enable it. The code we add really just sets a couple of properties and adds delegate methods that are called as the user interacts with the Facebook login process.

Add the Import

import FBSDKLoginKit

Declare our class with the delegate protocol

class ViewController: UIViewController, FBSDKLoginButtonDelegate {

Connect Outlets

@IBOutlet weak var loginButton: FBSDKLoginButton!
@IBOutlet weak var userIdLabel: UILabel!
@IBOutlet weak var userNameLabel: UILabel!

Configure the button

In the viewDidLoad method, add the following configuration lines:

loginButton.delegate = self
loginButton.readPermissions = ["public_profile", "email", "user_friends"]
FBSDKProfile.enableUpdates(onAccessTokenChange: true)

  • The first line sets this ViewController as the delegate for the Facebook login events
  • The second line lets the FB login process know what read permissions we’re asking the user to approve
  • The third line asks the SDK to send a notification if/when the user’s profile changes.

Create a profile change listener

Just below the button configuration lines, add the following code to receive a notification when the user’s profile is updated (asynchronously) after the login completes.

NotificationCenter.default.addObserver(
    forName: NSNotification.Name.FBSDKProfileDidChange,
    object: nil,
    queue: nil) { (notification) in
        if let profile = FBSDKProfile.current(),
           let firstName = profile.firstName,
           let lastName = profile.lastName {
            self.userNameLabel.text = "\(firstName) \(lastName)"
        } else {
            self.userNameLabel.text = "Unknown"
        }
}

The listener is needed because the profile won’t be fully populated at the moment the login completes. Effectively, this code waits for the API to finish populating the profile, and is then called.

After the user interacts with Facebook to log in, the delegate methods below are called: one when the login completes, and one if the user presses the button again to log out.

func loginButton(_ loginButton: FBSDKLoginButton!, didCompleteWith result: FBSDKLoginManagerLoginResult!, error: Error!) {
    if let result = result {
        self.userIdLabel.text = result.token.userID
        // Notify our web API that this user has logged in with Facebook
    }
}

func loginButtonDidLogOut(_ loginButton: FBSDKLoginButton!) {
    print("Logging out")
}

With these methods implemented, the application is complete.

If using Simulator, Turn Keychain Sharing on

If you don’t need to test using a simulator, and will use a physical device for testing, proceed to the next section — you’re good to go.

However, if you want to run this application on a simulator and connect to Facebook, there’s one more small thing to do — enable Keychain Sharing for your target.

Select your target, and in the Capabilities tab, slide the Keychain Sharing switch to the “On” position.

Running the application

When we run this application, the first thing we see is the main form, with the login button waiting to be pressed:

After pressing the login button, the application will be put into the background while Safari launches to ask the user to log in within the Facebook mobile experience. In this screen I’m informed that I’ve already authorized this app once, so I only have to confirm that I still want to log in.

After I finish with Facebook, I’m transferred back to the AppDelegate, via that open url method we implemented earlier. Afterward, the form is shown again, this time with my profile photo, name and Facebook ID.

And that’s it!

Swift 3.0 substrings made easy

Swift is a fantastic, modern language, and has fast become my favorite. So much of what’s built into it is intuitive, simple and makes coding much more expressive than older, more syntactically heavy programming languages.

But…sometimes its sophistication makes what was simple in older languages more complicated. Case in point is taking substrings. Substrings are not difficult to deal with in Swift, but personally I find the syntax confusing. Many others do as well, and it’s common to address the confusion with an extension pattern.

Let’s first look at some basics of how to take substrings “out of the box”, and then look at a pretty common extension approach to make the substring syntax more approachable and simple.

The Range Construct

The foundation of taking substrings in Swift is using Range objects. Range is just what it sounds like — an encapsulation of a beginning and ending index within a String.

Take a look at the following quick example. In the example, the range(of:) function is used to get the start and end index of the word “quick” within the larger string. Then, if the range is not nil, that word is extracted using the range bounds and printed to the console.

This is something we might do in real life, and the Swift syntax is expressive and simple to remember.

// find and return a substring using Swift 3.0
let sentence = "The quick brown fox jumped over the lazy dog."
if let quick = sentence.range(of: "quick") {
    let word = sentence[quick.lowerBound..<quick.upperBound]
    print(word) // prints "quick"
}

The Range syntax is used to extract strings by known index as well, but in this case the syntax really gets in the way and is not simple or easy to read.

Before getting into it, let’s review how this would be done in C++:

// find and return a substring at known location using C++
string sentence = "The quick brown fox jumped over the lazy dog.";
string word = sentence.substr(4, 5);
cout << word; // prints "quick"

The C++ version is syntactically simple and easy to understand: substr starts at index 4 (the fifth character) and extracts five characters. Simple.

Swift is conceptually similar to the C++ standard library here, but requires the use of the Range construct. This results in a significantly more verbose statement to accomplish what is simple to do in C++:

let sentence = "The quick brown fox jumped over the lazy dog."
let substringNoExtension = sentence[sentence.index(sentence.startIndex, offsetBy: 4)...sentence.index(sentence.startIndex, offsetBy: 8)]
print(substringNoExtension) // prints "quick"

The syntax is actually quite similar to the C++ version, but let’s face it — it’s long and tedious. The interpretation is “extract from the string from index 4 to index 8 inclusive”. OK, simple concept, but wow! Look at all that syntax. If you need to do this once, not a problem, but what if the code uses offsets frequently?

Swift using an Extension

Luckily, Swift has the concept of extensions which allow us to essentially append new methods and data to existing classes and structs — even ones where we don’t have the source code or aren’t allowed to inherit new objects from them.

So first, let’s add an extension that adds a subscript operator that accepts a closed range:

extension String {
    subscript(range: ClosedRange<Int>) -> String {
        let lowerIndex = index(startIndex, offsetBy: max(0, range.lowerBound), limitedBy: endIndex) ?? endIndex
        return substring(with: lowerIndex..<(index(lowerIndex, offsetBy: range.upperBound - range.lowerBound + 1, limitedBy: endIndex) ?? endIndex))
    }
}

OK, so yes, I know…this is a lot of code too. But remember, this is an extension, and you just need to add this extension method to global scope one time, then use it over and over wherever you need it.

Now with the extension added to global scope, the substring with known indexes becomes the following:

let sentence = "The quick brown fox jumped over the lazy dog."
let substringWithExtension = sentence[4...8]
print(substringWithExtension) // prints "quick"

This syntax is even simpler than the original C++, and it’s immediately intuitive that we’re taking the characters at indexes 4 through 8.


Full Extension Example

Below is a fuller version of the extension, building on the single ClosedRange example above with subscripts for a single character and a half-open Range as well.

extension String {

    // Single character by integer index; returns "" when out of range
    subscript(i: Int) -> String {
        guard i >= 0 && i < characters.count else { return "" }
        return String(self[index(startIndex, offsetBy: i)])
    }

    // Half-open range, e.g. sentence[4..<9]
    subscript(range: Range<Int>) -> String {
        let lowerIndex = index(startIndex, offsetBy: max(0, range.lowerBound), limitedBy: endIndex) ?? endIndex
        return substring(with: lowerIndex..<(index(lowerIndex, offsetBy: range.upperBound - range.lowerBound, limitedBy: endIndex) ?? endIndex))
    }

    // Closed range, e.g. sentence[4...8]
    subscript(range: ClosedRange<Int>) -> String {
        let lowerIndex = index(startIndex, offsetBy: max(0, range.lowerBound), limitedBy: endIndex) ?? endIndex
        return substring(with: lowerIndex..<(index(lowerIndex, offsetBy: range.upperBound - range.lowerBound + 1, limitedBy: endIndex) ?? endIndex))
    }
}
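With the full extension above in scope (for example, pasted into a Swift 3 playground), each subscript can be exercised as follows. Note the clamping behavior: out-of-range indexes are limited to the bounds of the string rather than crashing.

```swift
let sentence = "The quick brown fox jumped over the lazy dog."

// Single-character subscript (returns "" when out of range)
print(sentence[4])        // prints "q"
print(sentence[100])      // prints ""

// Half-open range: characters at indexes 4..<9
print(sentence[4..<9])    // prints "quick"

// Closed range: characters at indexes 4...8 inclusive
print(sentence[4...8])    // prints "quick"

// Out-of-bounds upper indexes are clamped to the end of the string
print(sentence[36...100]) // prints "lazy dog."
```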