Blog

How to Debug iOS Extensions using Xcode

This video is a tutorial showing how to debug an iOS share extension using Xcode. In the tutorial, the extension is created to accept documents from any application, and Microsoft Word for iOS is used as the document share source. The objective of the technique is to set breakpoints and view debug logs within the extension after Microsoft Word sends a document to the extension.

These techniques apply to many scenarios where you need to debug processes that are embedded in your main iOS application bundle but run outside the main app's process on the device.

The video demonstrates two techniques:

  1. Using the debugger to wait for the extension’s process to be started by the external application.
  2. Starting the debugger with the extension’s scheme.

Both techniques are useful, and the demo highlights some advantages enjoyed by the second alternative.
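
Whichever technique you use, having a little log output inside the extension makes the attached debugger session much more useful. Here's a minimal sketch using the unified logging system (the subsystem, category and function below are purely illustrative):

import os.log

// The subsystem and category strings are just examples; use identifiers that match your extension
let shareLog = OSLog(subsystem: "com.example.MyApp.ShareExtension", category: "sharing")

func handleIncomingItem(named name: String) {
    // Visible in Xcode's debug console once the debugger is attached to the extension process
    os_log("Received shared item: %{public}@", log: shareLog, type: .debug, name)
}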

Xcode Server Bot Email Triggers

Xcode server is typically run on a dedicated machine, e.g. a Mac Mini or cloud-based Mac server, so configuring the Xcode server to communicate the outcome of automations to the development team is critical to keeping forward progress on the development project.

In a previous post, I introduced Xcode Server bots and mentioned that an integration can fire an email trigger — for example to notify team members of successful integration completion or of exceptions encountered as the integration ran.

In that previous post I presented this diagram to illustrate an Xcode server automation that builds an app after a git branch is pushed to a remote repository on GitHub. Note the step in the bottom-right corner, which sends an email to the development team to communicate the outcome of the automation, or the overall status of the project.

Xcode Server automation example

What Do Email Triggers do for me?


An obvious choice to send status reports from Xcode server to development, QA and product teams is email, and Xcode server supports email reporting triggers out-of-the box.

Xcode Server provides two built-in email notification trigger types:

  1. New Issue Email. An email is sent when new issues are found. The intended recipient(s) for this type of trigger are the team members that introduced issues via source code contributions to the git repository.
  2. Periodic Email Report. A summary of current project progress, intended to be sent to the broader team responsible for product development.

Configuring an automation email trigger

Configuring a trigger is really easy. While on the last tab of the integration configuration screen:

  1. Tap the plus (+) button to create a new trigger.
  2. Select either New Issue Email or Periodic Email Report.
  3. Select the types of issues that should trigger a new email to each committer who introduced an issue.

In the case of a Periodic Email Report:

  4. Select how frequently the summary report should be sent: after each integration, daily, or weekly.

  5. Since the trigger is a periodic report, it can be sent to a broader audience, so you can add a list of email addresses/distribution lists, much as you would when addressing an email in a standard mail client.

What do I get out of this?

After the bot is configured and saved (which sends it to the Xcode server for scheduling), Xcode Server will send emails according to the configuration steps made above.

Email will be delivered to your inbox, and will look similar to the following example of a compile error summary:

Example Xcode Server email report

Configuring your Xcode server to send email

The above discussion is focused on configuring bots with triggers that send email. But you also need to configure Xcode server so it has an authorized path to deliver the mail it's been asked to send. I'll discuss how to do this in the next post in this series: Configuring Xcode Server to send email.

Related to This Post

Configuring Xcode Server to Send Email

In my previous post on Xcode Server, I discussed the Xcode Server feature to send issue notification and summary email messages, and how to configure email triggers as part of integration configuration. In this post, I'll discuss how to configure a macOS Xcode Server machine to actually route email messages to the development team.

The Documented Way to Configure Xcode Server

As of this writing, the Xcode 10 documentation provides a single — and sparse — manual page on configuring Xcode Server to send email via SMTP:

Xcode Server Email Configuration Documentation (as of Xcode 10)

This seems very straightforward: add your outgoing SMTP server information and "send from" details, and Xcode server will send email. It should just work, right?

There’s only one problem with this procedure…for many people — including me — this doesn’t work.

Most likely this simple configuration can work in some environments, but Apple provides no information in the documentation regarding prerequisites and what type of SMTP environment this configuration is designed for.

No worries, though — next I’ll cover how to configure your Xcode server to work with any type of SMTP-based email infrastructure.

Note: if you know how to get the default Xcode configuration to work with authenticated, TLS-based email back-ends, please let me know in the comments!

Configuring Xcode Server to Send Email via Postfix

If you've installed Xcode 10 on macOS Mojave (and probably earlier versions of both — though I'm only covering Xcode 10 here), your Xcode server hardware should already have a dormant Postfix installation. We'll use that existing install of Postfix to get email delivery going with Xcode server.

Note: I use SMTP2GO as my SMTP provider for Xcode server, so the following instructions are specific to that service. This same procedure can work with other providers like Gmail, Office 365, and other services that provide SMTP with or without an encrypted SMTP connection. Adapt the settings below as required to match the requirements of your SMTP provider.

#1 Don’t configure Xcode’s email options

First, don’t configure Xcode’s email settings at all — just leave them blank. On your Xcode server, open Xcode Server configuration and ensure all fields are left blank, as in the documentation image above.

#2 Add your SMTP server credentials to the Postfix password file

Next, configure your Xcode server Postfix installation with the username/password needed to authenticate with your SMTP server when sending email.

Create or modify the /etc/postfix/sasl_passwd file using your favorite editor, for example:

$ sudo nano /etc/postfix/sasl_passwd

Add a line that provides a valid username/password combination and corresponding server/port, with your own SMTP authentication credentials, similar to the following:

mail.smtp2go.com:2525 user@domain.com:password

#3 Run postmap against the password file

sudo postmap /etc/postfix/sasl_passwd

#4 Add Configurations to Postfix

Open the Postfix configuration file with your favorite text editor, for example nano:

$ sudo nano /etc/postfix/main.cf

Add the configuration lines required by your email provider. For SMTP2GO, I use the following:

relayhost = mail.smtp2go.com:2525
smtp_sasl_auth_enable=yes
smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd
smtp_use_tls=yes
smtp_tls_security_level=encrypt

Note that the relayhost in main.cf must exactly match the host:port in the password file created earlier.

It’s OK to add these configurations to the bottom of the main.cf file, but search the file and comment out any existing settings that would duplicate what you add.

#5 Start Postfix server on the Xcode Server machine

$ sudo postfix start

When Postfix is running, and your security configurations are correct — and the Xcode server mail settings are left blank — Xcode server will successfully send email when your email triggers are fired in Xcode integrations!

Related to This Post

CI/CD with Xcode Server 10

Xcode Server is a powerful and easy-to-use CI/CD solution that every Xcode 10 developer has already installed — even though many of them don't know it! In this article, I'll give an overview of the product architecture and discuss some pros and cons of using Xcode server vs. 3rd-party alternatives.

What does Xcode Server Do?

Like other CI/CD platforms, Xcode Server’s primary role is to automate the integration, analysis, unit testing, assembly and distribution of applications.

Typical use cases include:

  • Pulling an integration or distribution branch from a source code repo when commits are made (and/or on a nightly schedule).
  • Automatically running unit tests to verify that new code commits haven't introduced regressions or performance problems.
  • Running static analysis of code to detect issues before application assembly.
  • Building QA or production application packages (archiving, in Apple speak).
  • Distributing completed archives to internal (ad-hoc) or external (Test Flight, Crashlytics, etc.) endpoints.
  • Notifying the Development, QA and Product teams of new build status, completions and exceptions.
  • Doing all of the above continuously — perhaps several times per day — allowing developers to go back to work on their next tasks while these essential but repetitive tasks are completed by automated processes.

Xcode Server Build Process Overview

Many of the tasks Xcode server performs are fully baked-in automation steps. Others are custom scripting tasks that can be added to plug holes in the built-in capabilities — and to add entirely new process steps limited only by a developer's imagination.

Xcode Server History

First introduced by Apple with Xcode 5, Xcode Server is a first-party CI/CD solution — i.e. delivered and supported directly by Apple. When first introduced, Xcode server was one of many modules included in OS X Server (now known as macOS Server). In addition to CI/CD capabilities, OS X Server of that era included:

  • Email server
  • DNS Server
  • Git repository server
  • User profile management
  • And more…

Over time, Apple has pared back what is now branded macOS Server, removing many of the features that aren't specific to macOS — and were probably under-utilized or not needed by customers. Today, macOS Server remains as a system administration layer over macOS, while Xcode server has been relocated into the Xcode.app product.

Xcode Server in Xcode

With Xcode 9 and Xcode 10, Xcode server is integrated with Xcode, rather than integrated with macOS server. This has several advantages:

  • Every installation of Xcode also installs Xcode Server, so it can be used directly
  • No need to license or install macOS Server on the remote integration server
  • Overall tighter integration with Xcode
  • A more familiar user experience for developers configuring Xcode Server

With full Xcode integration, installing Xcode server really couldn't be easier — just install Xcode, and Xcode server is installed as well. All that's left is to enable it and select an integration user on a new tab within the Xcode preferences screen.

Enabling Xcode Server in Xcode 10 Preferences

Running Locally or on an Integration Server

Xcode Server is installed along with Xcode 10, so does that mean your own development workstation can be your CI/CD server? The answer is — Yes! This is certainly possible, and may make sense where a developer is working alone or with a very small team — but would still like to take advantage of integration automations rather than running tasks manually.

Running on a Dedicated Integration Server

Probably more common is for a team of developers working on a product to use Xcode server as an automated integration point. This scenario doesn’t change typical developer workflow too much.

A lead developer or Devops staff would install and configure Xcode Server on a dedicated Mac, and then most developers would push code updates to remote git branches. Xcode server would then run its bot magic either on git push events, or on a scheduled event, e.g. nightly integration tests and builds.

Xcode server supports Subversion as well as git, though the latter is more commonly used today.

Developers contributing to a shared Xcode Server Installation

When used as part of an integration server deployment, it’s most common to deploy a dedicated Mac on a LAN — for example a headless Mac Mini dedicated to the task of fetching committed branches and running integration bots.

When deploying an integration server on a LAN isn't a viable solution (e.g. remote teams), a Mac can be rented in the cloud — an economical way to deploy a headless Mac Mini in a professionally-managed data center. Popular Mac Mini hosting providers offering cloud-based hardware that even small teams can afford include:

Mac Stadium

Mac In Cloud

XCLOUD

Currently there isn't a platform-as-a-service (PaaS) offering for Xcode Server (a la Microsoft App Center or Circle CI). However, Apple's recent acquisition and curtailment of the Buddy Build PaaS provider raises the obvious question: "is there an Xcode Server PaaS offering under development?" As with most new Apple product development, the answer is: "Nobody outside Apple knows!"

Xcode Server Alternatives

Even for iOS/macOS developers, Xcode Server isn’t the only option available. Popular open source tools or commercial services that can serve as viable Xcode Server alternatives include:

Fastlane

Jenkins

Microsoft App Center

Circle CI

Xcode Server Advantages compared with Alternatives

  • Xcode Server is arguably the easiest-to-use CI/CD solution for iOS or macOS application development. The software is already installed with Xcode, and gnarly issues like certificate management and build scripting are — for straightforward use cases — automatic and painless for the developer.
  • Apple supports and regression tests updates to Xcode server along with the Xcode product.
  • Except for the cost of dedicated server hardware (which is optional), Xcode server requires no additional up-front or ongoing operating costs for development teams.
  • Xcode Server can run unit and UI tests on physical iOS devices. Simply attach test iPhone/iPad devices to the Xcode Server, and add them to the test integration for the bot to run. Simple.

Xcode Server Disadvantages compared with Alternatives

  • Xcode Server is not cross-platform, and supports only Apple OS target applications.
  • The lack of a PaaS offering (at time of this writing) means deploying Xcode Server requires you to provide hardware — either a Mac mini (or other type of Mac) of your own, or a Mac rented from a cloud provider. However, this is also true of open source alternatives such as Fastlane/Jenkins — which are on-premises software too. And the long-run cost of a Mac may not exceed the cost of commercial PaaS offerings such as Circle CI or Microsoft App Center.
  • Out-of-the-box, Xcode Server lacks some features found in other solutions. For example, Xcode Server (at time of writing) doesn't include a built-in integration step for external deployment (e.g. Test Flight) — so deploying a finished archive to Test Flight is possible, but requires a custom post-integration script.

Is Xcode Server for you?

As always, the answer is: "maybe". If the project you want to automate integration/testing/deployment for targets iOS or macOS, there's really no reason not to try Xcode Server. It's included with Xcode 10, and is a snap to set up and use.

Particularly if you’re new to CI/CD, you really can’t go wrong here — in the worst case you’ll get some experience with CI, probably cut down on some manual test/build work and better understand what features you need in a long-term solution if Xcode Server doesn’t fully meet your needs.

On the other hand, if you’re part of a fully-integrated cross-platform (e.g. iOS+Android) team that wants a unified solution for all development targets, Xcode Server might not be for you. While other solutions probably have a higher up-front learning curve and require more scripting to get going compared with Xcode Server, there are open source and commercial platforms that can provide cross-platform solutions where a unified CI/CD/Devops infrastructure is essential.

Related to This Post

The rise of Machine Learning on mobile platforms

The time for Mobile ML is here, and the possibilities are many. If you’ve not yet given much thought to how Machine Learning technology can make your mobile software better, now is the time!

Machine Learning has long been a big part of our lives (even if we don’t often think about it). Estimating a customer’s likelihood to pay a bill or ranking pages in a web search result are common ML implementations we use often but rarely think about.

In part due to the expense of processing power (CPU/GPU) and data storage requirements, ML has for decades been the domain of darkened data centers rarely seen by end-users. This is rapidly changing, and mobile developers now have a plethora of new tools and platforms to choose from to make their current mobile solutions more valuable and open up new solution possibilities.

We're in a golden era where all the mega-vendors providing mobile infrastructure are rolling out ML tools accessible to mobile developers. For example:

Apple CoreML

Amazon Machine Learning for Android & iOS

Google ML Kit for Firebase

Microsoft Custom Vision export to CoreML

IBM Watson Services for CoreML

All of these are excellent offerings. In future posts I’ll be reviewing many of them, highlighting their relative strengths and exploring use cases — so stay tuned!

What’s Machine Learning, anyway?

Machine Learning is an idea that has deep and ancient roots in computer science, dating back to the term’s coining by Arthur Samuel in 1959. But what is Machine Learning, and why is it now coming to mobile computing platforms?

Machine learning (ML) is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to “learn” (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed. — Wikipedia

The definition of ML brings to mind SkyNet in the Terminator movie series — where ML-enabled mobile devices run amok and plot to destroy humankind. But for the most part, ML is about using computational techniques to improve the effectiveness of software that addresses everyday computing problems for which conventional techniques have often failed.

Why is Machine Learning landing everywhere now?

ML has been around for decades, but the cost of computing power and data storage has kept it mostly locked in data centers with 7-digit budgets. Today, the CPUs and GPUs shipping with mobile devices have more computing power than web browsers and e-mail clients really need. Using that extra capacity to drive ML techniques on mobile is now possible. The answer to “Why now?” is simply: “Because now we can.”

From a programming point of view, ML is all about applying our current abundance of CPU/GPU power to solve problems that aren’t efficient or possible using traditional, declarative programming techniques.

The rapidly increasing computing capacity of mobile devices creates the opportunity to bring ML right down to the device level. The mobile platform providers mentioned above (and others) are responding with solutions that are at once compelling and cost-effective, usable at any layer of our tech stack — from server to mobile phone.

How Machine Learning Works (essentially)

ML is a broad, broad topic — much broader than I can cover in a single blog post. But from a software engineering point of view, ML is really about using statistical likelihood rather than deterministic procedural code to calculate answers given some input data.

ML without ML

Let’s say we didn’t have ML, and just wanted to classify images by writing some code. Our initial pass might look something like this:

func identifyObjectInPhoto(image: UIImage) -> String {
    // imageHasLotsOf, imageHasSun, etc. are hypothetical helpers we'd have to write ourselves
    if imageHasLotsOf(UIColor.blue) {
        if imageHasSun() {
            if imageHasDiagonalLines() {
                return "Mountain"
            } else {
                return "Sky"
            }
        }
    } else if imageHasLotsOf(UIColor.green) {
        if imageHasVerticalBrownLine() {
            return "Tree"
        } else {
            return "Grass"
        }
    }
    return "I have no idea"
}

This code might actually work — sometimes. But it would fail too often to be reliable. Trying to replicate the human brain’s ability to recognize an image using procedural code alone is doomed to failure. Even if we could do it, our project sponsors couldn’t afford to pay us to develop this type of solution.

ML with ML

The Machine Learning alternative uses statistics to build a mathematical model (typically using an Artificial Neural Network algorithm). Basically, the network "trains itself" to classify images using a large "training data set" of images.

At the most basic level, the neural network is actually kind of like the coding solution. It’s not code, but it would still develop a sort of evaluation algorithm that finds correlations between image traits it observes and “the right answer” — e.g. “Tree”, “Mountain”, “Grass”, etc.

The biggest difference is that the machine (the computer) uses training data to learn which traits should be paid attention to in order to classify the image.

The machine learning process that contrasts with the above code looks like this:

There could be literally millions of branches in the logic that correctly classifies an image — more than a human programmer could ever produce by writing code. But the training process might take only a few minutes to build the model.

When building the model, the Machine Learning process follows a somewhat brute-force, trial-and-error process to find a set of tests that accurately predicts what the image is.

Note that predicts is a key word here, since ML models that are 100% accurate are actually rare.

Not Just for Images

Most mobile ML examples deal with the image classification domain — for example, determining whether a landscape photo shows a tree or a mountain, as above. And this is an important domain for mobile — image data is notoriously difficult to work with as a data processing source, and mobile devices have cameras that serve as excellent data collection devices.

But the same training and model deployment strategy can work for all kinds of data. For example, text recognition is essentially accomplished in exactly the same way:

Text recognition begins by allowing a Machine Learning training process to examine lots and lots of letters in lots and lots of fonts (and even hand-written text), to build a model that can predict which symbol (letter, in Latin text) each image of a letter actually is.

As with images, text recognition isn’t perfect — we’ve probably all seen OCR output that misspells words! But ML models can be improved over time by feeding new data into subsequent training iterations, along with continued evolution in ML research and increases in CPU/GPU power.

Does ML live On the Mobile Device or in the Cloud?

This article started with the idea that ML historically has lived in the data center due to high CPU/GPU and high data storage requirements. Does all this move onto mobile now that devices are so much more powerful than before?

This question really has an "it depends" answer, or maybe a "yes and no" answer. In practice, many ML models — once trained — are not large or processor-intensive, and can easily live "on the device". In other words, it's now very common to take the output asset of the ML training process and embed it into a mobile application.

Why put the model “on the device”?

Several good reasons:

  • If the model is on the device, it doesn't need an Internet connection to be used. This can be critically important in many applications (especially B2B and commercial deployments where work is done without Internet connectivity).
  • Hosting ML models in the cloud isn’t free — so if the model can be embedded, both the hosting cost and cost of systems administration are eliminated.

Apple’s Core ML architecture, for example, only (at time of this writing) supports models that are deployed to the device, i.e. embedded with the application (though models can be originally created in a variety of ways).
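
To make "on the device" concrete, here's a minimal sketch of classifying an image with a bundled Core ML model via the Vision framework. MobileNet is just a stand-in for whatever model class Xcode generates from your .mlmodel file:

import UIKit
import CoreML
import Vision

func classify(_ image: UIImage) {
    // MobileNet is a placeholder for the class Xcode generates from a bundled .mlmodel file
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The first observation is the model's most confident classification
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("\(best.identifier) (confidence: \(best.confidence))")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}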

Why put the model “in the cloud”?

While ML models can, and often should, be embedded on device, there are some scenarios where it would make more sense for them to live in the cloud:

  • Typically, models that are deployed on device can’t be trained after they’re deployed. If your ML model needs frequent training, letting the mobile device app use the model as a remote resource at the end of a cloud API may make more sense.
  • Though many (maybe the majority of) ML models are relatively compact and "fit" in a modern mobile device's storage, some may be very large — and others may require more CPU/GPU power than exists on an end-user mobile device.

Can I have it both ways?

Yes, of course. Platform suppliers are already iterating their architectures to allow models to be used on device and/or in cloud data centers. For example, Apple itself provides tools to translate server-based models to Core ML for iOS distribution (you can still use the model on the server).

Others are developing architectures where local models can be used as a “fallback” when cloud models are unavailable. In this scenario, the most recently trained server model would be used whenever available, but if unavailable then an older iteration local model could be used as backup.

The best solution of course depends on the application’s needs — and a variety of factors.

Summary and Call to Action

If your company develops mobile apps for in-house or customer use, and you've not yet given much thought to how Machine Learning technology can make your software better, now is the time! The tools, platforms and tech for integrating ML into your app have never been better, more affordable or more accessible than they are today.

How could you use ML to make your app better? Start brainstorming with these:

  • Use text recognition to allow users to enter data with the camera rather than a virtual keyboard.
  • Add barcode scanning to your application to remove barriers to data entry.
  • Use a conversational bot to enable resolution and/or intelligent routing of customer service requests right from your mobile app.
  • Use image classification to recognize products visually (even products that don’t have barcodes).
  • Use image landmark detection to provide context-specific information to mobile users.
  • Train an ML model to recognize product failures (or possible failures) based on installed product photos, allowing customers to self-diagnose product failures and identify preventative maintenance oversights.
  • And many more!

The time for Mobile ML is here, and the possibilities are many. With ML being addressed in complementary ways on the platform side (IBM, Microsoft, AWS) and on the device side (Apple, Google), the stars are truly aligning for ML on mobile.

Share your thoughts! What are you using Mobile ML for, or what would you like it to do for your app?

Using WebKit to call WKWebView Javascript from Swift and Swift from Javascript

Many mobile applications incorporate remote web pages, either as passive (static) content — or as in this case as integral parts of the UI. Using the WebKit/WKWebView techniques presented here, your native apps can be better integrated with web content and provide a superior experience to end-users.

Two-way Integration between Swift and JavaScript

In this article we’ll build a full working example of a hybrid native/web application that uses two-way function calls between a native iOS app (written in Swift) and a mobile web page (written in HTML/JavaScript).

Leveraging these two features allows us to build a highly robust, hybrid application where the native and web components cooperate as equal partners in delivering a valuable customer solution.

Solution Overview

The finished solution consists of two components:

  1. A native iOS application, developed in Swift
  2. A static HTML/JavaScript web page on a remote web server (hosted in Microsoft Azure in this case).

The finished learning app implements three main features:

#1 — Loading the web page from the remote server. If you've used a WKWebView, you know all about this feature. As the UIViewController loads, a web page URL is passed to the WKWebView, which uses an HTTP GET to fetch the HTML content.
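
As a quick refresher, the loading step boils down to something like the following sketch (the URL is a placeholder for wherever the page is actually hosted, and webView is assumed to be a WKWebView already added to the view hierarchy):

// Placeholder URL -- substitute the address of your hosted page
if let url = URL(string: "https://example.azurewebsites.net/hybrid-demo.html") {
    webView.load(URLRequest(url: url))
}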

#2 — Manipulate WebView appearance from Swift. Next we gently wade into the interop waters by sending a command to the WKWebView content page to change the page background color according to a user selection in a native Segment control.

#3 — Callback to Swift from HTML/JavaScript. Finally, we make the solution more complex and interesting by exposing a geolocation function in the native iOS application to the web page. When the user enters an address and presses a button on the web view page, the following happens:

  1. The web page (using JavaScript) calls a Swift function, passing in the user-entered address as a JSON object.
  2. The Swift native app makes an asynchronous call to Apple's geocoding service using CoreLocation's CLGeocoder, to determine the latitude & longitude of the user-entered address.
  3. When the latitude/longitude are returned from Apple, the Swift native app calls a JavaScript function in the web page to update the web page with the latitude/longitude for the entered address.

Solution Demo

Before walking through the code, let’s demo what the completed application looks like (animated GIF).

UI Storyboard Design

The learning application contains a single UIViewController named ViewController. ViewController has only two UI controls in the Storyboard:

  1. A UISegmentedControl which allows the user to change the WebView background color to one of five colors.
  2. A UIView, which is placed in the Storyboard to serve as a container view for the WKWebView control.

Changing Web Page Color

To wade into the hybrid solution water, let’s implement a simple call from Swift to the WKWebView.

ViewController has a member array of colors corresponding to the color choices in the Segment control at the top of the native view.

let colors = ["black", "red", "blue", "green", "purple"]

When the user taps a new segment in the Segment control, an event handler calls the JavaScript function changeBackgroundColor, passing the string corresponding to the user selection:

@IBAction func colorChoiceChanged(_ sender: UISegmentedControl) {
    webView.evaluateJavaScript(
        "changeBackgroundColor('\(colors[sender.selectedSegmentIndex])')",
        completionHandler: nil)
}

The Swift code doesn't really know that the web page has a JavaScript routine named changeBackgroundColor. Its job is simply to format a JavaScript fragment that will run successfully in the WebView.

The HTML content in the WKWebView has the matching JavaScript routine, which simply sets the background color of the page to the string passed to it from Swift:

function changeBackgroundColor(colorText) {
    document.body.style.background = colorText;
}

Setting up a Message Handler

The next feature is to send a user-entered address from the HTML page to the native Swift app for geocoding. There are three steps to implement this feature:

  1. Add a message handler to the WKWebView’s WKUserContentController. This establishes a contract that promises that the Swift code can respond to the named message handler when it’s called from the HTML page via JavaScript.
  2. Implement the WKScriptMessageHandler delegate method didReceive message to receive the call from JavaScript.
  3. Call the message handler from the web content JavaScript.

Create a Message Handler (1)

// A
let contentController = WKUserContentController()
contentController.add(self, name: "geocodeAddress")
// B
let config = WKWebViewConfiguration()
config.userContentController = contentController
// C
webView = WKWebView(frame: webViewContainer.bounds, configuration: config)

A WKUserContentController is created at (A). The contentController holds the registration of the geocodeAddress message handler.

The WKUserContentController is added to a new WKWebViewConfiguration at (B).

Finally (C), as the WKWebView is instantiated, the configured WKWebViewConfiguration created in (B) is passed in to the initializer.

Implement the WKScriptMessageHandler delegate (2)

Now that the geocodeAddress handler is registered with the WKWebView, we need to implement a delegate method, which is triggered when the web content posts a message to that handler.

In this solution, an extension is defined to implement the WKScriptMessageHandler protocol on the ViewController class.

extension ViewController: WKScriptMessageHandler {
    func userContentController(
            _ userContentController: WKUserContentController,
            didReceive message: WKScriptMessage) {
        if message.name == "geocodeAddress",
           let dict = message.body as? NSDictionary {
            geocodeAddress(dict: dict)
        }
    }
}

The didReceive handler checks whether the message name is as expected (geocodeAddress), and if so extracts the JSON object from the message body (as an NSDictionary), and calls the ViewController instance method geocodeAddress.

Note that the message handler is stringly typed, so be careful that the string comparison in didReceive properly matches the original message handler registration made with the WKUserContentController.

Calling geocodeAddress from the HTML/JavaScript page (3)

In HTML, the form’s INPUT button calls a JavaScript function called geocodeAddress:

<input type="submit" value="Geocode Address" onclick="geocodeAddress();">

The body of the JavaScript geocodeAddress function responds by calling the Swift Message Handler of the same name, passing in address details as a JSON object.

function geocodeAddress() {
    try {
        webkit.messageHandlers.geocodeAddress.postMessage(
            {
                street: document.getElementById("street").value,
                city: document.getElementById("city").value,
                state: document.getElementById("state").value,
                country: document.getElementById("country").value
            });
        document.querySelector('h1').style.color = "green";
    } catch(err) {
        document.querySelector('h1').style.color = "red";
    }
}

Note: In the JavaScript geocodeAddress() function, the H1 style changes are merely here for testing purposes and are not part of the actual solution.

Passing back Latitude/Longitude to the HTML page

So far, the HTML page has accepted an address entry from the user in a series of INPUT fields, and sent it to the native Swift application. Now let’s complete the final requirement — geocoding the address and returning it to the web page UI.

Recall that the Swift message handler calls a Swift function called geocodeAddress(dict:) to do the heavy-lifting of geocoding the address.

func geocodeAddress(dict: NSDictionary) {
    let geocoder = CLGeocoder()

    let street = dict["street"] as? String ?? ""
    let city = dict["city"] as? String ?? ""
    let state = dict["state"] as? String ?? ""
    let country = dict["country"] as? String ?? ""

    let addressString = "\(street), \(city), \(state), \(country)"
    geocoder.geocodeAddressString(
        addressString,
        completionHandler: geocodeComplete)
}

This part of the solution is straightforward CoreLocation. After the geocodeAddressString asynchronous function sends the address to Apple, the response is provided to the Swift method geocodeComplete:

func geocodeComplete(placemarks: [CLPlacemark]?, error: Error?) {
    if let placemarks = placemarks, placemarks.count > 0 {
        let lat = placemarks[0].location?.coordinate.latitude ?? 0.0
        let lon = placemarks[0].location?.coordinate.longitude ?? 0.0
        webView.evaluateJavaScript(
            "setLatLon('\(lat)', '\(lon)')", completionHandler: nil)
    }
}

This method checks that at least one placemark was found for the provided address, extracts the latitude and longitude from the first placemark, and then sends them back to the HTML page by calling its setLatLon JavaScript function.

Updating the HTML page

The process of sending the latitude/longitude back to the web page is functionally identical to the previous feature which set the background color.

The setLatLon JavaScript function is implemented as follows:

function setLatLon(lat, lon) {
    document.getElementById("latitude").value = lat;
    document.getElementById("longitude").value = lon;
}

As with the background color function, setLatLon simply sets the HTML form’s INPUT field values to the passed parameter values.

Summary

The most common use of WKWebView is to provide a simple display of web content within the context of a native iOS application — but it can do much more. In this article we've seen how to combine web and native components to build enhanced native applications, or even hybrid native/web applications.

Download the Code

The above fragments provide the core functionality for the learning solution. The full Xcode project can be downloaded from GitHub here.

Flexible and Easy Unit Testing of CoreData Persistence Code

Modern and high-quality iOS applications are expected to perform flawlessly. An important input to ensuring flawless, regression-resistant code is to add comprehensive unit and integration testing as part of the development process. This article steps through a methodology for building repeatable, automated database unit tests for iOS applications using CoreData as their persistence layer.

Intended Audience

This article assumes you know the basics of using CoreData in an iOS application, and have probably used it in your own work. However the focus of this article is architectural, and even if you don’t know how to code with CoreData, the concepts here should still make sense if you understand the basics of data persistence and unit testing in iOS.

Code Samples

The code and concepts in this article were developed with Xcode 10 (beta) and Swift 4.2.

This article includes code excerpts to illustrate the concepts, but rather than embed all the code for this solution within the article text, a link to an example application in my GitHub repository is provided at the end of this article.

What is CoreData?

CoreData is the default local persistence choice for iOS (and macOS) applications. Fundamentally, it is an object-relational mapping (ORM) layer over a persisted data store. While the physical storage of CoreData objects is abstracted from the developer, CoreData is almost always used with SQLite.

If you’re new to CoreData, or just need a refresher, there are many great resources out there, such as Apple’s own Core Data Programming guide, and Getting Started with Core Data Tutorial guide at RayWenderlich.com.

How CoreData Fits in an iOS Application

The following is a highly simplified diagram of how a typical application accesses CoreData. I’ll discuss each element of the architecture below.

AppDelegate. This object represents the entry point of an iOS application, and should already be familiar to all iOS developers. If you create a project with the Use CoreData option in Xcode 10, Xcode will create a basic CoreData stack for you. Within the AppDelegate object, you'll find the following property declared.

This property is, essentially, the hook your application uses to access data managed by CoreData.

class AppDelegate: UIResponder, UIApplicationDelegate {
    .
    .
    lazy var persistentContainer: NSPersistentContainer = { ... }
    .
    .
}

An NSPersistentContainer property has within it a setting that specifies whether its data should be saved to SQLite disk files (NSSQLiteStoreType), memory (NSInMemoryStoreType) or somewhere else (NSBinaryStoreType). The latter case is uncommon, and I won’t discuss it in this article. When no setting is specified (the default), NSSQLiteStoreType is used by CoreData to configure the container.

<projectname>.xcdatamodel. When creating a project with CoreData support, Xcode will automatically create a data model file, with a root name matching the new project name and the extension xcdatamodel. The Xcode data model editor stores your evolving design in this file, and uses this metadata file to generate low-level CoreData entity model classes for you. In Xcode 10, the generated model classes will automatically be available to your XCTest target (which was not the case in some older versions of Xcode, so yay!).

StorageManager. While it's certainly possible and acceptable to access CoreData and the auto-generated entity model classes directly throughout your application, it's quite common to encapsulate data operations in a service class, and that's what I've done in this architecture. This approach simplifies data access code for the rest of the application, and provides some degree of encapsulation in case the underlying database physical layer changes in the future.

As the StorageManager object is initialized (refer to the red circle numbers in the diagram called out in these bullets):

  • It uses the .xcdatamodel (1) generated model classes to perform underlying database access.
  • It will use the global persistentContainer object (2) instantiated in the AppDelegate class, which uses the default SQLite (3) backing for data storage.

Production App Code (e.g. ViewController). This box in the diagram represents wherever data is fetched or saved within the app. This may be code within a View Controller, View Model, or other classes you write yourself. In this architecture, all such accesses are made by calling methods of the StorageManager object, rather than interacting directly with CoreData.

SQLite DB. In the production app, StorageManager fetches and makes database changes to physical files stored in the App’s sandbox, indicated by (3) in the above diagram. These changes are not in RAM, and the database persists between runs of the program.

The main goal for this article is to create a hybrid architecture where the persistent SQLite database is used for the production app, while a volatile in-memory database is used for unit testing.

Repeatable Unit Tests vs Persistent Disk Storage

A basic requirement for unit tests is that the application state should be exactly the same at the beginning of each run of a unit test. Having a disk-based SQLite database presents a challenge to this requirement: the database files are by definition persistent, so each test run affects their state, and there's no guarantee that every test starts from the same baseline.

That said, we could simply and easily add unit tests to the project, using the existing CoreData configuration. The resulting architecture would be as follows:

In this approach both the production app and the unit test target use the same StorageManager and xcdatamodel generated model classes. This is good because the data access objects and calling methods are unchanged.

The problem, though, is that both app and test targets will use the same Container type, which is configured with the default SQLite setting (1), resulting in the unit tests using a disk-based data store (2) that won’t start in the same state for all test runs — without writing additional pre-test initialization code.

Unit Testing with a SQLite-backed container

We could deal with this challenge by reinitializing the database, perhaps in one of the following ways:

  1. Truncate all tables
  2. Delete and recreate the disk file(s) associated with the database before each unit test (sketched below)
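
For a sense of what that additional code looks like, the second approach might be sketched as follows using NSPersistentStoreCoordinator's destroyPersistentStore API (the function below is purely illustrative and not part of the sample project):

import CoreData

// Hypothetical pre-test reset: destroy the on-disk store(s), then reload a fresh, empty store
func resetPersistentStore(in container: NSPersistentContainer) throws {
    let coordinator = container.persistentStoreCoordinator
    for store in coordinator.persistentStores {
        guard let url = store.url else { continue }
        try coordinator.destroyPersistentStore(at: url, ofType: NSSQLiteStoreType, options: nil)
    }
    // Recreate an empty store at the same location
    container.loadPersistentStores { _, error in
        if let error = error {
            fatalError("Failed to reload store: \(error)")
        }
    }
}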

Either approach may be reasonable, and should ensure the state of all disk files is the same before every unit test. But each requires additional code to achieve, and may need additional maintenance as the database evolves over time. If only there was an easier way — and there is!

By leveraging CoreData’s container abstraction, we have a third — and more elegant — approach that requires no physical disk file manipulation at all.

Using In Memory Persistence for Unit Tests

To give unit tests a clean, consistent environment before each test begins requires only a minor change to the existing code base. In fact, if you compare the architecture diagram below to the previous one, you’ll note that there are no additional code modules.

The coding change is to create a custom NSPersistentContainer within the Unit Test code — one which continues to use the xcdatamodel-generated CoreData model classes, but provides a PersistentContainer configured to use a volatile, in-memory persistent storage component. This is where CoreData's abstraction between the programming model and physical storage model comes into play.

When the Unit Test is run, the custom, in-memory backed container is passed to the Storage Manager (1), which is configured for in-memory data storage.

By contrast, the production app initializes a StorageManager without passing a Container object. In this case, StorageManager uses the Container configured in AppDelegate (2), which uses the default SQLite container type.

CoreData will use SQLite or in-memory for database access automatically (3) depending on the container configuration.

NSPersistentContainer initialization in the production App

The key to making this strategy work is to initialize StorageManager differently depending on whether it’s being used from the main App target or the Unit Test target. The following are simplified versions of the initializations for each case.

When the production app target accesses the database, it always uses the persistentContainer created as a global property of AppDelegate, illustrated in the following abridged code excerpt.

Note that this initialization is very simple, and CoreData will use its default SQLite storage configuration.

Abridged AppDelegate excerpt

class AppDelegate: UIResponder, UIApplicationDelegate {
    lazy var persistentContainer: NSPersistentContainer = {
        let container = NSPersistentContainer(name: "CoreDataUnitTesting")
        container.loadPersistentStores(completionHandler: { (storeDescription, error) in
            .
            .
            .
        })
        return container
    }()
}

To use this default SQLite CoreData stack, application code needs only to create a StorageManager instance and call its methods. StorageManager will use the AppDelegate.persistentContainer whenever a custom container is not provided.

Abridged ViewController excerpt

class ViewController: UIViewController {
    @IBAction func saveButtonTapped(_ sender: Any) {
        let mgr = StorageManager()

        if let city = cityField.text, let country = countryField.text {
            mgr.insertPlace(city: city, country: country)
            mgr.save()
        }
    }
}

NSPersistentContainer initialization in a Unit Test

When data is accessed by a unit test, the unit test target creates its own custom Container, then passes it to the StorageManager class initializer.

StorageManager doesn’t know that the persistent layer will be in-memory (and it doesn’t care). It just passes the container it’s given to CoreData, which handles the underlying details.
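
For reference, a StorageManager along these lines might look like the following sketch. The method names mirror the ones used elsewhere in this article, but the entity name (Place) and other details are assumptions; the real class in the sample project may differ:

import CoreData
import UIKit

class StorageManager {
    let container: NSPersistentContainer

    // Unit tests inject a custom (in-memory) container here
    init(container: NSPersistentContainer) {
        self.container = container
    }

    // Production code uses this convenience initializer, which falls back to
    // the SQLite-backed container created in AppDelegate
    convenience init() {
        let appDelegate = UIApplication.shared.delegate as! AppDelegate
        self.init(container: appDelegate.persistentContainer)
    }

    // Place is assumed to be an entity class generated from the xcdatamodel
    func insertPlace(city: String, country: String) {
        let place = Place(context: container.viewContext)
        place.city = city
        place.country = country
    }

    func fetchAll() -> [Place] {
        let request: NSFetchRequest<Place> = Place.fetchRequest()
        return (try? container.viewContext.fetch(request)) ?? []
    }

    func save() {
        try? container.viewContext.save()
    }
}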

The following is a simplified example of the Unit Test class.

CoreDataUnitTestingTests Excerpt

class CoreDataUnitTestingTests: XCTestCase {

    // This class instantiates its own custom storage manager, using an in-memory data backing
    var customStorageManager: StorageManager?

    // Using the in-memory container for unit testing requires the xcdatamodel to be loaded from the main bundle
    var managedObjectModel: NSManagedObjectModel = {
        let managedObjectModel = NSManagedObjectModel.mergedModel(from: [Bundle.main])!
        return managedObjectModel
    }()

    // The customStorageManager specifies in-memory storage by providing a custom NSPersistentContainer
    lazy var mockPersistentContainer: NSPersistentContainer = {
        let container = NSPersistentContainer(name: "CoreDataUnitTesting", managedObjectModel: self.managedObjectModel)
        let description = NSPersistentStoreDescription()
        description.type = NSInMemoryStoreType
        description.shouldAddStoreAsynchronously = false

        container.persistentStoreDescriptions = [description]
        container.loadPersistentStores { (description, error) in
            .
            .
            .
        }
        return container
    }()

    // Before each unit test, setUp is called, which creates a fresh, empty in-memory database for the test to use
    override func setUp() {
        super.setUp()
        customStorageManager = StorageManager(container: mockPersistentContainer)
    }

    // Example of how a unit test uses the customStorageManager
    func testCheckEmpty() {
        if let mgr = self.customStorageManager {
            let rows = mgr.fetchAll()
            XCTAssertEqual(rows.count, 0)
        } else {
            XCTFail()
        }
    }
}

Note the following points in the preceding code sample:

  1. A key difference is the NSPersistentContainer definition vs. the AppDelegate version. This version overrides the default SQLite storage behavior with the optional in-memory storage.
  2. Since the xcdatamodel used for testing is part of the main app bundle, it's necessary to reference it explicitly by initializing an NSManagedObjectModel. This was not necessary in AppDelegate, since the model and container exist in the same namespace.
  3. The initialization of StorageManager includes the in-memory container, whereas in the previous ViewController code, StorageManager’s convenience initializer that takes no parameters was used to initialize the CoreData stack with the default SQLite container.

Summary

While there's always more than one way to achieve a solid testing architecture, and this isn't the only good solution, this architectural approach has some distinct advantages:

  1. By using in-memory (rather than SQLite) for unit testing, we know for certain that there are never remnants of prior tests included in the database that we’re testing code against.
  2. Using in-memory eliminates the need to write and maintain code that clears data objects or deletes physical files before tests run. By definition, we get a fresh, new database for every run of every unit test.
  3. If we’re already using a StorageManager pattern to encapsulate CoreData calls (which is a good practice anyway), this pattern can be applied to existing projects merely by adding a convenience initializer to the StorageManager object!
  4. This approach can be achieved entirely using out-of-the-box Xcode and iOS SDK components.

Get the Code

The code for a full, runnable sample application that incorporates the above architecture is available in my GitHub account. Use this for further study of this technique, and/or as a boilerplate for your own projects.

GitHub CoreDataUnitTesting Repository

My Favorite WWDC 2018 Sessions

Every year I look forward to WWDC — it's like Christmas morning for Apple developers, where we get to take the wrapping paper off the next version of Xcode and the various iOS, tvOS, macOS and watchOS SDKs.

This year is no different! The press focuses more on the operating systems themselves, but I'm a lot more interested in what SDK goodness is coming down the line to provide more tools and hooks for building even better software. 2018 hasn't disappointed at all!

Here are the top five sessions I saw in terms of value to me personally this year:

Platform State of the Union

Always my first stop, to get the executive vision of where the platform is heading.

https://developer.apple.com/videos/play/wwdc2018/102/

Practical Approaches to Greater App Performance

A high-value session based on real-world experience, packed with practical techniques for improving app performance.

https://developer.apple.com/videos/play/wwdc2018/102/

Building Faster in Xcode

Lately I've been working on more complex projects, developing frameworks and generally working in larger codebases. This session was quite enlightening in terms of how to sort out dependencies and speed up the build process.

https://developer.apple.com/videos/play/wwdc2018/102/

What’s New in Swift

As Swift continues to evolve, yet mercifully more slowly now, we have to keep up! Last year I developed instructional content for Packt Press, which required me to really understand every nuance of Swift, and I'm always eager to learn and start using the new language features.

https://developer.apple.com/videos/play/wwdc2018/102/

Introduction to Siri Shortcuts

I did some work with Siri in the past, and have to admit being disappointed it was so limited to specific domains (none of which I work with!). I’m really excited to see Siri start to branch out, and found this session really informative.

https://developer.apple.com/videos/play/wwdc2018/102/

Creating simple frame animations for Android using Kotlin

User Interface Animation is a technique that can really make any mobile application pop off the screen, making almost any app feel more fluid and engaging. This article is a walk-through for using Android’s AnimationDrawable to add simple frame animations.

What is AnimationDrawable?

AnimationDrawable is a built-in Android class (since API Level 1) used to create frame-by-frame animations with a list of Drawable objects as the source for each frame in the Drawable Animation.

While any Drawable resource can theoretically be used with AnimationDrawable, it’s most often used with raster images (e.g. png files) — which is what I’ll demonstrate in this walk-through.

What We’ll Build

In this walk-through, I'll build an application that shows an animation of a robot walking. This is a simple frame animation that has a little fun with the UI while demonstrating how frame animations can still look fluid. Here's the completed UI:

Note: this is an animated GIF; if using the Medium app on a mobile device, open this article in a browser to view the animation.

Robot Walker App Demo

To make following along easier, the source code for the completed application can be downloaded from my GitHub account here.

The AnimationDrawable Class

AnimationDrawable was added in API version 1, so this is a technique that will work with virtually any Android application. Using AnimationDrawable is fairly simple. The overall process is as follows:

  1. Create a Drawable resource in your application, which contains a list of item elements (one per frame).
  2. Assign the Drawable in step 1 to the container element where it will appear — commonly this is the background of an ImageView.
  3. Call the start method on the AnimationDrawable to begin the frame animation.

Note: a common mistake when using AnimationDrawable is to attempt to start the animation in the onCreate method — before the AnimationDrawable is fully attached to the Window. When this is done, typically the first frame is displayed, but the image doesn’t animate. The Android documentation provides the following warning:

“Note: Do not call this in the onCreate(Bundle) method of your activity, because the AnimationDrawable is not yet fully attached to the window. If you want to play the animation immediately without requiring interaction, then you might want to call it from the onWindowFocusChanged(boolean)method in your activity, which will get called when Android brings your window into focus.”

In the walk-through app, I’ll be calling start from a button press handler, which is also a perfectly safe way to approach this.

Creating the Drawable Resource

Creating the resource is fairly straightforward. For the RobotWalker application, a resource is added to the Drawable folder containing a single animation-list element, which in turn contains one item per animation frame.

<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
    android:oneshot="false">

    <item
        android:duration="100"
        android:drawable="@mipmap/robot_start_1"/>

    <item
        android:duration="100"
        android:drawable="@mipmap/robot_start_2"/>

    <item
        android:duration="100"
        android:drawable="@mipmap/robot_start_3"/>
[etc...the sample app has about 40 frames]
</animation-list>

Each item contains a drawable key that specifies a related Drawable object to use for that frame, and a duration (in milliseconds) for that frame to be displayed before the animation moves to the next frame.

The oneshot attribute at the animation-list level is set to true when the animation should play once and then stop, or false when the animation should repeat from the first frame after it reaches the end.

Using the Drawable Resource in Kotlin

With the Drawable created in the res/drawable folder, all that’s left is to use the resource in your program.

Within activity_main.xml of the source project, I’ve added an ImageView (#1) and two Button objects: one to start the animation from the beginning (#2), and the other to stop animating (#3). The design view of the main activity is as follows:

Setting the AnimationDrawable as the background for the ImageView is the simplest and most common approach — which is what I’ve done here.

The final step is to add a listener for each button, and then to call the start and stop methods within the listeners.

The final Kotlin code is as follows:

package robkerr.com.robotwalker

import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.graphics.drawable.AnimationDrawable
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        startWalking.setOnClickListener {
            val bgImage = imageView.background as AnimationDrawable
            bgImage.start()
        }

        stopWalking.setOnClickListener {
            val bgImage = imageView.background as AnimationDrawable
            bgImage.stop()
        }
    }
}

I hope this how-to gets you started using simple frame animations that work with any version of Android! If it was helpful, please tap the clap button and let me know!

To download the referenced project source, click on this link to my GitHub account.

Understanding UI Testing using iOS, Xcode 9 and Swift

Xcode provides a fully-featured, scriptable UI Testing framework. A key to using the framework is understanding its architecture and how to best leverage its capabilities.

Understanding an Xcode UI Test

When you create a new project in Xcode, the new project wizard asks if you’d like to Include Unit Tests, and whether you’d like to Include UI Tests.

Xcode Test Target Selection

One might wonder — is a UI Test not a Unit test? If not, then what is it?

Actually, these checkboxes and their outcomes are primarily there to inform Xcode which targets to create within your project. Each checkbox, when checked, generates a different type of test target in your project.

The fundamental differences between an Xcode Unit Test and an Xcode UI Test:

  • Unit Tests are used to test that source code generates expected results. For example: ensuring that a function, when passed a specific parameter, generates some expected result.
  • UI Tests test that a user interface behaves in an expected way. For example: a UI Test might programmatically tap on a button which should segue to a new screen, and then programmatically inspect whether the expected screen did load, and contains the expected content.

Both Unit Tests and UI Tests support full automation, and enable regression testing of applications over their lifecycle.

Generally speaking, an Xcode Unit Test exercises and evaluates your Swift code, but does not inspect the impact it has on the user interface, while an Xcode UI Test evaluates the UI behavior in response to actions, but does not inspect your code.

As always, these are generalized statements that have exceptions. It is certainly possible to get some insight into the UI from a (code) Unit Test, and to get some insight into the state of your code objects from a UI Test. But the tools are designed to be used according to this generalized statement.
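To make the contrast concrete, here is a minimal, hypothetical Xcode Unit Test (the PriceFormatter type is invented for this sketch and defined inline so the example stands on its own). It calls code directly and asserts on the returned value; no UI is launched or inspected:

import XCTest

// Hypothetical type under test, defined here only so the sketch compiles by itself.
struct PriceFormatter {
    func string(from dollars: Int) -> String {
        return "$\(dollars).00"
    }
}

class PriceFormatterTests: XCTestCase {
    func testFormatsWholeDollars() {
        // Exercise the code directly and check its return value; no UI is involved.
        XCTAssertEqual(PriceFormatter().string(from: 42), "$42.00")
    }
}

A UI Test, by contrast, never calls PriceFormatter at all; it can only observe whatever that code ultimately puts on the screen.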

Example of a UI Test

Before examining the architecture and implementation of UI Test, let’s take a look at a finished test in operation. The user story for this test is as follows:

On the first screen, the user can select a cell within a table view, which opens a second form showing the selected value in a label. The user can then key in a new value, in a text box beneath the original label. When the user subsequently returns to the first form, the new value will be shown in the Table View.

If a QA tester were to manually check this process they would do the following sequence:

  1. Launch the app
  2. Tap on a row
  3. Observe that the tapped row’s text is shown on the second form when it loads
  4. Type in a new value in the text field
  5. Press the back button
  6. Observe the value they typed has replaced the original text in the table view

The manual testing process would look as follows (this is an animated .gif — if using the Medium app, you may need to open this page in a browser to view the animation).

UI Test Process

Wouldn’t it be nice if we could automate this process so our QA tester didn’t have to repeat this process manually before every release? That’s exactly what UI Testing is for — and we’ll walk through how to automate this test process!

UI Testing Architecture

Before digging into the code, it’s important to understand how the Xcode UI Test framework operates. By understanding how UI Tests access and manipulate your UI components, you’ll be able to make your UI easy to build tests for.

As with Unit Tests (the ones that exercise your source code), Xcode uses XCTest to run your UI Tests. But how does the XCTest code know how to inspect and manipulate the UI that you designed in Storyboards and/or Swift code?

To gain access to your UI at runtime, XCTest uses metadata exposed by iOS Accessibility. This is the same technology used to enable iOS to read your screen to blind and low vision users, for example. At runtime, XCTest iterates over your UI controls, looking for Accessibility properties such as accessibilityIdentifier and accessibilityLabel to find the UI components you’d like XCTest to tap on, change or inspect as part of your UI Test.

While it’s possible to design UI Tests without doing any preparation of Accessibility metadata in your app — and you’ll find many examples on the Internet that do this — you can maintain better control and predictability in UI Tests by planning for UI Tests in advance, and preparing Accessibility metadata in the UI. Similarly, if you’re retrofitting UI Tests to an existing application, you should consider retrofitting Accessibility metadata as part of the process.

UI Test Recording

Xcode’s UI Test suite provides an easy way to get started implementing a UI Test: the Record UI Test button.

To begin recording a UI Test:

  1. Create a new UI Test function in the UI Test target source .swift file (assuming you created a UI Test target when you created your project — or added it later)
  2. Place the editing cursor within the empty test function
  3. Press the Record UI Test button below the source code editing pane

Xcode will compile and run the application using the debug device (i.e. simulator). Then, just walk through the test sequence on the simulator (or other debug device). When you’re finished, stop the debug session. Xcode will have created a set of commands to re-create the UI experience during the recording. In the case of the test sequence outlined above, the following code would be generated:

func testChangeTableRowText() {
   let app = XCUIApplication()
   app.tables["MyTable"].staticTexts["Fourth Row"].tap()
   let newvalueTextField = app.textFields["newValue"]
   newvalueTextField.tap()
   let app2 = app
   app2.buttons["shift"].tap()
   newvalueTextField.typeText("Some new value")
   app.navigationBars["UITestingDemo.DetailView"]
                                     .buttons["Back"].tap()
}

Great! Xcode has generated all the commands needed to re-run the same UI Test process we performed by hand. This is a boon to our test design productivity and gives us a great start. But it’s not perfect, and it’s not a production-ready test yet. There are some deficiencies:

  1. There are some messy aspects, such as the line let app2 = app. We wouldn’t have written the code this way ourselves — the app object created at line 1 obviously can be used throughout the test function.
  2. The reference to staticTexts[“Fourth Row”] in line 2 of the function assumes that the contents of the UITableView cells will always be the same. What if it won’t? This is a case where preparing the Accessibility metadata can help make a more robust test. I’ll cover this shortly.
  3. The auto-generated code causes the test to operate, but nothing here is evaluating whether the outcomes of the test were successful or not. Xcode can’t create this part of the test — we have to do this ourselves.

Preparing the Accessibility Metadata

In Line 2 of the auto-generated code, Xcode inserted this line:

app.tables["MyTable"].staticTexts["Fourth Row"].tap()

In English, this command means:

Within the array of UITableView objects in the current UIView, find a table view with the identifier MyTable. Then, search all the UILabel controls within that table view and find a UILabel having the text value “Fourth Row”. Then tap on that UILabel.

There are two key references XCTest uses to find UI elements here:

  1. The “Fourth Row” UILabel — the UILabel text value displayed in the 4th UITableViewCell of the table view
  2. The UITableView with a key of “MyTable” — huh? Where did that key come from?

Let’s consider the second item. In this case, I had previously assigned the text “MyTable” as the accessibilityIdentifier for the UITableView on the first UIView. This was done in the viewDidLoad() function of that UIView’s UIViewController, like so:

override func viewDidLoad() {
   super.viewDidLoad()
   tableView.accessibilityIdentifier = "MyTable"
}

Every UIView can have an accessibilityIdentifier, as well as other Accessibility properties. For the purposes of UI Testing, you’ll be most interested in accessibilityIdentifier and accessibilityLabel.

Example of Accessibility Properties

When a UIView has either an accessibilityIdentifier or an accessibilityLabel, it can be queried within a UI Test by using that string as a key. For example, this table could be accessed within a UI Test in either of these ways:

let tableView = app.tables.containing(.table, identifier: "MyTable")
let tableView = app.tables["MyTable"]

By using Accessibility metadata in this way, you can create a more robust UI Test — one not dependent on the content of the text in controls. Instead, the controls can be accessed by dictionary key values you define and control. But you do need to make the effort to assign keys in order to use them!

Note: while UIView objects can be queried using either accessibilityIdentifier or accessibilityLabel, it’s usually better to use accessibilityIdentifier. accessibilityLabel is the property iOS Accessibility uses for the text read aloud to a blind or low vision user, and it could change at runtime for controls that have updatable text properties.

How to Set accessibilityIdentifiers

Setting the accessibilityIdentifier for a UIView-based object can be done in several ways. The most common are as follows:

Using the Interface Builder Identity Inspector

Some UI elements support setting of Accessibility properties within IB Identity Inspector. For example, the UILabel on the first form of our test solution has its accessibilityIdentifier set to “labelIdentifier” directly within the predefined IB field.

Setting the accessibilityIdentifier for a UILabel

Using a User-Defined Runtime Attribute

For UI elements that wouldn’t normally be read to a blind or low vision end-user, Interface Builder won’t have predefined Accessibility property fields. But you can still add them at Interface Builder design time using the User Defined Runtime Attributes dictionary editor on the Identity Inspector.

In this case, I’ve moved the UITableView’s accessibilityIdentifier from the UIViewController’s viewDidLoad() method into the Interface Builder storyboard editor. The resulting UI Test works exactly the same way — but with less code to maintain.

Setting the accessibilityIdentifier using Runtime Attributes

Using Code

As mentioned earlier, every UIView-based class has accessibility properties, and those properties can be set at runtime.

override func viewDidLoad() {
   super.viewDidLoad()
   tableView.accessibilityIdentifier = "MyTable"
}

All three of these methods have the same effect. Which is best depends on the practices within your team: some prefer to reduce code by configuring the UI in Interface Builder, while others prefer to do all UI design in code. UI Testing supports both scenarios equally well.

Inspecting UI Elements During the Test

Recall earlier that I recorded the steps for the test — but I didn’t actually test for anything! Let’s wrap this job up by adding the actual tests, and use accessibilityIdentifier properties where possible.

Searching for UIView elements

Recall that Xcode wrote the following statement to find the UITableView using its accessibilityIdentifier:

let tableView = app.tables["MyTable"]

This is the most concise shorthand method, but I want to point out there’s more than one right answer to finding the tableView in the view hierarchy.

Another method is to explicitly search for the accessibilityIdentifier:

let tableView = app.tables.containing(.table, identifier: "MyTable")

If we hadn’t assigned an accessibilityIdentifier, we could use this code to get the first UITableView within the top-level UIView:

let tableView = app.tables.element(boundBy: 0)

This isn’t as good, because if we should ever add a second UITableView to the screen, the UI Test may break if a new UITableView happens to be retrieved as the first UITableView! This is the reason I suggest using accessibilityIdentifiers when designing your UI Tests.

If we knew there were one and only one UITableView on the screen, we could shorten the previous technique even more:

let tableView = app.tables

Again, this has the risk of breaking the UI Test if a second UITableView is added. This would be a more serious break, since the tables property would return a collection rather than a single table as it does when only one UITableView is in the view hierarchy.
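If you do rely on positional access like element(boundBy: 0), one defensive habit (my own suggestion, not something the recorder generates) is to assert your assumption about the element count first, so a future second table view produces a clear failure instead of a tap on the wrong element:

let app = XCUIApplication()

// Fail fast, with a readable message, if the screen ever gains a second table view.
XCTAssertEqual(app.tables.count, 1, "Expected exactly one table view on this screen")

let tableView = app.tables.element(boundBy: 0)
XCTAssertTrue(tableView.waitForExistence(timeout: 5))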

Final Test Script

We’ve covered the fundamentals of creating tests, accessing elements, and manipulating values (which Xcode showed us during the test recording), so we’re ready to wrap this up.

The final test function is pasted below, followed by line-by-line annotations.

01: func testChangeTableRowText() {
02:     let app = XCUIApplication()
03:     let tableView = app.tables["MyTable"]
04:     XCTAssert(tableView.cells.count == 5)
05: 
06:     let cell = tableView.cells.containing(.cell, identifier: "3")
07:     let cellLabelText = cell.staticTexts.element(boundBy: 0).label
08:     XCTAssertEqual(cellLabelText, "Fourth Row")
09:     
10:     cell.staticTexts.element(boundBy: 0).tap()
11: 
12:     // The detail form is now visible
13:     
14:     XCTAssertEqual(app.staticTexts["labelIdentifier"].label, cellLabelText)
15:     
16:     let textField = app.otherElements.textFields["newValue"]
17:     textField.tap()
18:     textField.typeText("Some new value")
19: 
20:     XCTAssertEqual(textField.value as? String ?? "", "Some new value")
21:     
22:     app.navigationBars["UITestingDemo.DetailView"].buttons["Back"].tap()
23: 
24:     // The first form is visible again
25: 
26:     let tableView2 = app.tables.containing(.table, identifier: "MyTable")
27:     let cell2 = tableView2.cells.containing(.cell, identifier: "3")
28:     let updatedText = cell2.staticTexts.element(boundBy: 0).label
29: 
30:     XCTAssertEqual(updatedText, "Some new value")
31: }
  • In lines 2–4, we find the UITableView with the accessibilityIdentifier “MyTable”, and then check that the number of rows is five (5). Remember that whenever an XCTAssert fails, the entire test fails.
  • On line 6, we search the UITableView for a UITableViewCell with an accessibilityIdentifier equal to “3”. This value was set in the cellForRowAt method in the UITableView DataSource delegate (review the code from GitHub for details)
  • On line 7, we get the first UILabel within the cell (this cell has only one label).
  • On line 8, the UILabel text property is checked against an expected value (this is not really a requirement for this test, but I added it as a further example).
  • Line 10 sends a tap event to the UILabel within the cell. The effect of this is to generate a tap event on the cell, which then triggers a segue to the detail form (see source on GitHub for details)
  • Line 14 finds the UILabel with accessibilityIdentifier “labelIdentifier” (we set this in Interface Builder earlier). When the detail form is loaded, it should have set that UILabel’s text to the value tapped in the UITableView. The XCTAssertEqual checks that this was done.
  • Lines 16–20 tap on the UITextField, and type in new text.
  • Line 22 taps the Back button at the top-left of the detail form, which pops the view controller off the stack, returning to the first form.
  • Lines 26–30 again retrieve the value in the 4th cell, and compare the new value to the value that was typed on the detail form.

Note: when creating tests that type into fields using an iOS simulator, be sure to disconnect the hardware keyboard in the simulator. The typeText method will fail when a hardware keyboard is attached to the simulator.

Where to go Next

With this, we’ve created a complete, robust UI Test for this part of the application!

Since we used accessibilityIdentifier properties wherever possible, we’ve created a test that won’t easily break when the UI is enhanced with new controls, and the test is repeatable, automate-able, and easy to use for regression testing.

But this test can be improved even more:

  • We still have a few static data values in the test, e.g. “Fourth Row”. By refactoring all static value assumptions out of this test, we could set it up to work with dynamic data (for example, against a web service call)
  • This test is still bound to a developer or QA Engineer using Xcode at their desk. But with some additional work, we could incorporate this type of test into a fully automated test suite run by a daemon instead. Look for that in a future blog post!

Using Face ID to secure iOS Applications


Biometric security features like Face ID and Touch ID help make iOS mobile devices more secure and more convenient for users. These technologies can also be used by 3rd-party applications.

Touch ID Roots

In 2013 Apple introduced a new, biometric means to unlock its mobile devices using a fingerprint sensor incorporated in the home button — Touch ID. Prior to Touch ID, users who wanted to secure their iOS devices from unauthorized access could do so by entering a 4-digit PIN code (later extended to longer, 6-digit codes). While the data on iOS devices continued to be secured by encryption using underlying PIN code, Touch ID provided a convenient way for users to unlock devices and confirm their identity with only a touch of a finger.

Enter Face ID

With the launch of the iPhone X, Apple introduced a new biometric security mechanism — Face ID. The trademark name Face ID describes itself. Instead of using a scanned fingerprint to identify the user, Face ID uses a scan of the user’s face to match a stored profile on the device.

I’ll talk in terms of Face ID, but it’s worth noting that both Face ID and Touch ID are just different variants of biometric security. From an architecture and development perspective, both operate in the same way and provide equivalent benefits to application architecture.

Touch ID and Face ID operate in the same way, and provide equivalent benefits to application architecture

Where Touch ID uses a map of fingerprint ridges as a means to recognize its user, Face ID uses a 3-dimensional map of contours and facial features. Face ID uses its signature TrueDepth infrared camera to project 30,000 dots onto the user’s face, then reads the pattern to create its facial contour map.

The Touch ID fingerprint signature and the Face ID facial contour map are stored in the Secure Enclave within the iOS device. This data is accessible only by the end user (encrypted with the PIN that only the user knows), and never leaves the device itself.

Phil Schiller introducing Face ID at the iPhone X launch presentation (2017)

Leveraging Face ID in 3rd-Party Applications

While most users think of Face ID only in terms of unlocking the iOS device at the home screen, we can also use Face ID to create more secure and convenient experiences for our 3rd-party applications.

While users are accustomed to being prompted to authenticate with Face ID (or Touch ID) when unlocking the device, we can ask iOS to prompt them to re-authenticate with biometric security at any time. Typically we would prompt for biometric authentication when our own iOS application is launched, just before reading security tokens or user credentials from the iOS Keychain.

As with other hardware features accessible to 3rd-party applications, users must authorize a custom application to use Face ID. We must design our application assuming that a user may not authorize us to use Face ID (or is using a device that doesn’t support biometric authentication). Apps must fail gracefully, and provide some other means to identify/authenticate the user when biometric security is not available or fails to recognize the user.
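As a rough sketch of that pattern (the function name and reason string are mine, not from a particular project), the LocalAuthentication framework lets us check whether biometric authentication is available and then request the prompt, falling back to a conventional login when anything fails. Apps that use Face ID also need an NSFaceIDUsageDescription entry in Info.plist to explain why the prompt appears.

import Foundation
import LocalAuthentication

// Ask for Face ID / Touch ID before reading stored credentials; report failure so the
// caller can fall back to a conventional username/password screen.
func authenticateBeforeReadingCredentials(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // The device may lack biometric hardware, or the user may have declined permission.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm your identity to unlock your saved credentials") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}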

Face ID Benefits

Incorporating Face ID in our iOS security architecture has some key benefits:

  • We can be certain the user who unlocked the device is the same person now accessing our application.
  • We can provide an extra layer of security, being sure of the user’s identity before reading highly sensitive data from the user’s Keychain (for example a JWT token or a password)
  • When users are prompted for Face ID (or Touch ID), they are reassured that we‘re taking the security of their sensitive information seriously.

Security Architecture

An application that would benefit from using Face ID/Touch ID on launch would typically have one or more of the following security design elements:

  • A password stored in the user’s Keychain
  • A web service token used for accessing remote APIs stored in the user’s Keychain
  • Certificates or other sensitive data stored in the user’s Keychain

While an application could prompt for biometric authentication even when it’s not to authorize access to sensitive information, this isn’t a typical approach. For the most part, application-level biometric authentication is employed as a secure substitute for a traditional username/password authentication.

Example App Launch Flow with Face ID

The following example illustrates how Face ID (or Touch ID) biometric authentication would be used to provide a confirmation of user identity prior to accessing security credentials.

Typically, applications that access secure information (for example, by making authenticated calls to web service APIs) will require either a username/password to begin a session or an expiring token, for example a JSON Web Token (JWT). While prompting users to re-enter username/password combinations on every application launch is secure, it’s also frustrating for users. Most mobile applications do store authentication tokens or passwords in the Keychain, and keeping this information secure is of utmost importance.

In the following flow:

  • A username/password combination (previously entered by the user), or a security token (previously obtained from a web service) are stored in the iOS keychain
  • The Keychain is the correct location for this sensitive data, since it is then encrypted and not accessible without a device PIN/Biometric unlock.
  • If sensitive authentication information has been stored in the keychain (user logged in, but didn’t log out), the user’s ID (but not password or token!) is stored in user preferences. The presence of User Id in preferences is the signal that the application should attempt biometric authentication — rather than proceeding directly to the username/password prompt.
  • If the device doesn’t support biometric authentication, or the user has declined to allow the application to use that feature, or the sensor simply doesn’t recognize the user, Face ID fails, and the application falls back to conventional username/password authentication.
iOS App launch flow, enhanced with Face ID

In the above login flow, Face ID (or Touch ID) are used to provide a way for the user to grant permission for the application to read from the Keychain.

Could the app read from the Keychain without prompting for biometric verification? The answer is “Yes”. Face ID isn’t required to access Keychain data — when the user unlocked the device with PIN (or Face ID/Touch ID), the Keychain was implicitly unlocked for the application.

But by using Face ID/Touch ID, we’re providing an extra layer of identity verification, raising our application’s security to the level of one that prompts for a password every time it’s launched — but without the user frustration associated with repeated password prompts.
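As a related option (not the flow described above), the Keychain itself can be asked to require user presence before it will return an item, by attaching an access control object when the item is saved. A minimal sketch, with illustrative service and account names:

import Foundation
import Security

// Store a token so the Keychain requires user presence (biometry or passcode)
// before the item can be read back. The service/account strings are placeholders.
func saveToken(_ token: Data) -> OSStatus {
    guard let access = SecAccessControlCreateWithFlags(
            kCFAllocatorDefault,
            kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
            .userPresence,
            nil) else { return errSecParam }

    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.api",
        kSecAttrAccount as String: "authToken"
    ]

    SecItemDelete(baseQuery as CFDictionary)   // replace any existing item

    var addQuery = baseQuery
    addQuery[kSecAttrAccessControl as String] = access
    addQuery[kSecValueData as String] = token
    return SecItemAdd(addQuery as CFDictionary, nil)
}

With this in place, a later SecItemCopyMatching call for that item causes iOS to show the biometric (or passcode) prompt on the app’s behalf.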

Configuring a UIScrollView in a Storyboard — with no code!


A delicious developer recipe for setting constraints on Storyboard views to serve up a proper scrolling view for iOS applications.

Introduction

Configuring a UIViewController with a scrolling content view can be confusing, and — frustratingly — scrolling views usually don’t really work at all until all their constraints are perfect!

To understand how to setup a scrolling UIView within a UIScrollView, first we need to understand the views that will be involved, and how they relate to each other. Once we understand this, the concept of scrolling views becomes much simpler to understand and configure.

The Lay of the Land

First we need to understand the relationship between the views involved in the scrolling. Review the following diagram, which I’ll discuss just below.

Starting from the top of the view stack, these are the views we’ll be configuring:

Content

At the top of this diagram — represented by the white boxes — are a set of controls that the user will see and interact with. There can be as many as you like, and they should be arranged as they make sense for the UI.

Presumably, the height will exceed the vertical size of the enclosing UIView — if it didn’t, we wouldn’t need to scroll the content, right?

Content View (UIView)

The Content we want to present to the user is then arranged within a Content View. This doesn’t have to be a UIView specifically, but should be something based on UIView. In the implementation walk-through below, I’ll actually use a UIStackView as the Content View, and just stack UIViews within it until the vertical space exceeds the size of my iPhone’s screen.

This technique isn’t limited to view hierarchies. Your content view could be a UIView that you draw content in yourself. Anything goes, really.

Scroll View (UIScrollView)

The Scroll View is the container that knows how to pan the Content View around so the user sees as much of the Content View as will fit in the Scroll View’s visible frame. The rest of the Content View is clipped off the top or bottom (or right/left, if the width exceeds the frame’s width). Yes, I just defined scrolling. And managing the visible and clipped regions is all the UIScrollView does in this solution.

Users don’t typically see content placed on the Scroll View — although if it has a background color, they may see that color when the Content View is smaller than the Scroll View frame.

Top Level View (UIView)

The Scroll View has to live somewhere, and that somewhere is usually pinned to the edges of some containing view. When adding a Scroll View to an iPhone app, this will be the top-level UIView of a UIViewController. However, it could be any subview instead — for example, in an iPad app, the Scroll View could be pinned to the edges of a Split View’s content view.

In this example, I’ll stick to the simple case of a scrolling view placed in an iPhone application’s main view frame.

Constraints

Making the Scroll View work correctly is almost entirely dependent on getting the constraints created correctly in Xcode’s Interface Builder. By correctly, I mean connecting the edges of the controls appropriately and getting the size of the content view set correctly.

For the majority of iPhone applications, where a content view is scrolled vertically, the following simple checklist of constraints will work 90% of the time. This recipe requires creating one set of constraints on the Scroll View, and a second set of constraints on the Content View.

For the following constraints, I’m using the view names [Scroll View], [Content View] and [Top Level View]. Refer to the above diagram to recall the arrangement of these views.

Scroll View Constraints

The following constraints go on the Scroll View. Keep in mind that [Safe Area] in the following constraints refers to the Superview of the Scroll View, which is the Top Level View.

  1. [Scroll View].[Trailing Space] = [Safe Area]
  2. [Scroll View].[Leading Space] = [Safe Area]
  3. [Scroll View].[Bottom Space] = [Safe Area]
  4. [Scroll View].[Top Space] = [Safe Area]

OK, these constraints are super simple! Basically, just pin the UIScrollView to the containing view. These don’t have to be precisely what I’ve listed.

I pinned to the Safe Area on an iPhone X here. If you have some other views on the same form (e.g. some navigation buttons), you might pin a Scroll View edge to those sibling views. Or, if you don’t want to stay within the Safe Area, you can pin to the edge of the containing view instead.

Position the Scroll View where your design suggests it should be — the point is that you want the scroll view size to be fixed in place relative to other elements on the screen.

Content View Constraints

Now that the Scroll View is set to a fixed position, we’ll setup the constraints for the content view.

Since the Content View is contained within the Scroll View, the Superview in the constraints below refers to the Scroll View (i.e. not the Top View).

  1. [Content View].[Trailing Space] = [Superview]
  2. [Content View].[Leading Space] = [Superview]
  3. [Content View].[Top Space] = [Superview]
  4. [Content View].[Bottom Space] = [Superview]
  5. Equal Width: [Content View] & [Top Level View]
  6. (maybe) Height: 1500

The first four of these constraints are super-simple: the Content View edges are pinned to the Scroll View edges. This is exactly what we did with the Scroll View — we pinned it to the Top Level View.

What’s less intuitive is that — at the same time — we have a width and height constraint applied to the Content View. Huh?

These last two constraints allow our content view to have a virtual size that exceeds the visible frame of the Scroll View.

In this case, I’m designing an iPhone app, and I want vertical scrolling, but not horizontal scrolling. To achieve this, I’ve set the Content View horizontal size equal to the Top View size. Because of this, the Scroll View will never perceive a need to scroll the content horizontally. The user won’t be able to scroll horizontally, and the horizontal scroll bar won’t be presented. This is a common technique for iPhone apps, which rarely scroll horizontally when in portrait orientation.

I do want vertical scrolling. There are essentially two ways to ensure the vertical height is known when Scroll View decides whether to present a scroll bar to the user:

  1. By setting a constraint such as #6 to set that height specifically. You may need to do this when, for example, you’ll be drawing content at runtime in Swift code. You can set the height of this view at runtime by creating an outlet to this constraint and changing its constant, or by changing the Content View frame size in code at runtime (see the sketch just after this list).
  2. By using a Content View that has some intrinsic size. For example, if you use a UIStackView for the Content View, and the UIStackView has a vertical size known at design time, then there’s no need to worry about setting a Content View height constraint at all — it will be inferred from the content of the Stack View.
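If you do go the route of option 1, a minimal sketch of the runtime approach looks like the following (the class and outlet names are mine, not from the demo project):

import UIKit

class ScrollingViewController: UIViewController {

    // Outlet connected to the Content View's height constraint in the storyboard.
    @IBOutlet weak var contentHeightConstraint: NSLayoutConstraint!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Once the real content height is known, update the constraint's constant
        // and the Scroll View will scroll to match.
        contentHeightConstraint.constant = 1500
    }
}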

A common mistake is to pin the Content View to the vertical size of the UIScrollView. If you do this, the content won’t scroll — even when there is actual content “off screen”.

Using The Recipe

Now that we have a recipe, in terms of a view hierarchy and a checklist of constraints, let’s put it into practice in a demo application (which you can download from GitHub using the link at the end of this article).

To keep this article shorter, I’ve summarized the steps, since I assume you know how to use Xcode to create apps and views already:

  1. Create a new Single View App
  2. In the default Main.Storyboard, select the View Controller Scene, then select the Size Inspector, then change the View Controller Scene’s Simulated size to Freeform, and the Height property to 1500
  3. Create seven views, one over the other, assigning the following colors of the rainbow: #9400D3, #4B0082, #0000FF, #00FF00, #FFFF00, #FF7F00, #FF0000
  4. Add a height constraint to each view, fixing each to 200 points
  5. Highlight all seven views in the Document Outline, and select from the Xcode menu: Editor / Embed In / Stack View.
  6. Highlight the new UIStackView in the Document Outline, and in the Attributes Inspector, set the following properties on the UIStackView:
    a. Axis = Vertical
    b. Distribution = Equal Spacing
    c. Spacing = 10
  7. Highlight the new UIStackView in the Document Outline, and select from the Xcode menu: Editor / Embed In / Scroll View
  8. Highlight the new Scroll View in the Document Outline, and create constraints 1–4 from the above Scroll View Constraints section list.
  9. In the Document Outline, click on the UIStackView. Then hold down the ⌘ key and click on the Top Level View. With both views highlighted, click the Add New Constraints button, select the Equal Widths checkbox, and press the Add Constraints button to save this constraint.

You don’t need constraint #6 because the Content View in this layout is a UIStackView that has an intrinsic height, since we fixed the height of all the rainbow UIView controls and set a spacing of 10 points. This gives the UIStackView a fixed height of (7 * 200) + (6 * 10) = 1460, which the UIScrollView will read at runtime to use to position and scroll the view.

Guess what? You’re done!

Your View Controller in the Storyboard should look similar to the following. Note that in the Document Outline I set the Xcode-specific Label property for each view to make the outline easier to read. Your version may not have labels such as “Violet” and “Indigo”.

View Layout for the Scrollable Rainbow UIStackView

Now add the following Swift code. No — just kidding! No code. This scrolling solution is complete with no code at all. Yay!

Run the application, and scroll the rainbow views in the Scroll View. Your application should look like the following. Note: if you’re using the Medium app, this image may be blank. If so, open this page in a web browser to see this animated GIF.

Download the Code

You can download the code for this tutorial here:

https://github.com/robkerr/TutorialScrollingView

Creating an iOS Chat Bubble with Tails in Swift — the easy way


Virtually everyone who’s used an iOS device has used the iMessage application to send and receive text messages to other iOS users or non-iOS users via SMS. This tutorial will teach you how to create the familiar chat bubble with tail UI element used in the built-in Apple Message application.

This tutorial was created using Swift 4 and Xcode 9.1. Most of the concepts covered apply to previous versions of Swift and Xcode also.


Chat bubbles in action — in Apple Messages

The objective of this tutorial article isn’t to instruct how to build a fully-functional chat application. I’m going to focus specifically on how to easily create the dynamically sizing bubble.

Design Requirements

While the chat bubble with tail is familiar, it presents some challenges for development:

  1. The horizontal and vertical size of the bubbles must be expandable. The size of a bubble containing a single word will be much smaller than one containing an entire paragraph (or, for example, an image).
  2. The middle of the chat bubble should stretch to fit its content — but the four corners of the chat bubble should not be stretched, and must remain exactly as designed.
  3. The tail should point to opposite sides of the chat window to indicate whether the message has been sent or received.
  4. The color of the bubble should match app branding, and may suggest a visual cue — for example, in Apple Messages, blue indicates messages sent to other iOS/macOS users, green indicates messages sent to conventional SMS users (e.g. Android users), and gray indicates received messages.

Implementation Approach

As in most development topics, there’s more than one way to implement chat bubbles. Some implementers draw bubbles manually using bezier curves, but using stretchable images is much simpler, and more common.

In this tutorial I’ll cover what I think is the simplest approach, and the basis for how bubbles with tails should probably be implemented in most applications:

  1. The bubble itself is based on a simple bitmap (I created mine as a vector using Sketch, and then exported them to scaled png files)
  2. The dynamic bubble size is accomplished using the standard UIImage resizableImage(withCapInsets:resizingMode:) method provided by Cocoa Touch.
  3. The chat bubble can be set to any color in code using the UIImageView tintColor property — so your app can use any color it needs, and indicate different types of messages with color just as Apple Messages does.

OK, let’s walk through the implementation. A link to the source code can be found at the end of this article.

Creating the Resizable Images

First create two images: one with the tail on the left, the other with the tail on the right. The color you use doesn’t really matter, since you’ll use the UIImageView.tintColor to color the bubble at runtime later.

You can create images to use in this technique with whatever tools you’re most proficient with. I used Sketch, but you can use Photoshop or any other application that saves raster bitmaps. When finished, save the final images as png files.

The first key to this technique is taking note of how many points (pixels in the 1x image) should remain fixed at each of the four corners of the bubble image(s). In my image, I’ve highlighted the four corners: each is 21 pixels wide and 17 pixels high.

Chat bubble image with right tail

These fixed corners will have the following effect in the final application:

  1. The minimum bubble size should be 42 x 34 points — so that these perfect corners are never distorted by stretching or compressing. This is fine for most applications, but if you need larger or smaller corners, just design the image at whatever size meets your needs.
  2. When the bubble needs to grow, the empty middle space (between the corners) will be stretched to match the necessary size.

The blue bubble I designed is the one used when the user sends a message. I could have made it any color, but for design purposes it’s blue.

I also designed a gray bubble with the tail on the left for received messages (actually, in Sketch, I just copied the first bubble and used the mirror button). Since the “received message” bubble is a mirror of the “sent” bubble, the corners will be identical. Again, the color doesn’t matter so long as it’s not white.

Chat bubble image with left tail

Add Images to the Xcode Project

Next, add the bubble images to your Xcode project. I exported 1x, 2x and 3x images from Sketch to png files, and created an Image asset for each type of bubble in Xcode. My sent bubble is named chat_bubble_sent, while the received bubble is called chat_bubble_received.

Bubble Image Assets in Xcode

Create a User Interface

Most applications that use chat bubbles are messaging applications. Chat apps can become quite complex to implement and understand, and would make this tutorial a much longer read! To keep things simple, I’m just going to focus on the bubble itself. The following user interface is designed to demonstrate the technique of creating, resizing and coloring the chat bubble.

Chat Bubble Tutorial User Interface
  1. First is a slider control that allows the bubble height to be changed, so we can move the slider and observe the bubble at any height from 34 to 400.
  2. Next are two buttons. These switch the display between the sent (right-tailed) bubble and the received (left-tailed) bubble.
  3. Several color options to choose from to change the color of the bubble dynamically.
  4. A UIImageView where the bubble will be displayed. In a chat or message application, this UIImageView will typically be part of the content of a UITableViewCell or UICollectionViewCell.

Setting the Bubble Image

In this tutorial application, the bubble image itself is assigned to the UIImageView’s .image property when either the Sent or Received (2 in the diagram above) buttons are tapped.

Each button calls the following helper function changeImage, passing the name of the image to use.

1: func changeImage(_ name: String) {
2:    guard let image = UIImage(named: name) else { return }
3:    bubbleImageView.image = image
4:       .resizableImage(withCapInsets:
5:                          UIEdgeInsetsMake(17, 21, 17, 21),
6:                          resizingMode: .stretch)
7:       .withRenderingMode(.alwaysTemplate)
8: }

changeImage does the following:

  • Line #2 loads the named asset from Assets.xcassets. When the Sent button is tapped, the asset name chat_bubble_sent is passed; when the Received button is tapped, the chat_bubble_received asset name is passed.
  • Line #4 calls the UIImage method resizableImage(withCapInsets:resizingMode:) to create a version of the image that can be stretched — except for the four corner regions (each 21 x 17) that we noted when we designed them.
  • Line #7 calls the UIImage method withRenderingMode(.alwaysTemplate) to create a version of the resizable image that ignores color information. This modification to the image allows us to use the tintColor to make the bubble any color we need it to be at runtime.

Changing the Bubble Color

Changing the bubble color is quite simple. Since we used the .withRenderingMode(.alwaysTemplate) method when assigning the image to the UIImageView, we can use the .tintColor property to set the color of the image’s primary color.

In this case, I just grabbed the .backgroundColor property of the button the user taps in line #2 to set the color.

1: @IBAction func colorButtonTapped(_ sender: UIButton) {
2:    bubbleImageView.tintColor = sender.backgroundColor
3: }

Changing the Bubble Size

For the tutorial application, the height of the bubble is changed by moving the UISlider control. The control is set so that the bubble must be at least 34 points tall (2 x 17). This ensures that the corners will never be compressed vertically beyond the minimum defined at design time.

1: @IBAction func sliderChanged(_ sender: UISlider) {
2:    bubbleHeight.text = "\(sender.value)"
3:    bubbleHeightConstraint.constant = CGFloat(sender.value)
4: }

Line 3 simply changes the constant value of the UIImageView height constraint.

Run the Tutorial App

Summary

I hope this helps you understand how to approach this type of requirement when using Swift for iOS applications. While this isn’t the only way to approach the implementation, it is a simple and effective method. You can see in the code below that we only used about 40 lines of code to implement the entire solution!

Take this technique further by incorporating bubbles into your own app. The basis for this approach is a UIImageView displaying a resizable UIImage with cap insets. Anywhere in your solution where you can add a UIImageView to a UIView container, you can leverage this technique.

Full Swift Source Code

Following is the full source code for the tutorial.

01: class ViewController: UIViewController {
02:
03:    @IBOutlet weak var slider: UISlider!
04:    @IBOutlet weak var bubbleImageView: UIImageView!
05:
06:    @IBOutlet weak var bubbleHeight: UILabel!
07:    @IBOutlet weak var bubbleHeightConstraint: NSLayoutConstraint!
08:
09:    override func viewDidLoad() {
10:       super.viewDidLoad()
11:    }
12:
13:    @IBAction func sliderChanged(_ sender: UISlider) {
14:       bubbleHeight.text = "\(sender.value)"
15:       bubbleHeightConstraint.constant = CGFloat(sender.value)
16:    }
17:
18:    @IBAction func sentButtonTapped(_ sender: UIButton) {
19:       changeImage("chat_bubble_sent")
20:       bubbleImageView.tintColor = UIColor(named: "chat_bubble_color_sent")
21:    }
22:
23:    @IBAction func receivedButtonTapped(_ sender: UIButton) {
24:       changeImage("chat_bubble_received")
25:       bubbleImageView.tintColor = UIColor(named: "chat_bubble_color_received")
26:    }
27:
28:    func changeImage(_ name: String) {
29:       guard let image = UIImage(named: name) else { return }
30:       bubbleImageView.image = image
31:          .resizableImage(withCapInsets:
32:                          UIEdgeInsetsMake(17, 30, 17, 30),
33:                          resizingMode: .stretch)
34:          .withRenderingMode(.alwaysTemplate)
35:    }
36:
37:    @IBAction func colorButtonTapped(_ sender: UIButton) {
38:       bubbleImageView.tintColor = sender.backgroundColor
39:    }
40: }

Download the Code

You can download the code for this tutorial here: https://github.com/robkerr/TutorialChatBubble

60% Custom Apple Mechanical Keyboard Build


I started using computers in the golden age of keyboards — which in my opinion is from about 1985 until 1995. To this day, if I’m considering a new computer, the first thing I do is some touch typing on the keyboard. If I don’t like the keyboard, I really won’t go any further. Since 1995 I’ve actually liked very few newly made keyboards, sad to say.

I understand most people aren’t that picky about keyboards, but since I spend so much time typing (10+ hours a day), I’m picky. Unreasonably so…

I love old mechanical switch keyboards, and I type on them every day. I have a collection of favorites…Among them an IBM Model M, a Northgate Omnikey Ultra, and every mechanical keyboard Apple made for the Macintosh line. But my “daily driver” is an Apple AEK (M0115) made circa 1987 that I restored to “better than new” condition. Here is this beast of a keyboard:

I love this board, but I have to admit, it’s just enormous! It takes up loads of space, and my mouse is a mile away from the keyboard’s home row…which I don’t like at all and is just a bit annoying to me. But the sound and the feel — just amazing.

Quest for the Perfect Board

The perfect keyboard has until now eluded me…I love the mechanical keyboards made in the 80s/90s…the feel, the sound, the precision…but I like modern, compact form factors, which are just more practical. I mean…do we really need function keys? Do I really need a 10-key on the right side? I understand why most people want dedicated arrow keys, but with thoughtfully designed Fn-overlays even dedicated arrow keys aren’t really necessary.

Hooray!

Finally, thanks to a custom PCB I ran across on Geekhack, called the Alps64, I saw that I could have the best of both worlds — 1990s mechanicals (and nostalgia!), 2017 form factors, and complete customization.

The catch is…I’d have to build it myself.

A note on customization

For those who use any kind of laptop keyboard, you have a “Fn” key, that allows certain keys to have more than one function. For example, a MacBook has an F1 key that is used to dim the screen, or if you press Fn-F1, it acts as the actual F1 key. That’s an “overlay”.

The Alps64 has seven possible overlays, and is fully user-programmable using the TMK Keymap Editor. Seven is way too many, but the point is if you want to have more than one “Fn Layer”, go for it (I use two).

Build Objective

The objectives for building my perfect keyboard were:

  1. 60% form factor (having 60 keys, no nav cluster and no keypad)
  2. Alps keyswitches — I prefer the “tactile” Orange switches made by Alps in the 1980s, as well as clicky Blue or White Alps (either Alps SKCM or Matias clones). I’ll mention my inability to choose between these later…
  3. PBT Keycaps with Apple Macintosh legends having that Univers Condensed Thin italic font that they haven’t used in ages.
  4. Key layout matching the 1987 Apple Extended Keyboard M0115
  5. Fully user programmable keys and layers

I like the 60% layout because:

  1. The amount of desk space it needs — barely any at all.
  2. Keeping the mouse within two inches of the Return key on the keyboard is simply better ergonomically. I find that leaning over to my mouse/trackpad makes my shoulder and elbow ache after a few hours.
  3. 60% is the only keyboard size you can realistically toss into a backpack and take anywhere.

Which switches? All of the above!

After much internal debate, I decided I’d never be able to choose whether to build this board with tactile (Orange) or clicky (Blue/White) switches…so I decided to build both of them. Plus I built a third with Alps Cream Dampened switches, which I don’t enjoy typing on as much, but which are a bit more considerate to use in a shared work environment, library etc. — especially around poor souls who grew up with rubber dome keyboards that make barely any noise at all.

Bill of materials

  1. PCBs from Hasu’s Geekhack project. These are custom made and shipped from Japan. Luckily I got in on a group buy drop in January, and had these parts in February. In the meantime, I had to start sourcing vintage boards to use as donors for the other parts.
  2. Alps Orange Switches. I actually have three M0115 AEK boards that could have donated the switches…but since I think the M0115 is the best board Apple ever made (and is somewhat rare), I couldn’t bring myself to sacrifice any of them. So I found an Apple Standard Keyboard (M0116) in poor overall condition (i.e. really cheap to acquire), but with good switches. Check.
  3. Blue Alps Switches. Blue Alps switches are very rare (and ridiculously expensive when found), but they are the best clicky Alps switches, and this project needed only the best. It took months, but I finally was able to source a 1987 Ortek keyboard in HORRIBLE condition, but with Blue switches. A third of the 104 switches were not working very well, but I only needed 60 good ones, which I was able to find in that board.
  4. Plates. Apple AEK II (M3501) boards donated plates. I didn’t feel the same hesitation as I did about the M0115. The M3501 boards are plentiful and cheap, and I easily found them in “non-restore-worthy” condition to use as donors for steel plates.
  5. Keycaps. The AEK IIs that donated their plates also donated their keycaps.
  6. Cases. The new boards would need cases. Since I’m really looking for these boards to be a 60% version of the old AEK models, I went with plastic since the old vintage boards also use plastic cases. I used Sentraq plastic 60% cases, which worked out perfectly.

Preparing Parts

  • The PBT keycaps don’t yellow, and just needed a good scrub in the kitchen sink to remove 30 years of grime. I find using something like Comet or Soft Scrub to (gently) scrub away grime and shine restores the caps really well.
  • The ABS plastic space bar definitely yellows, and on a 30 year old board the space bar color will be closer to orange than gray. I Retrobrighted the spacebar with a UV LED Bulb and 12% Cream Hydrogen Peroxide. Color back to normal! There are several techniques for Retrobright — I use the process developed by 8-bit guy described in this youtube video.
  • To restore the switches, I disassembled them completely, blew out the dust with canned air, lubricated the sliders and springs (tip: a thin coat of SuperLube added to the Alps switch coil springs eliminates “ping” sound if it bothers you)
  • In this photo the restored switches and keycaps are bagged and the plate is removed from the M3501 keyboard and cleaned, but obviously is still the wrong size for a 60% board.

Cutting the plate

I really was going for the same feel and sound as the original M0115 AEK, so I cut the original plate down. This actually went really fast with a zip saw and a bench clamp.

After the zip saw, I used a bench grinding wheel to finish the shape, knock off the sharp edges and any surface rust. Then to finish up I re-painted it satin black with a Rust-Oleum spray.

Assembling the board

Fast forwarding a bit…the Alps64 as delivered needs to have diodes soldered in to every switch position, and then of course the switches are installed in the steel plate and soldered to the PCB. Actually, I found installing the diodes took longer than installing switches. But both operations are simple, and anyone with patience can handle this level of soldering.

Here’s the front of the Orange switch board after all soldering is completed.

And the back-side of the board

Once the soldering was completed, I ran a quick test by connecting the raw board to my laptop and checking that all the keys register as expected. Afterward, it’s time to load up keycaps, mount the board in the case, and customize the firmware.

A fortunate aspect of cutting down the vintage steel plate is that all the stabilizer hardware is on-hand and correct — all parts are just going back into the same plate they came from.

And loading keycaps on the Blue Alps Switch board.

And here’s the board installed into the Sentraq case, running through final tests on my laptop.

And finally, the family of three keyboards: Blue Alps, Orange Alps, Cream Dampened Alps

Firmware Programming

Like most of the custom PCBs out there, this board is “programmed” using a web-based GUI Keymap Editor. This couldn’t be simpler…you point and click to choose what each key does, then download a binary file to flash onto the keyboard using a DFU utility. That might seem a little techie, but if you can handle soldering, flashing firmware is within your grasp.

The DFU utility was pre-compiled for Windows, but not for macOS, so rather than compile my own from source code, I downloaded the Windows version in a Parallels VM, attached the keyboard to the VM, and flashed it that way. That worked fine.

Results

Since I type on a vintage Apple AEK M0115 board every day, I can attest that the 60% custom board with the cut-down steel plate, vintage orange switches and vintage PBT keycaps feels exactly like typing on the M0115. I’m super happy to have the same experience on a modern form factor that I can slip in my backpack and take on the road!

There’s really no direct comparison for the board with Matias Click switches — Apple never made an AEK (or any board that I know of) with Clicky White Alps switches. I can compare the 60% Matias Click board with my AEK II that I retrofitted with Alps SKCM click switches (harvested from a non-working Omnikey). The Matias board feels very similar, maybe a little louder and has a little more ping (coil spring vibration) than the AEK with the Alps white keys from the OmniKey board. But it’s very satisfying to type on, and I really love it.

Overall, I’m delighted with the outcome of this project, and can’t thank Hasu enough for going through the trouble to design and manufacture the AEK layout compatible PCB boards that made these builds possible!

How to create a static UICollectionView

UICollectionView doesn’t support static content layouts the way its sibling UITableView does. There is a way to simulate them, though, and this article will walk through how to do just that.


Tutorial objectives

When using a UITableView, we have a choice of either creating a table with static cells or creating cell templates that are used to build cells dynamically. The former approach is a quick way to implement simple use cases, such as a screen that lets users change settings. The latter, dynamic approach is used when the number of cells is not known until runtime, such as a list of products in a catalog stored on a web service.

In this tutorial, we’ll create a UICollectionView that will serve as a “main menu” for an application. We’ll provide the following functionality to the application:

  1. The collection view will display a fixed number of cells designed within an Xcode storyboard.
  2. The cells will adapt to the size of the display — for example, a two-column layout on an iPhone X in portrait mode, and a single column on an iPhone SE in portrait mode. In landscape mode, the clickable cells will fill the horizontal space before starting a new row.
  3. When the user taps a cell, the application will intercept that event and respond appropriately (in a production app, this might be to fire the segue associated with that cell).

To keep the tutorial simple, this application will just display four cells of identical size and layout. But a real application could be much more sophisticated. Each cell could be a different Collection View Cell design, and present entirely different content from the others. But the basic architecture for this approach would be the same.

Step 1: Creating a static layout in Interface Builder

As with creating static UITableView layouts, the first step is to create a layout in Interface Builder.

Complete the following steps first:

  1. Open Xcode and create a new single view application
  2. Add a UICollectionView to the View Controller scene in Main.storyboard
  3. Set ViewController as the UICollectionView delegate and data source (alternatively, see the code sketch after this list)
  4. Using the size inspector, customize the UICollectionView cell size to w=170, h=80
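If you prefer to wire up the delegate and data source in code rather than by control-dragging in Interface Builder (step 3), a minimal sketch is shown below. It assumes an @IBOutlet named collectionView that you connect yourself (that outlet is not part of the tutorial’s steps), and it relies on the UICollectionViewDataSource and UICollectionViewDelegate conformances added later in Step 2.

import UIKit

// A sketch of an alternative to step 3: assigning the data source and
// delegate in code. The collectionView outlet is assumed to be connected
// in Interface Builder; the protocol conformances come from the extensions
// added later in this tutorial.
class ViewController: UIViewController {

    @IBOutlet weak var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.dataSource = self
        collectionView.delegate = self
    }
}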

Customize the default UICollectionViewCell with the following changes:

  1. Add a UIView, and use constraints to pin it 4 points from the top, bottom, leading and trailing edges (clear the constrain to margins checkbox).
  2. Add a UILabel to the UIView in step 1, and center it vertically & horizontally in the UIView container.
  3. Change the background color of the UIView to Purple, and the UILabel text to “Purple Cell”.
  4. Using the Attributes Inspector, change the UICollectionViewCell Collection Reusable View Identifier to “Purple Cell”.

Now copy & paste the Purple cell three times. Change the UIView color, the UILabel text, and the UICollectionViewCell Reuse Identifier to differentiate each of the four cells from the others. When finished, your storyboard design should look something like this:

Step 2: Implement the view controller delegates

If you ran the application now, you’d see a screen with an empty UICollectionView. Why is that? It’s because all we’ve really done is design some templates for what a set of dynamic cells can look like. Even though the layout looks similar to what can be done with a static UITableView (apart from the multiple columns), it’s not really a static design. But by adding two data source methods, we can supply the missing information needed to recreate the Interface Builder layout at runtime.

Change the ViewController class definition as follows:

class ViewController: UIViewController {

    let cellIds = ["Purple Cell", "Green Cell", "Blue Cell", "Red Cell"]
    let cellSizes = Array(repeatElement(CGSize(width: 170, height: 80), count: 4))

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}

The cellIds property contains a list of the reuse identifiers we assigned to each UICollectionViewCell designed in Interface Builder. These Ids must exactly match the values assigned to each cell in Interface Builder.

The cellSizes property stores the size of each cell. In this simple tutorial, all cells will be the same size, but they don’t have to be; each cell could have different content and a different size. By defining the sizes here, we’re giving ourselves the ability to control cell sizes at runtime via UICollectionViewDelegateFlowLayout (we’ll do this in a moment).

Add data source delegate methods

Now add the data source methods to the end of the ViewController.swift file via a swift extension.

extension ViewController: UICollectionViewDataSource {

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return cellIds.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(
            withReuseIdentifier: cellIds[indexPath.item],
            for: indexPath)
    }
}

Each of these data source methods has only one line of code; together they accomplish the following:

  1. numberOfItemsInSection returns the number of cells in the UICollectionView, which is inferred from the number of cell Ids we added to the cellIds property in the last step.
  2. cellForItemAt dequeues a cell using the reuse identifier that corresponds to each index path. This is the workaround that lets a dynamic UICollectionView behave like a static UITableView.

Add the layout delegate method

To give us control over the size of each cell at runtime, we can adopt the UICollectionViewDelegateFlowLayout protocol and provide an implementation of the sizeForItemAt method. The implementation simply returns the cellSizes element that corresponds to the indexPath being laid out.

extension ViewController: UICollectionViewDelegateFlowLayout {

    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return cellSizes[indexPath.item]
    }
}

Add the didSelectItemAt delegate method

Since the objective of this tutorial was to create a type of menu, we need to intercept when users tap on items in the menu. To do this, implement a single UICollectionViewDelegate method. Add the following extension to the bottom of ViewController.swift.

extension ViewController: UICollectionViewDelegate {

    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        print("User tapped on \(cellIds[indexPath.item])")
    }
}

This delegate method is called when the user taps any of the (four) cells in the UICollectionView. In response, the sample code just prints the Cell Id of the cell the user tapped. In a production application, we might instead cast the UICollectionViewCell to a custom class and read metadata from it to decide how to branch the application flow, as sketched below.
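For example, the delegate extension above could be swapped for something like the sketch that follows. The MenuCell class, its segueIdentifier property, and the segue-based navigation are all hypothetical (they are not part of this tutorial’s storyboard); they only illustrate the idea of casting the tapped cell and reading metadata from it.

import UIKit

// Hypothetical custom cell class. In a production storyboard each cell's
// class would be set to MenuCell, and segueIdentifier would be populated
// when the cell is configured (for example, in cellForItemAt).
class MenuCell: UICollectionViewCell {
    var segueIdentifier: String?
}

// A sketch that would replace the UICollectionViewDelegate extension above.
extension ViewController: UICollectionViewDelegate {

    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        // Cast the tapped cell to the custom class and use its metadata
        // to decide which segue to fire.
        guard let cell = collectionView.cellForItem(at: indexPath) as? MenuCell,
              let segueIdentifier = cell.segueIdentifier else { return }
        performSegue(withIdentifier: segueIdentifier, sender: cell)
    }
}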

Now if you run the application and tap each cell, you should see the following output in the Xcode debug console.

User tapped on Purple Cell
User tapped on Green Cell
User tapped on Red Cell
User tapped on Blue Cell

The completed, flexible layout

If you now run the application on different devices, you can see that we’ve created an almost static UICollectionView that represents a menu. The advantage of this approach over using a UITableView is that we have a much more flexible layout that can be presented on different devices.

iPhone X & iPhone SE — Portrait

The widths of the iPhone X and iPhone SE are quite different, so our layout automatically adapts between two columns (X) and one column (SE).

iPhone X — Landscape

The iPhone X in landscape has plenty of horizontal space, so all four of the menu items fit on one row.

iPhone SE — Landscape

The iPhone SE in landscape is more constrained horizontally, so the layout automatically flows onto two rows:

Changing up the cell sizes

One of the advantages of UICollectionView is how flexible it is when cell sizes differ. By changing the array of cell sizes at the top of ViewController.swift, we can observe this flexibility in action.

Change the cellSizes property in ViewController.swift to the following:

let cellSizes = [
    CGSize(width: 210, height: 60),
    CGSize(width: 180, height: 100),
    CGSize(width: 170, height: 80),
    CGSize(width: 150, height: 150)
]

Full source code

I hope this tutorial was helpful and gave you some ideas for your own applications. You can download the full source code for this tutorial here on my GitHub account. Feel free to contact me on Twitter via @rekerrsive.