Xcode UI Testing

Let's talk briefly about testing.

Xcode 7 is coming out with a great feature: XCTest UI tests. Now, some of you might not know, but we've been able to do UI testing for iOS apps for quite a while now using Instruments and the UIAutomation instrument. If you've tried that, you know that tool was terrible.

Now, UI testing is a fine approach to testing, but don't be fooled by the awesome demos – UI testing is the most expensive way to author tests. They are the hardest to maintain, they are the hardest to debug, and they are the slowest to run. Now, that does not mean that you should not write them, they just shouldn't be your primary way of testing your app.

Let's look at a simple test from the Lister demo app.

func testExample() {
    let app = XCUIApplication()
    app.tables.staticTexts["Groceries"].tap()

    let addItemTextField = app.tables.textFields["Add Item"]
    addItemTextField.tap()
    addItemTextField.typeText("Burgers")
    app.typeText("\r")

    XCTAssert(app.tables.cells.textFields["Burgers"].exists)
}

The test simply taps on the "Groceries" list, adds a "Burgers" item, and verifies that the item is indeed in the list.

Notice a problem?

Now, this is where we can get into a bit of a philosophical debate about testing and verification. The question is: do we want the UI tests to only verify that the UI is correct? Or do we want our UI tests to validate that both the UI and the model are correct?

For me, I've seen far too many bugs where the UI was updated but the model didn't update to be satisfied with only validating the UI in our tests.

So, what do we do? The primary problem is that these UI tests are running outside of the process, and in the case of tests on the device, they aren't even running on the same machine. Don't worry, we have options!

The basic requirement is to be able to send a message of some sort to the application under test (AUT). Our options:

  1. Make use of the Accessibility APIs
  2. Make use of the network

My philosophy for building test infrastructure is to build it in the cheapest, simplest way possible for your needs at the time. When you have expensive test infrastructure, it can be a significant cost to update as your needs change later, and they always do.

So, instead of building up a mini-server that runs in your app, I would just fake it and create a UIView/NSView with a simple text field that's shown in response to some gesture. The purpose of this view is to provide a command layer into your product that is accessible from your tests.

A command might even be as simple as a string: "get model items". When ENTER is pressed, you'd have a handler for the event that would handle each command and replace the contents of the text field with the result, such as: "Burgers, Apples, Oranges, Bananas, Milk, Bread".

Now, in your test, you can simply get the value of the text field and XCTAssert that the value matches the string.

You might be thinking that this is a bit of a hack, and well, you're right. However, it ends up being extremely cheap to maintain and update because it is so easy to get up and running. Adding commands is literally as simple as registering the command name with a block on the view controller. And, if you really want to, you can compile all of this out of App Store builds.
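As a rough sketch of what that command layer might look like (all names here are hypothetical, not from the Lister demo), the core is just a dictionary mapping command strings to blocks:

```swift
// Hypothetical sketch: a command registry for the hidden debug view.
// The view's ENTER handler would call `run` and write the result back
// into the text field so the UI test can read it.
final class DebugCommandRegistry {
    private var handlers: [String: () -> String] = [:]

    // Adding a command is just registering a name with a block.
    func register(_ command: String, handler: @escaping () -> String) {
        handlers[command] = handler
    }

    // Returns the text to place back into the text field.
    func run(_ command: String) -> String {
        return handlers[command]?() ?? "unknown command: \(command)"
    }
}

let registry = DebugCommandRegistry()
registry.register("get model items") {
    ["Burgers", "Apples", "Oranges"].joined(separator: ", ")
}
registry.run("get model items")   // "Burgers, Apples, Oranges"
```

In the UI test, you would then type the command into the field, press ENTER, read the field's value, and XCTAssert it against the expected string.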


Swift v2.0 Error Handling – Revisit

Last night I posted my initial thoughts about Swift's approach to error handling. I've softened a little on the approach as I got to play with it a bunch this morning, including revisiting what a Result<T, E> looks like in Swift.

It was only after going through that exercise again, and being reminded just how clunky dealing with enum values is (especially those with associated values), that I could appreciate why the Swift team went in the direction they did.

enum MyCustomError: ErrorType {
    case Happy
    case Dance
}

enum Result<T, E> {
    case Ok(T)
    case Error(E)
}

func base() -> Result<(), MyCustomError> {
    return Result.Error(MyCustomError.Dance)
}

func handle() {
    let result = base()

    switch (result) {
    case .Ok(_):
        print("required...")

    case let .Error(error):
        switch (error) {
        case MyCustomError.Happy:
            print("Happy error")

        case MyCustomError.Dance:
            print("Dance error")
        }
    }
}

Or, with the Swift error handling approach.

enum MyCustomError: ErrorType {
    case Happy
    case Dance
}

func base() throws {
    throw MyCustomError.Dance
}

func handle() {
    do {
        try base()
    }
    catch MyCustomError.Happy {
        print("Happy error")
    }
    catch MyCustomError.Dance {
        print("Dance error")
    }
    catch {
        print("catch all, because no types")
    }
}

Of course… those aren't our only options though. See, there was this lovely little keyword guard that was also introduced to us. So, by putting some better smarts into Result<T, E>, we can get something that looks like this:

enum MyCustomError: ErrorType {
    case Happy
    case Dance
}

enum Result<T, E> {
    case Ok(T)
    case Err(E)

    // what if these were simply generated for all enums?
    var ok: T? {
        switch self {
        case let .Ok(value): return value
        case .Err(_): return nil
        }
    }

    var err: E? {
        switch self {
        case .Ok(_): return nil
        case let .Err(e): return e
        }
    }
}

func base() -> Result<(), MyCustomError> {
    return Result.Err(MyCustomError.Dance)
}

func handle() {
    let result = base()

    guard let value = result.ok else {
        switch result.err! {
        case MyCustomError.Happy:
            print("Happy error")

        case MyCustomError.Dance:
            print("Dance error")
        }

        return
    }

    print("value: \(value)")
}

Swift's throws approach is still more terse, but provides no type-safety on the error type. It might be possible to streamline the approach above some more, but it already feels better to me. I never liked exceptions used as control flow, and the do-try-catch approach seems to just beg for that.
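For what it's worth, one way to streamline the Result approach further is to add combinators such as map, so the happy path chains without nested switches. A sketch (the combinator is my own addition, not from the post):

```swift
enum Result<T, E> {
    case Ok(T)
    case Err(E)

    // Apply a transform to the success value; errors pass through untouched.
    func map<U>(_ transform: (T) -> U) -> Result<U, E> {
        switch self {
        case let .Ok(value): return .Ok(transform(value))
        case let .Err(error): return .Err(error)
        }
    }
}

enum MyCustomError { case Happy, Dance }

let result: Result<Int, MyCustomError> = .Ok(21)
let doubled = result.map { $0 * 2 }   // .Ok(42)
```

A flatMap taking (T) -> Result<U, E> would complete the picture, letting fallible steps chain as well.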

Eh… I'll keep noodling.


Swift v2 – Error Handling First Impressions

UPDATE June 9th, 2015: Go watch the 'What's New in Swift' talk. It answered nearly all of my concerns below.

Swift v2.0… honestly, I was a bit surprised by many of the updates to the language, especially the error handling. There's a session on Tuesday to talk about many of the changes in depth, but I'm pretty mixed about the update.

The worst update, in my opinion, is the error handling. It's basically a pseudo-exception system.

enum SomeException: ErrorType {
    case Oops
}

func bad() throws {
    throw SomeException.Oops
}

func good() {
    do {
        try bad()
    }
    catch SomeException.Oops {
        print("There was an exception")
    }
    catch {
        print("This is required because...")
    }
}

Ok, so what do we have up there? Well, we have a function bad that is marked as throwing an exception, and we have a function good that makes use of calling that function and handling the exception.

Already I'm seeing problems that I don't know can even be solved.

  1. No type information about the type of errors that can happen from bad. This strikes me as extremely odd from a language that is all about type-safety.
  2. A catch-all catch statement is required – I don't know if Swift can even fix this, as it doesn't seem to have enough information to determine all possible types to be thrown, especially for deep function calls.
  3. Verbose: you need both do to scope your catch clauses and try to prefix the call to each function that can throw.

If you want to simply pass the buck on the error, that's pretty simple to do:

func good() throws {
    try bad()
}

Now… this is where things get interesting. There is a whole section about stopping the error propagation. If you know (famous last words…) that the function you are calling will not throw, you can do this:

func goodish() {
    try! bad()
}

This is the rough equivalent of force-unwrapping your optionals. This is not great, in my opinion. The really bad part is that if you use code that does this, there is no way for you to handle the exception that it can throw.

func goodish() {
    try! bad()
}

func nogood() {
    do {
        try goodish()
    }
    catch {
        print("sorry, never going to happen")
    }
}

Let's say that goodish actually does throw in an edge-case that the programmer missed. Well, if that's the case, too bad for you; I sure hope you have access to that code so you can fix it. In fact, the code above will issue you a compiler warning because goodish() isn't marked as throws.

Ok, so you get the idea to try this:

func better() throws {
    try! bad()
}

func nogood() {
    do {
        try better()
    }
    catch {
        print("sorry, never going to happen")
    }
}

Nope… still going to get a runtime error.

The reason? I'm speculating here, but it seems like Swift is simply adding a bunch of compiler magic to inline the error information as a parameter to the function. That information is thrown away when you call try! and will not be propagated out. The nice benefit of this is you can get an error-handling system that has great runtime performance. The downside, however, is that you've built a system where correctness is up to the programmer and not verifiable by the compiler as much as it could have been.

I would have much rather seen a system that codifies the error as the return type and forces handling of it, as in Rust. I thought that was the direction we were heading with Optional<T> when it was introduced with Swift 1.0.

Who knows, maybe it will grow on me, but I think I may be sticking with the Result<T, U> pattern.


VS Code Swift Colorizer

I started working on a Swift colorizer for Visual Studio Code a few days ago. While looking through the code for VS Code1, I've come to realize a few things.

  1. There are basically three different ways to create colorization:
    1. With a TextMate syntax file
    2. With a "Monarch" syntax declaration; I can only assume this is a VS Code specific declaration
    3. A full language service
  2. Customization is not really supported that well at the moment
  3. Swift has a bunch of language constructs to worry about in order to properly colorize itself

I actually tried going down the path of building a proper tokenizer to give semantically correct highlighting, but without knowing how the internals of VS Code really work2, and not wanting to build a full-on parser for Swift, I opted to go down the "Monarch" syntax path3.

So… does the "Monarch" syntax declaration work well? Actually, it works quite well. It's basically a declarative state-machine syntax, so it allows some pretty nice stuff that is either extremely difficult or just impossible to do with a single regex. For instance, string interpolation highlighting was fairly straightforward because you can push and pop states on the syntax highlighting stack.

Anyhow, I've updated the project here: https://github.com/owensd/vscode-swift. It's mostly working, though there is a set of known issues I haven't gotten around to yet, such as Unicode operator colorizing.

Enjoy.

  1. After all, it's just a JavaScript app.
  2. All of that code is minimized as it's not ready for mainstream consumption.
  3. As far as I know, there are no TextMate Swift colorizers that actually work for the entirety of the language.

Optionals and if-let

Natasha posted an article a little over a week ago: http://natashatherobot.com/swift-unused-optional-value/. I agree with her initial reaction: "This immediately stood out as 'wrong' to me."

Basically she was talking about this:

var someOptionalVar: String? = "something there"

if someOptionalVar != nil {
    // just need to check for the presence of a value
}

I don't like this. It just doesn't feel semantically correct. The optional is not nil; it specifically has a value of None. I've never liked this syntax; I've talked about it before – thankfully the bool comparison is gone, but the nil check still sits wrong with me.

Another example:

let f₀: Bool? = true
let f₁: Bool? = false
let f₂: Bool? = nil

let fn = [(f₀, "f₀"), (f₁, "f₁"), (f₂, "f₂")]

for (f, label) in fn {
    if f == nil {
        println("\(label) has no value")
    }
    if f != nil {
        println("\(label) has a value")
    }

    if let fv = f {
        println("\(label) = \(fv)")
    }
    else {
        println("\(label) = nil")
    }

    switch (f) {
    case .Some(_):
        println("\(label) has a value.")

    case .None:
        println("\(label) has no value")
    }
}

It's a bit annoying that we have so many ways to check for the presence of a value in an optional. I find that languages with multiple ways of achieving the same thing end up being confusing, especially as more features get added. It's unnecessary complexity that adds up over time. Natasha already pointed out the multiple discussions of people doing it different ways.

I also prefer a different way:

func hasValue<T>(value: T?) -> Bool {
    switch (value) {
    case .Some(_): return true
    case .None: return false
    }
}

for (f, label) in fn {
    if hasValue(f) {
        println("\(label) has a value")
    }
    if !hasValue(f) {
        println("\(label) has no value")
    }
}

There is no ambiguity here, there is no transparent conversion between nil and .None, and most importantly to me, the code is always extremely obvious.


Prototyping Practical FRP

A few days ago I started brainstorming how we might go about handling input in a way that actually satisfies the continuous and temporal nature of FRP behaviors. Today, let's take a deeper dive into one possible implementation.

Understanding the Problem

The problem can essentially be broken down into the following goals:

  1. Storage of historical input data
  2. Efficient interpolation between discrete inputs

For input handling, there are two basic types: digital and analog. A digital input is either on or off, and an analog input is a value that ranges between a min and a max value for the input. However, since all of our input arrives as discrete samples, the problem is nearly identical for both.

Here is a sample scatter-plot graph of a keypress:

   │                                                              
 1 ┤    ▪           ▪     ▪       ▪             ▪                 
   │                                                              
   │                                                              
 0 └────────▪──────────▪──────▪───────────▪──────────▪───────▶  t 

Each of the points represents an actual input change for the button over time. Of course, the problem with the plot above is that there is no f(t) that results in a value for every value of t. We need to interpolate the graph:

   │                                                              
 1 ┼────▪───┐       ▪──┐  ▪───┐   ▪───────┐     ▪────┐            
   │        │       │  │  │   │   │       │     │    │            
   │        │       │  │  │   │   │       │     │    │            
 0 └────────▪───────┴──▪──┴───▪───┴───────▪─────┴────▪───────▶  t 
   t₀       t₁      t₂ t₃ t₄  t₅  t₆      t₇    t₈   t₉

This provides us with a nice step-graph. Now, we can most definitely provide an answer for f(t) given any t that has occurred.

Basic Approach

Let's start thinking about the shape of the algorithm. The simplest approach is to loop through the inputs and return the correct value based on the time range.

Here is a sample program based on the above input graph:

let t₀ = 0,  t₁ = 4,  t₂ = 8,  t₃ = 10, t₄ = 12
let t₅ = 15, t₆ = 18, t₇ = 22, t₈ = 25, t₉ = 28

let up = 1, down = 0

struct Range { let start: Int, end: Int }
struct Input { let range: Range, value: Int }

var inputs: [Input] = []

// Simulate an input stream...
inputs.append(Input(range: Range(start: t₀, end: t₁), value: up))
inputs.append(Input(range: Range(start: t₁, end: t₂), value: down))
inputs.append(Input(range: Range(start: t₂, end: t₃), value: up))
inputs.append(Input(range: Range(start: t₃, end: t₄), value: down))
inputs.append(Input(range: Range(start: t₄, end: t₅), value: up))
inputs.append(Input(range: Range(start: t₅, end: t₆), value: down))
inputs.append(Input(range: Range(start: t₆, end: t₇), value: up))
inputs.append(Input(range: Range(start: t₇, end: t₈), value: down))
inputs.append(Input(range: Range(start: t₈, end: t₉), value: up))
inputs.append(Input(range: Range(start: t₉, end: Int.max), value: down))

func f(t: Int) -> Int {
    for input in inputs {
        if input.range.start <= t && t < input.range.end {
            return input.value
        }
    }
    
    return inputs.last?.value ?? Int.min
}

Then we can simply call the function to get our results:

f(0)     // -> 1
f(29)    // -> 0
f(16)    // -> 0
f(22)    // -> 0

Notice that we can call f(t) in any order and it all works.

Optimization

Now, there is a problem with the above implementation: as the game progresses, more and more input will be generated. This is not good… this is an O(n) algorithm. What we want is to get as close as possible to the O(1) lookup that is available to us in the non-FRP world.

Well, one simple approach is to use the previous result as a hint. Here's the new function f(t).

func f(t: Int, _ index: Int? = nil) -> (value: Int, index: Int) {
    let start = index ?? 0
    let forward = 1, reverse = -1

    let len = inputs.count
    let step = t < inputs.get(start)?.range.start ? reverse : forward

    // search loop
    for var idx = start; 0 <= idx && idx < len; idx += step {
        let input = inputs[idx]
        if input.range.start <= t && t < input.range.end {
            return (input.value, idx)
        }
    }

    let value = (step == forward) ?
        inputs.last?.value ?? Int.min :
        inputs.first?.value ?? Int.min;

    return (value, 0)
}

The major updates are:

  1. Return a tuple (value, index) that can provide us a hint as to the index into the inputs history to start at.
  2. Based on the given index, we can also determine the search order.

Note: I’m using an extension to Array; you can find it here: Array.get.
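The extension itself lives behind the link above; a plausible reconstruction of it (just a guess at the implementation) is a bounds-checked lookup that returns nil instead of trapping:

```swift
extension Array {
    // Bounds-checked access: nil instead of a runtime trap on a bad index.
    func get(_ index: Int) -> Element? {
        return indices.contains(index) ? self[index] : nil
    }
}

let xs = [10, 20, 30]
xs.get(1)   // Optional(20)
xs.get(5)   // nil
```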

The updated usage code:

let f₀  = f(0)
let f₁₆ = f(16, f₀.index + 1)
let f₂₂ = f(22, f₁₆.index + 1)
let f₂₉ = f(29, f₂₂.index + 1)

When they are called in the expected time order, the number of times through the "search loop" above is reduced from 25 down to 10. The thing to note about the value 10 is that it is the minimum number of searches required for this approach.

That's great!

Also, imagine if there were thousands of inputs. The value of 25 above would be significantly higher, as each search would be starting at index 0. With this new optimized approach, the number of searches will be bounded by the number of inputs that have happened since the last call of f(t).

It's good to note that you could still call this version of the function in any order and all would still be good. The index hint is only used as the starting search point in the inputs array.

Summary

This approach is looking pretty solid to me, though I've only really played around with it in the playground and with fake, simulated input data. Next time, I'll explore how this algorithm looks with real input handling code.


Taking a Look at Practical FRP

My last blog post got me thinking… if we wanted to keep track of input over some arbitrary time range T, what is a way to approach the problem?

To start with, there are three types of inputs that I care about:

  1. Keyboards
  2. Mice
  3. Gamepads

For each of these inputs, my primary question to ask breaks down into:

  1. What was the starting state of the input component
  2. What was the ending state of the input component

Further, for each of the types, I may ask some additional questions depending on the type:

  1. digital – how many transitions were made over t?
  2. analog – what was the actual path over t? the averaged result over t?

As we can see, the basic need is to perform some operation on individual components of the input over a set of those inputs that represent a time range.

Keyboard

There are lots of different types of keyboards, but both in the US and internationally, the standard keyboards do not have any more than 105 keys.

Great! At any given moment in time, any combination of keys can be down.

This means that a 128 bit wide bitfield will be more than adequate to store this data.
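A sketch of what that bitfield might look like (the type and method names are mine), using two 64-bit words to cover up to 128 keys:

```swift
struct KeyBitfield {
    // Two 64-bit words give us 128 key slots.
    private var words = [UInt64](repeating: 0, count: 2)

    mutating func set(_ key: Int, down: Bool) {
        let word = key / 64
        let bit = UInt64(key % 64)
        if down {
            words[word] |= 1 << bit
        } else {
            words[word] &= ~(1 << bit)
        }
    }

    func isDown(_ key: Int) -> Bool {
        return (words[key / 64] >> UInt64(key % 64)) & 1 == 1
    }
}

var keys = KeyBitfield()
keys.set(42, down: true)
keys.set(100, down: true)
keys.isDown(42)    // true
keys.set(42, down: false)
keys.isDown(42)    // false
```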

Mouse

For the mouse, well, some mice have a LOT of buttons. Of course, many of those buttons come across as actual keystrokes. Regardless, we know we need the following data:

  1. The (x, y) coords of the mouse
  2. The button state of some number of supported buttons
  3. The rotation of the mouse wheel (x, y)

For now, we can choose the following data layout:

  1. 32 bits for the x-coordinate
  2. 32 bits for the y-coordinate
  3. 32 bits for each of the mouse wheel coords (64 bits)
  4. A bitfield for each of the supported buttons, let’s say 7 buttons.

So, for a mouse, we can store the data in 136 bits without any trouble at all.

Gamepad

If we take a look at the basics of an Xbox or PlayStation controller, you can see the following controls:

  1. Two thumb-sticks (analog)
  2. Two triggers (analog)
  3. Two bumpers
  4. 8-way directional d-pad
  5. 4 primary buttons
  6. Three auxiliary buttons (“select”, “start”, “power”)
  7. Two buttons on the thumb-sticks

The digital buttons account for 19 bits. The analog inputs account for six axes of input data. If we use 16 bits for each axis, then the analog components need 96 bits.

The gamepad can be represented in 120 bits.

Keeping the History

Now that we have a strategy to handle storing the input in a somewhat compact way, it is good to look at storing the history of the inputs over time. Recall that a behavior is about a continuous value for the input.

So how much space is that?

Originally, I was thinking that this would be a bit impractical… however, after thinking about it a bit more, especially in the context of this compact storage, it really doesn't seem so bad.

Here’s the math so we can talk about it more deeply:

SizeInMB = (BitsPerSample × SamplesPerSecond × TimeInMinutes × 60) / (8 × 1024 × 1024)

The default values: keyboard = 128 bits sampled at 60 Hz, mouse = 136 bits at 60 Hz, gamepad = 120 bits at 60 Hz, one of each device, and a capture time of 120 minutes.

(The original post included an interactive calculator here for adjusting the input sizes, device counts, sample rates, and capture time.)

So, with the default values above, we can store two hours of continuous input1 data and it will only cost us about 20 MB of memory. However, this is basically the worst-case scenario. Normal players are simply not going to give you 60 different keyboard inputs every second.

Instead, if we normalize the values to: keyboard = 1 Hz, mouse = 10 Hz, and gamepad = 5 Hz, now we are only talking about 1.79 MB over a two hour gameplay session.
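The arithmetic behind both numbers can be reproduced directly; all of the constants here come from the sizes and rates worked out above:

```swift
// MB of history for one device: bits/sample * samples/sec * seconds, in MB.
func sizeInMB(bitsPerSample: Double, sampleRate: Double, minutes: Double) -> Double {
    return bitsPerSample * sampleRate * (minutes * 60) / (8 * 1024 * 1024)
}

// Worst case: keyboard (128 bits), mouse (136), gamepad (120), all at 60 Hz for 2 hours.
let worstCase = sizeInMB(bitsPerSample: 128, sampleRate: 60, minutes: 120) +
    sizeInMB(bitsPerSample: 136, sampleRate: 60, minutes: 120) +
    sizeInMB(bitsPerSample: 120, sampleRate: 60, minutes: 120)
// ≈ 19.8 MB

// Normalized rates: keyboard 1 Hz, mouse 10 Hz, gamepad 5 Hz.
let normalized = sizeInMB(bitsPerSample: 128, sampleRate: 1, minutes: 120) +
    sizeInMB(bitsPerSample: 136, sampleRate: 10, minutes: 120) +
    sizeInMB(bitsPerSample: 120, sampleRate: 5, minutes: 120)
// ≈ 1.79 MB
```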

This is starting to look a lot more feasible than I first suspected.

Next time, I’ll take a look at what it will take to actually process this data and some API structures we may want to use.


  1. Continuous in this context means 60 samples per second. 

Value Add – Bring Something to the Table

There was a comment on my Programming Theory and Practice blog article about a reactive solution for interacting with IOHIDManager. The full gist of it can be found here: https://gist.github.com/mprudhom/607560e767942063baaa.

My thoughts on it can be summed up essentially as:

  1. It requires another library to actually see the code that is going on (the ChannelZ lib from the author), that is bad.
  2. The solution requires bridging from Swift to ObjC back to Swift again, that is bad (though it cannot currently be avoided).
  3. It is no better than the ObjC or C++ equivalent.

The last point is the real kicker for me, especially as of late. If I'm going to invest in a new technology, I want to understand the value that I'm getting back from it. In other words: am I getting a good ROI (return on investment) from it?

The thing is, we can write the same "reactive-style" API today in C++, and that code is portable to any platform with a C++ compiler. To me, this is very valuable, and its absence is currently a significant negative (really, a potential deal-breaker for me) against Swift1. We don't know the plans for Swift, so it's hard to say what is going to happen here, and I am personally not interested in any of the managed ports I've been seeing for other platforms.

Thinking About the API

The API for the reactive-style needs to care about three basic things:

  1. connect – we need to know when a new device has been connected
  2. disconnect – we need to know when a device has been disconnected
  3. input – we need to know when the input has changed

When we implement these things, we can do so in a pull, push, or push-pull manner. My primary use case is for a game, so I'm going to use the pull model. This means that I want my input to actually be a collection of input values gathered since the last time I requested them. I'm also going to ignore connect and disconnect for this example, as they are really not that interesting and add very little to this example.

The API is starting to look like this:

struct DeviceManager {
    var input: [Input]
    let hidManager: IOHIDManagerRef
}

The C++ version:

struct DeviceManager {
    std::vector<Input> input;
    IOHIDManagerRef hidManager;
};

Let's also say that Input looks like this:

enum Input {
    case Mouse(x: Int, y: Int, buttons: [ButtonState] /* only want max of 5 */)
    case Keyboard(keyCode: VirtualKey, state: KeyState)
}   

To model that in C++, we need a tagged union:

namespace DeviceType {
    enum Type { Mouse, Keyboard };
};

struct Input {
    DeviceType::Type type;

    union {
        struct { /* Mouse */
             int x;
             int y;
             ButtonState buttons[5];
        };
        struct { /* Keyboard */
            VirtualKey keyCode;
            KeyState state;
        };
    };
};

Clearly, the Swift version has some nice wins on syntax here.

Now, I said that I'm using the pull-model, so I have some update function:

func update() { /* The signature doesn't matter... */
    let mouseInputs = filter(manager.inputs) {
        switch ($0) {
             case let .Mouse(_): return true
             default: return false
        }
    }
    // do something interesting with the filtered mouse signals
}

I'm not sure if there is a better way to do that check with associated enum values, but it kinda sucks. The C++ version:

void update() {
    auto mouseInputs = filter(manager.inputs, [] (DeviceInput input) {
        return input.type == DeviceType.Mouse;
    });
    // do something interesting with the filtered mouse signals
}

Ironically, the C++ version ends up being the more readable of the two here.
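For what it's worth, Swift 2's if case pattern makes the associated-value check a little less clunky. A sketch with a simplified Input (the isMouse helper is my own, not from the post):

```swift
enum Input {
    case Mouse(x: Int, y: Int)
    case Keyboard(keyCode: Int)

    // `if case` avoids spelling out the full switch.
    var isMouse: Bool {
        if case .Mouse = self { return true }
        return false
    }
}

let inputs: [Input] = [.Mouse(x: 10, y: 20), .Keyboard(keyCode: 4), .Mouse(x: 11, y: 20)]
let mouseInputs = inputs.filter { $0.isMouse }
// mouseInputs.count == 2
```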

What's the Value?

The above example is only a subset of the full solution, but it is not really that far off from being complete. For me, I've been looking at and really wondering what the ROI is going to be for me to fully invest in learning and using Swift. Honestly, I'm not sure.

At this point, it is really starting to look like I should just go back to C++. With C++11 features, there is really not a whole lot that Swift can do over C++ for my needs, especially when weighed against Swift's current costs:

  1. Lack of code portability
  2. Lack of code interoperability
  3. Lack of full memory control

Yes, it is true that each of these also has a cost associated with it in regards to C++.

Remember, Swift is still in its infancy – it is going to need to get a lot more sophisticated to realize its aspirations, especially to become a "systems level language".

For me personally, I think it is going to be a much better use of my time writing modern C++.

Your mileage may vary.

  1. You need to evaluate this for your own needs; this type of cross-platform sharing may be of little to no value for you.

Programming Theory and Practice

Note: This is a bit of a personal reflection exploratory post; if you are looking for deep insights into the Swift language or programming in general, you're probably not going to find much here.

A little Swift aside for some context?

So why did I even start to go down this route of exploration into FRP? Well, I wanted to create an API that felt natural in Swift for interacting with the IOHIDManager class for a prototype of a game idea that I have (thanks to much motivation from Handmade Hero). This is the only way that I know how to interact with multiple gamepads, keyboards, and mice to build a proper platform layer for a game on OS X (and potentially iOS).

It turns out, building this in Swift is a pretty bad experience. The IOHIDManager class relies on C callback functions for device connection, removal, and input. This is the point in my Swift work that I know is just going to be painful, as I've done it multiple times now. The only way for this to work1 is to:

  1. Create a C file (and corresponding header) to actually implement the callbacks
  2. Export that callback in a variable so that Swift can use it
  3. Create another Swift class with the @objc attribute so that I can pass data from C back into Swift via this class. Of course, this class needs to be public which artificially adds an implementation detail to the public interface, which I also really dislike.

Ok, this is easy to do, but annoying. Also, at this point, I have now introduced a lot of overhead for dealing with input events, both in terms of code required and runtime costs. I don't like this at all… and I need to do this for keyboard, gamepad, and mouse events.

What's really annoying is that I just simply wanted to start prototyping with something like this:

public enum VirtualKey {
    case A
    // ...
}

public enum KeyState {
    case Up
    case Down
}

public struct MergedKeyState {
    let key: VirtualKey
    let start: KeyState
    let transitions: Int
}

public struct KeyboardInput {
    var samples: [VirtualKey : [KeyState]]

    public init() {
        self.samples = [VirtualKey : [KeyState]]()
        self.samples[VirtualKey.A] = [KeyState]()
    }

    public mutating func add(key: VirtualKey, state: KeyState) {
        samples[key]?.append(state)
    }

    public mutating func key(key: VirtualKey) -> [KeyState] {
        if let input = self.samples[key] {
            self.samples[key]?.removeAll()
            return input
        }
        else {
            return []
        }
    }

    public mutating func keyA() -> MergedKeyState? {
        let merged: MergedKeyState? = reduce(self.key(.A), nil) {
            merged, state in
            if let merged = merged {
                return MergedKeyState(key: merged.key,
                    start: merged.start,
                    transitions: merged.transitions + 1)
            }
            else {
                return MergedKeyState(key: .A, start: state, transitions: 0)
            }
        }

        return merged
    }
}
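For what it's worth, the call sites I was after look something like this (assuming the types above, and Swift 1.x syntax):

```swift
var input = KeyboardInput()

// Samples arrive from the input callbacks over the course of a frame…
input.add(.A, state: .Down)
input.add(.A, state: .Up)
input.add(.A, state: .Down)

// …and the game loop drains the merged state once per tick.
if let state = input.keyA() {
    // state.start is .Down and state.transitions is 2, so the key
    // ended the frame pressed after one full up/down cycle
}
```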

Is this a good approach to the problem? I don't know; that's why I'm prototyping potential solutions. There are a bunch of mutating functions, which might be bad… but the point is, the only way I could actually start playing with this was to remove it from the context of the problem altogether because of the C interop issues.

So what's my point?

Excellent question! Honestly, I'm not even quite sure. I have a lot of thoughts around many tangentially related topics, but if we focus merely on this trend of functional programming and FRP implementations, I feel like I have to be missing something, because all I really see is this: a very primitive architecture for a game engine.

If you are using "pull" to get the data, you really are simply handling input just like many game engines would. That input cascades changes down some structure of data to determine what needs to be updated and re-drawn.

If you are using a "push" model, you've really just implemented a potentially nicer way of doing exactly what you are doing today with event-driven or notification-driven UI patterns. Yes, there are some nice things we can do here, but this seems to carry over one of the biggest problems: mutations from random places at random times, albeit maybe slightly more controlled.
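To put some code behind that distinction, here's a tiny sketch, with a made-up Signal type standing in for whatever FRP primitive you prefer:

```swift
// A minimal "push" primitive: subscribers get called when an event fires.
class Signal<T> {
    var subscribers: [T -> ()] = []
    func subscribe(fn: T -> ()) { subscribers.append(fn) }
    func send(value: T) { for fn in subscribers { fn(value) } }
}

var jumps = 0

// Push: the mutation happens whenever the source decides to fire,
// much like target/action or notification-based UI code today.
let keyDown = Signal<String>()
keyDown.subscribe { _ in jumps += 1 }
keyDown.send("a")

// Pull: the game loop drains pending input at a time it controls,
// once per frame, and cascades the changes itself.
var pending = ["a", "a"]
func update() {
    jumps += pending.count
    pending.removeAll()
}
update()
// jumps is now 3: one pushed event, two pulled samples
```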

I guess at the end of it all I have a bit of melancholy about the whole situation. I wish more people were thinking up these models and designing them for the hardware itself, instead of using the hardware to, essentially, brute force the conceptual model into working on what we have today.

  1. As of Xcode 6.3 beta 2…

Handling Multiple Closure Parameters

So Natasha has a fair criticism about auto-completion1 in Xcode with regard to functions that take multiple closures and Swift's trailing closure syntax. I think there are many issues with auto-complete and Swift, but that's a different rabbit hole to go down.

Instead, what I wanted to focus on was another way to solve the problem, which also helps with the auto-complete issue.

Here's the basic original problem:

func doSomething(value: Int, onSuccess: () -> (), onError: () -> ()) {
    if value == 0 {
        onSuccess()
    }
    else {
        onError()
    }
}

So instead of writing code in one of the following ways (which all have the weird multiple-closure issue):

doSomething(5, { println("success") }, { println("error") })
doSomething(0, { println("success") }) { println("error") }

We can restructure the code using a very simple promise model.

struct CallbackResult<T> {
    let value: T
    let failed: Bool

    func onError(fn: (value: T) -> ()) -> CallbackResult<T> {
        if self.failed {
            fn(value: value)
        }

        return self
    }

    func onSuccess(fn: (value: T) -> ()) -> CallbackResult<T> {
        if !self.failed {
            fn(value: value)
        }

        return self
    }
}

func doSomething(value: Int) -> CallbackResult<Int> {
    return CallbackResult(value: value, failed: value != 0)
}

Then the usage becomes:

doSomething(10)
    .onSuccess { println("foo(\($0)): success") }
    .onError { println("foo(\($0)): error") }

doSomething(0)
    .onSuccess { println("foo(\($0)): success") }
    .onError { println("foo(\($0)): error") }

If you are familiar with JavaScript, you'll recognize a similar pattern with deferred objects in jQuery and in lots of other places.

There are lots of other benefits to this approach as well, such as helping flatten out async code that has a bunch of callbacks in it.
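As a quick illustration of that flattening point, adding a flatMap-style `then` (my addition, not something from the code above) keeps a chain of dependent steps linear instead of nested. Restating the type with that one extra method:

```swift
struct CallbackResult<T> {
    let value: T
    let failed: Bool

    // Chain a dependent step; failures short-circuit past it. A failed
    // result still needs some value to carry, hence the fallback.
    func then<U>(fn: T -> CallbackResult<U>, _ fallback: U) -> CallbackResult<U> {
        if failed {
            return CallbackResult<U>(value: fallback, failed: true)
        }
        return fn(value)
    }
}

func parse(s: String) -> CallbackResult<Int> {
    if let n = s.toInt() {
        return CallbackResult(value: n, failed: false)
    }
    return CallbackResult(value: 0, failed: true)
}

func double(n: Int) -> CallbackResult<Int> {
    return CallbackResult(value: n * 2, failed: false)
}

// Each step reads top to bottom instead of nesting a callback per step.
let good = parse("21").then(double, 0)    // good.value == 42, not failed
let bad = parse("nope").then(double, 0)   // bad.failed == true
```

This is essentially flatMap over a result type, which is exactly what makes the deeply nested callback pyramids go away.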

Anyhow, just another option.

  1. So I just realized that this was an old post of hers… auto-complete is still as terrible as ever though. Maybe one day.