Performance Xcode7 Beta 2

Update June 27th @ 3:21 PM

I was able to squeeze out some more performance by hoisting some constants that I noticed were being recomputed inside my loops.

Here's the updated code:

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int) {
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0

        let yoffset = int4(Int32(offsetY))
        let xoffset = int4(Int32(offsetX))

        let inc = int4(0, 1, 2, 3)
        let blueaddr = inc + xoffset

        for var y: Int32 = 0, height = buffer.height; y < Int32(height); ++y {
            let green = int4(y) + yoffset

            for var x: Int32 = 0, width = buffer.width; x < Int32(width); x += 4 {
                let blue = int4(x) + blueaddr

                // If we had 8-bit operations above, we should be able to write this as a single blob.
                p[offset++] = 0xFF << 24 | UInt32(blue.x & 0xFF) << 16 | UInt32(green.x & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.y & 0xFF) << 16 | UInt32(green.y & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.z & 0xFF) << 16 | UInt32(green.z & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.w & 0xFF) << 16 | UInt32(green.w & 0xFF) << 8
            }
        }
    }
}

And the new timings with this update:

Language: Swift, Optimization: -O, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 15.75163 │ 15.00523 │ 17.31266 │ 0.8139 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: Swift, Optimization: -Ounchecked, Samples = 10, Iterations = 30 ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 3.789642 │ 3.272549 │ 5.110642 │ 0.6232 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

The -O case was unaffected; however, the -Ounchecked build is now about twice as fast as before and practically the same as the C version.

Update June 27th @ 1:56 AM

I noticed a bug that I had when adding the x-values: they should have been incremented by (0, 1, 2, 3). I updated the code samples and timings, though the analysis comes out roughly the same. I did see that the SIMD code doesn't have much benefit under the most aggressive compiler settings. That's not too unexpected, as this code is fairly trivial.

Original Entry

Well, it's that time again: time to look at the performance of Swift. I've been using my swift-perf repo, which contains various implementations of a RenderGradient function.

So, how does Swift 2.0 stack up in Xcode 7 Beta 2? Good! We've seen some improvements in debug builds, which is great. There is still a long way to go, but it's getting there. As for release builds, not too much difference there.

However, there is a new thing that got added in Swift 2.0 – basic SIMD support.

I decided to update my RenderGradient with two different implementations: one that uses an array of pixel data through the array interface, and another that interacts with the array through a mutable pointer. The latter is required for the best speed.

Here's the implementation:

NOTE: I'm pretty new to writing SIMD code, so if there are any things I should fix, please let me know!

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int) {
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0

        let yoffset = int4(Int32(offsetY))
        let xoffset = int4(Int32(offsetX))

        // TODO(owensd): Move to the 8-bit SIMD instructions when they are available.

        // NOTE(owensd): There is a performance loss using the friendly versions.

        //for y in 0..<buffer.height {
        for var y = 0, height = buffer.height; y < height; ++y {
            let green = int4(Int32(y)) + yoffset

            //for x in stride(from: 0, through: buffer.width, by: 4) {
            for var x: Int32 = 0, width = buffer.width; x < Int32(width); x += 4 {
                let inc = int4(0, 1, 2, 3)
                let blue = int4(x) + inc + xoffset

                p[offset++] = 0xFF << 24 | UInt32(blue.x & 0xFF) << 16 | UInt32(green.x & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.y & 0xFF) << 16 | UInt32(green.y & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.z & 0xFF) << 16 | UInt32(green.z & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.w & 0xFF) << 16 | UInt32(green.w & 0xFF) << 8
            }
        }
    }
}

The basic idea is to fill a register on the CPU with multiple values and perform the operation on that whole set instead of doing it one value at a time. For comparison, the non-SIMD version is below.

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int)
{
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0
        for (var y = 0, height = buffer.height; y < height; ++y) {
            for (var x = 0, width = buffer.width; x < width; ++x) {
                let pixel = RenderBuffer.rgba(
                    0,
                    UInt8((y + offsetY) & 0xFF),
                    UInt8((x + offsetX) & 0xFF),
                    0xFF)
                p[offset] = pixel
                ++offset;
            }
        }
    }
}
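To put the idea in miniature, here's a toy illustration using the same simd module (Swift 2 syntax; this is illustrative only, not code from the repo). One int4 addition does the work of four scalar additions:

```swift
import simd

// Scalar: four separate additions, one per element.
let xs: [Int32] = [0, 1, 2, 3]
let scalar = xs.map { $0 + 10 }            // [10, 11, 12, 13]

// SIMD: one addition across all four lanes at once.
let lanes = int4(0, 1, 2, 3)
let vector = lanes + int4(10)              // (10, 11, 12, 13)
```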

The awesome thing is that the SIMD version is a bit faster (update June 27th @ 9:20 AM: it was 2x before I fixed a bug, dang!). When 8-bit operations are allowed, it should get even faster, as we can reduce the amount of work even further and assign the result directly into memory.

Here is the performance break-down for these two methods in -O and -Ounchecked builds:

Swift Performance

Language: Swift, Optimization: -O, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer)                        │ 18.07803 │ 17.19691 │ 21.00281 │ 1.4847 │
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 15.88613 │ 15.11753 │ 20.16230 │ 1.5437 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: Swift, Optimization: -Ounchecked, Samples = 10, Iterations = 30 ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer)                        │ 6.623639 │  6.22851 │ 8.339521 │ 0.6325 │
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 6.629701 │ 5.930751 │ 8.751819 │ 1.0005 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Now, here's where things start to get really interesting. I have a C implementation of RenderGradient as well; here are its numbers:

C Performance

Language: C, Optimization: -Os, Samples = 10, Iterations = 30             ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient (Pointer Math)                                             │    9.364 │    8.723 │   11.338 │  0.994 │
RenderGradient (SIMD)                                                     │    7.751 │    7.101 │    9.642 │  0.960 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: C, Optimization: -Ofast, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient (Pointer Math)                                             │    3.302 │    2.865 │    5.061 │  0.693 │
RenderGradient (SIMD)                                                     │    7.607 │    6.991 │    9.923 │  0.887 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

When Swift is compiled without the safety checks, it's sitting right between the "Pointer Math" and the "SIMD" versions. The safety checks are causing about a 2-3x slow-down over the -Ounchecked version though. There might be some room for improvement still in how I'm structuring things. Also, the C SIMD version sees no benefit from -Ofast, while the C pointer-math version more than doubles in speed.

I find this really exciting! We're really close to being able to write high-level, low-syntactical-noise code (compared to C) that still performs in the same ballpark.

Again, the code for this can be found here: swift-perf. If you know of any optimizations I should make in the C or Swift code, please let me know!


The new print() is Flipping Busted (nope, it’s me!)

UPDATE Friday, June 26th @ 10:49 PM

Yep… so I've tracked down the issue: http://www.openradar.me/21577729. The problem is that single-parameter generic functions are implicitly turning multiple arguments into a tuple. That's the root cause of the initial bug report.

Good times.

Here's the code if you want to try it out:

func f<T>(value: T) {
    print("value: \(value)")
}

func f<T>(value: T, hi: Int) {
    print("value: \(value) \(hi)")
}

f("hi")
f("hi", append: false)               // compiles: the arguments collapse into a tuple
f("hi", append: false, omg: "what?") // same here

f("hi", hi: 12)  // calls f(value: T, hi: Int)
f("hi", Hi: 12)  // calls f(value: T)

UPDATE Friday, June 26th @ 10:02 PM

OK… so I totally screwed up… It's my fault, Swift is just fine (kind-of). (I still want print() and println() back though).

SO… it turns out I have a typo in my code below in my bug report rant…

for  _ in 0 ..< titleWidth { print("━", appendNewLine: false) }
print("╇", appendNewLine: false)

Should have been this…

for  _ in 0 ..< titleWidth { print("━", appendNewline: false) }
print("╇", appendNewline: false)

I'll let you spot the difference. I am unsure as to why I did not get a compiler error. The only reason I noticed the issue is because I tried to work around the issue reported below by using another overload of print(). That overload did correctly flag my labeling error. So, still a bug with print(), but nowhere near as bad as I originally thought below.

Also, in the playground, print("hi", whateverIWant: false) works…

Anyhow, back to your regularly scheduled broadcast…

SHAME POST FOR ALL TO LEARN FROM…

I'm just going to copy my bug report here as I think this is worthy of being shared more broadly…

Bug Report First of all, I'm sure this bug has been logged before, but I don't care because this is so flipping irritating right now.

A "systems language" that has no proper ability to write console apps and output text to the screen succinctly is just broken. The move to replace print() and println() with a single function print() with an overload – terrible decision.

Ok… breathe… I can deal, it's just an overloaded function now, no problem…

Imagine your surprise when you try and use it to print out some table headers:

for  _ in 0 ..< titleWidth { print("━", appendNewLine: false) }
print("╇", appendNewLine: false)

And you get output like this in the console:

("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("╇", false)

This is ridiculous and completely broken. Please change print() back to the proper print() and println() versions and fix the implementation to actually output correctly to the screen.


‘Design Question: Extension, Conforming Type, or Generic Type?’

Here's a design question for you Swifters out there.

I'm playing around with building a tokenizer that works based on a set of rules that you provide to it. From what I see, I have three basic design choices.

//
// Option 1: A tokenizer that manages its own cursor into the `ContentType`.
//
public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType)

    mutating func next(index: ContentType.Index?) throws -> Token<ContentType>?
}

//
// Option 2: A tokenizer that passes the next index back to the user for the next call.
//
// NOTE: A tuple breaks the compiler so this type is needed: rdar://21559587.
public struct TokenizerResult<ContentType where ContentType : CollectionType> {
    public let token: Token<ContentType>
    public let nextIndex: ContentType.Index

    public init(token: Token<ContentType>, nextIndex: ContentType.Index) {
        self.token = token
        self.nextIndex = nextIndex
    }
}

public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType)

    // HACK(owensd): This version is necessary because default parameters crash the compiler in Swift 2, beta 2.
    func next() throws -> TokenizerResult<ContentType>?
    func next(index: ContentType.Index?) throws -> TokenizerResult<ContentType>?
}

//
// Option 3: A mixture of option #1 and #2 where the tokenizer manages its own cursor location but
// does so by returning a new instance of the tokenizer value.
//
public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType, currentIndex: ContentType.Index?)
    func next() throws -> Self?
}

Option 1

The main problem I have with option #1 is that it puts me in the business of managing bookkeeping details. This has the terrible side-effect of requiring me to expose all of the details of that bookkeeping work in the protocol so that I can provide a default implementation of how this works when ContentType is a String. That's bad.

The other option is to create a struct that conforms to the protocol and provides an implementation for various ContentTypes. However, I want to reserve that for a conforming type for particular structures of data, like CSVTokenizer or JSONTokenizer.

However, this option has the benefit of being extremely easy to use: the caller doesn't need to maintain the nextIndex (as in option #2) or new instances of the tokenizer (as in option #3). Simply call next() and you get the expected behavior.

Option 2

This gets rid of all of the negatives of option #1, but it does add the burden of calling the next() function with the correct index. Of course, this does allow some additional flexibility. My big concern here is the additional code each caller will need to write every time next() is called: they have to unpack the optional result, then the nextIndex value, and call next() with it.

Maybe this is OK. The trade-offs seem better at least. And, I can provide a default implementation for any Tokenizer that makes use of a String and String.Index.

The thing I like most about this approach is that each type of Tokenizer simply provides the rules as an overloaded read-only property.
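For instance, a default implementation for String-backed tokenizers might be sketched like this (Swift 2 syntax; the Token(content:range:) initializer here is an assumption for illustration, not the repo's actual API):

```swift
extension Tokenizer where ContentType == String {
    func next(index: ContentType.Index?) throws -> TokenizerResult<ContentType>? {
        let start = index ?? content.startIndex
        if start == content.endIndex { return nil }

        // The first rule that consumes any input wins.
        for rule in rules {
            if let end = rule(content: content, offset: start) where end != start {
                // Token(content:range:) is a guess at the real initializer.
                let token = Token(content: content, range: start..<end)
                return TokenizerResult(token: token, nextIndex: end)
            }
        }
        return nil
    }
}
```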

Option 3

This kinda merges option #1 and option #2 together; it's also my least favorite. I don't like all of the potential copying that needs to be done. It's not clear to me that this will be optimized away, especially under all use cases. However, I thought I should at least mention it…

Option 4

Ok, there really is another option. It's to create a struct Tokenizer that provides an init() allowing you to pass in the set of rules to use when matching tokens. I really don't like this approach that much either. This turns handling rules for common tokenizer constructs like CSV and JSON into these free-floating arrays of rules.

That feels wrong to me. A concrete implementation of a CSV Tokenizer seems like the better approach.

Wrapping Up

I'm leaning towards option #2 (in fact, that is what I have implemented currently). It seems to be working alright, though the call site is a little verbose.

guard let result = try tokenizer.next() else { /* bail */ }

// do stuff...

// Do it again!
guard let result = try tokenizer.next(index: result.nextIndex) else { /* bail */ }

This is probably OK, and it's likely to be done in a loop. It just feels like a lot of syntax and bookkeeping for the caller to deal with.
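In a loop, the bookkeeping looks roughly like this (a sketch; it assumes a String-backed tokenizer and a hypothetical process() function for the per-token work):

```swift
var cursor: String.Index? = nil
while let result = try tokenizer.next(index: cursor) {
    process(result.token)       // hypothetical: whatever per-token work you need
    cursor = result.nextIndex   // thread the cursor through to the next call
}
```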

Anyhow, thoughts? Better patterns to consider?


‘RE: Swift Protocols Question (inessential.com)’

Over the weekend, Brent posted a question about Swift protocols. This is, hopefully, just a point-in-time problem with Swift's type system. Unfortunately, there isn't a great workaround for it.

The crux of the issue is this: protocols with a Self requirement force your code to be made up of essentially homogeneous types in order to do anything useful. This kinda stinks, and in Brent's case, it is not what he wants.

If you ever do this:

protocol Value : Equatable {}

Then boom, you're stuck. The Equatable protocol has a Self requirement, which trickles downstream to all of your protocols and types that apply a conformance to it.
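Concretely, once Value inherits from Equatable, the signature Brent wants no longer compiles; the comment below is the compiler's actual diagnostic:

```swift
protocol Value : Equatable {}

protocol Smashable {
    // error: protocol 'Value' can only be used as a generic constraint
    // because it has Self or associated type requirements
    func valueBySmashingOtherValue(value: Value) -> Value
}
```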

Now, I don't know Brent's situation, but there is a way out of this if your situation allows for it: don't conform your protocol to Equatable; instead, conform your types to it.

protocol Value {}
struct MyType : Value, Equatable {}

When you do this, you can now write the signature that Brent wanted.

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

The problem here is that if you want your types to be equatable to one another, you'll need to provide a heterogeneous equality function:

func ==(lhs: Value, rhs: Value) -> Bool {
    // lhs == rhs?
}

This means that your base Value protocol needs to define all of the members for equality, which again, might be OK for your scenario.

A full sample looks like this:

protocol Value {
    var identifier: String { get }
}

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

struct Foo : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Bar(identifier: "smashed by Foo")
    }
}
func ==(lhs: Foo, rhs: Foo) -> Bool {
    return lhs.identifier == rhs.identifier
}

struct Bar : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Foo(identifier: "smashed by Bar")
    }
}
func ==(lhs: Bar, rhs: Bar) -> Bool {
    return lhs.identifier == rhs.identifier
}

func ==(lhs: Value, rhs: Value) -> Bool {
    return lhs.identifier == rhs.identifier
}

let f = Foo(identifier: "foo")
let b = Bar(identifier: "bar")

let fsmash = f.valueBySmashingOtherValue(b)
let bsmash = b.valueBySmashingOtherValue(f)

if f == b { print("f == b") }
else { print("f != b") }

if fsmash == bsmash { print("fsmash == bsmash") }
else { print("fsmash != bsmash") }

UPDATE June 22, 2015

I probably should have mentioned the isEqualTo guidance from the Protocols talk at WWDC (also shown in the Crustacean sample code). We can clean up the sample code a bit so that our conforming types don't each have to implement an == operator or an isEqualTo function:

protocol Value {
    var identifier: String { get }

    func isEqualTo(other: Value) -> Bool
}

extension Value {
    func isEqualTo(other: Value) -> Bool {
        return self.identifier == other.identifier
    }
}

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

struct Foo : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Bar(identifier: "smashed by Foo")
    }
}

struct Bar : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Foo(identifier: "smashed by Bar")
    }
}

func == <T : Value>(lhs: T, rhs: T) -> Bool {
    return lhs.isEqualTo(rhs)
}

func ==(lhs: Value, rhs: Value) -> Bool {
    return lhs.isEqualTo(rhs)
}

let f = Foo(identifier: "foo")
let b = Bar(identifier: "bar")

let fsmash = f.valueBySmashingOtherValue(b)
let bsmash = b.valueBySmashingOtherValue(f)

if f == b { print("f == b") }
else { print("f != b") }

if fsmash == bsmash { print("fsmash == bsmash") }
else { print("fsmash != bsmash") }

Catching Errors for Testing (Or Why Enums Suck… Sometimes)

I have a fairly reasonable task: I want to write some test code to ensure that certain paths of my code are throwing errors, but not only that, errors of a certain "type".

OK… this sounds like it should be really trivial to do.

Here's the setup:

enum MyErrors : ErrorType {
    case Basic
    case MoreInfo(title: String, description: String)
}

func f(value: Int) throws {
    switch value {
    case 0:
        throw MyErrors.Basic

    case 1:
        throw MyErrors.MoreInfo(title: "A title?", description: "1s are bad, k?")

    default:
        break
    }
}

And for the tests:

func testFThrowsOn0() {
    do {
        try f(0)
        XCTFail("This was supposed to throw")
    }
    catch MyErrors.Basic {}
    catch {
        XCTFail("Incorrect error thrown")
    }
}

func testFThrowsOn1() {
    do {
        try f(1)
        XCTFail("This was supposed to throw")
    }
    catch MyErrors.MoreInfo {}
    catch {
        XCTFail("Incorrect error thrown")
    }
}

func testFDoesNotThrowOn2() {
    do {
        try f(2)
    }
    catch {
        XCTFail("This was not supposed to throw")
    }
}

Ok, the tests do what they are supposed to do… but that is some ugly code. What I want to write is this:

func testFThrowsOn0() {
    XCTAssertDoesThrowErrorOfType(try f(0), MyErrors.Basic)
}

func testFThrowsOn1() {
    XCTAssertDoesThrowErrorOfType(try f(1), MyErrors.MoreInfo)
}

func testFDoesNotThrowOn2() {
    XCTAssertDoesNotThrow(try f(2))
}

I have no idea how to write this code… The simple version, XCTAssertDoesThrow, is trivial: just catch any error and perform the logic. However, I don't know how to pass in the expected value, especially for an enum with associated values, so that it can be properly pattern matched; I don't even know if that's possible.

The only way I know how to even come close to what I want is to cheat significantly.

enum MyErrors : ErrorType {
    case Basic
    case MoreInfo  //(title: String, description: String)
}

func == (lhs: ErrorType, rhs: ErrorType) -> Bool {
    return lhs._code == rhs._code
}

func != (lhs: ErrorType, rhs: ErrorType) -> Bool {
    return !(lhs == rhs)
}

func XCTAssertDoesThrowErrorOfType(@autoclosure fn: () throws -> (),
    message: String = "", type: ErrorType, file: String = __FILE__,
    line: UInt = __LINE__)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        if error != type { XCTFail(message, file: file, line: line) }
    }
}

So I had to:

  1. Get rid of my associated enum value, which means I'll need to wrap that information into a struct.
  2. Define the == and != operators to compare ErrorType through a hack that exposes the _code value used for bridging to ObjC's NSError (the ordinal value of the case statement).
  3. Define the XCTAssertDoesThrowErrorOfType function.

This allows me to achieve most of what I wanted, but it came at the cost of some really hacky code that is likely to break in future betas of Swift 2.0.

So really I'm back at square one:

  1. Try the do-try-catch boilerplate code each time.
  2. Catch all errors and report that as a test success.
  3. Write the really hacky code.

I think option #3 is a no-go. So out of convenience and hope that we'll be able to solve this better at a later date, I'm likely to go with option #2. BUT this leaves a test hole… :/

Any other ideas?
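One more direction worth sketching (XCTAssertDoesThrowMatching is a made-up name, and this is only a sketch in Swift 2 beta syntax): let the caller supply the pattern match as a closure, so that if case can destructure associated values.

```swift
func XCTAssertDoesThrowMatching(@autoclosure fn: () throws -> (),
    message: String = "", file: String = __FILE__, line: UInt = __LINE__,
    matches: ErrorType -> Bool)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        // Delegate the "is this the right error?" question to the caller.
        if !matches(error) { XCTFail(message, file: file, line: line) }
    }
}

// The trailing closure carries the pattern:
XCTAssertDoesThrowMatching(try f(1)) {
    if case MyErrors.MoreInfo = $0 { return true }
    return false
}
```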

Update – Added the 4th option I failed to mention…

I forgot about the fourth option: promote each of the enum case values to its own type. Though… that has some significant drawbacks as well.

struct MyBasicError : ErrorType {
    let _code: Int
    let _domain: String
}

struct MoreInfoError : ErrorType {
    let _code: Int
    let _domain: String

    let title: String
    let description: String
}

First, we need to add the _code and _domain workarounds. That sucks…

And now we want a type signature like this:

func XCTAssertDoesThrowErrorOfType<T : ErrorType>(@autoclosure fn: () throws -> (),
    message: String = "", type: T, file: String = __FILE__,
    line: UInt = __LINE__)

However… since Swift cannot explicitly specialize a generic call, we'd need to pass an actual instance into the function for T, or switch to taking T.Type and passing T.self. That's also less than ideal. So let's just do this:

func XCTAssertDoesThrowErrorOfType(@autoclosure fn: () throws -> (),
    message: String = "", type: MirrorType, file: String = __FILE__,
    line: UInt = __LINE__)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        if reflect(error).summary != type.summary {
            XCTFail(type.summary, file: file, line: line)
        }
    }
}

At least the callsite doesn't need an instance anymore:

XCTAssertDoesThrowErrorOfType(try f(0), type: reflect(MyBasicError))

The plus side is that this works for both structs and enums, so long as the enum only has a single case that you care about comparing.

Anyhow… still stuck on how to actually do this in a "proper" way or if we'll be able to in Swift 2.0.
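For reference, the T.Type variant dismissed above might be sketched like so; note that it can only check the error's concrete type via a dynamic cast, so it still can't tell enum cases apart:

```swift
func XCTAssertDoesThrowErrorOfType<T : ErrorType>(@autoclosure fn: () throws -> (),
    message: String = "", type: T.Type, file: String = __FILE__,
    line: UInt = __LINE__)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        // A dynamic cast against the generic type parameter.
        if !(error is T) { XCTFail(message, file: file, line: line) }
    }
}

// No instance required at the callsite, just the metatype:
XCTAssertDoesThrowErrorOfType(try f(0), type: MyBasicError.self)
```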

Conclusion

It seems that, at least for now, the reflect() solution is the best I can come up with. It works for both enums and structs, and it can be extended to consider the details of each of those if desired. The only real drawback is associated enums: you need to pass in an instance of that enum (you could pass in strings, but that exposes an implementation detail and removes the ability to further inspect the types).

XCTAssertDoesThrowErrorOfType(try f(0), type: reflect(EnumError.Info(title: "")))

XCTest Missing ‘throws’ Testing

It looks like XCTest is missing some basic support for testing if a function throws or not. Here are two snippets that I'm using until that support is added.

func XCTAssertDoesNotThrow(@autoclosure fn: () throws -> (), message: String = "", file: String = __FILE__, line: UInt = __LINE__) {
    do {
        try fn()
    }
    catch {
        XCTFail(message, file: file, line: line)
    }
}

func XCTAssertDoesThrow(@autoclosure fn: () throws -> (), message: String = "", file: String = __FILE__, line: UInt = __LINE__) {
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
    }
}

I'm not sure how to extend this to validate a specific error, though…

Using it in a test is pretty easy:

func f() throws {
}

func someTest() {
    XCTAssertDoesNotThrow(try f())
}

Do-Catch Seems Overly Cumbersome

There's been a lot of conversation on Twitter and people's blogs about the error handling model, especially with regards to throws vs. Result<T, E>. I think that's great because people are really getting engaged in the topic and hopefully the situation will get better.

The latest thing I'm running up against is the cumbersomeness of the do-catch pattern for handling errors. Not only that, I think it promotes the bad exception handling that Java had, with people simply writing a single catch (Exception e) handler. Sometimes that's OK, I guess.

The thing I like most about throws is that it cleans up the API signature for functions:

func foo() throws -> Int { ... }

vs. this:

func foo() -> Result<Int, ErrorType> { ... }

It's where we go to handle it that is really starting to get to me:

do {
    try foo()
}
catch {
}

My first complaint: I don't think it's a good idea to write giant do-catch blocks with multiple throwing functions within them:

do {
    try foo()
    // ...
    try foo()
}
catch {
}

I just think that's bad style. It pushes the error handling further and further from the code that is actually going to error out. It also defeats the purpose of being intentional about handling errors.

This is what will happen once you get a handful of functions that throw:

func someFunc() {
    do {
        /*
         You know this to be true... we've seen it.
        */
    }
    catch {
        // Um.. I guess I should do something with the "error"?
    }
}

Now, I do like the try and the throws annotations. I think they add clarity to the code, especially as code grows and needs to be maintained over time. But, I think it might have been cleaner to do something more like guard.

try foo() catch {
    /* handle the error */
}

The thing I really like about this is that only the error-handling code gets nested a level. This keeps the happy path of the code at the same level and makes it explicit where the bad path is.

Then, if we combine this with guard, we can get this:

guard let result = try foo() catch {
    /* handle the error */
    /* also, forced scope exit */
}

/* result is safe to use here */

Of course, if we want to bubble the errors up, the catch clause could be omitted if the enclosing function/closure also throws.

func someFunc() throws {
    try foo() // this is ok because someFunc() can throw
}

Anyhow, just some thoughts as I'm starting to write a lot more nesting than I care to.

I've logged rdar://21406512 to track it.


Protocol Oriented Programming

There was a really great talk at WWDC this year around Protocol-Oriented Programming in Swift. It did get me thinking, though, about how this is different from what we have today in ObjC, or even in a language like C++.

There is also a really good blog post by Marcel Weiher talking about some of this; it makes for some good reading as well.

OK, so the heart of it, as I understood the talk, is thinking of your types in terms of protocols instead of base classes. The fundamental idea is to get rid of one of the really nasty problems of OOP: implicit data sharing. That's great, because that problem sucks.

It turns out we can do this today in ObjC, with one caveat: default protocol implementations. That feature is new with Swift 2.0 and apparently wasn't worth bringing back to ObjC.

This code is inspired heavily by the Crustacean demo app that goes along with the WWDC video.

ObjC

Let's start with the renderer protocol:

@protocol KSRenderer <NSObject>
- (void)moveTo:(CGPoint)position;
- (void)lineTo:(CGPoint)position;
- (void)arcAt:(CGPoint)center
       radius:(CGFloat)radius
   startAngle:(CGFloat)startAngle
     endAngle:(CGFloat)endAngle;
@end

Ok, that looks simple enough.

Our sample KSTestRenderer will look like this:

@interface KSTestRenderer : NSObject<KSRenderer>
@end

@implementation KSTestRenderer
- (void)moveTo:(CGPoint)position
{
    printf("  moveTo(%f, %f)\n", position.x, position.y);
}

- (void)lineTo:(CGPoint)position
{
    printf("  lineTo(%f, %f)\n", position.x, position.y);
}

- (void)arcAt:(CGPoint)center
       radius:(CGFloat)radius
   startAngle:(CGFloat)startAngle
     endAngle:(CGFloat)endAngle
{
    printf("  arcAt(center: (%f, %f), radius: %3.2f,"
           " startAngle: %3.2f, endAngle: %3.2f)\n",
          center.x, center.y, radius, startAngle, endAngle);
}
@end

Alright, all looking good so far. Now it's time for all of the shape code.

@protocol KSDrawable
- (void)draw:(id<KSRenderer>)renderer;
@end

@interface KSPolygon : NSObject<KSDrawable>
@property (copy, nonatomic) NSArray<NSValue *> *corners;
@end

@implementation KSPolygon
- (instancetype)init
{
    if (self = [super init]) {
        _corners = [[NSMutableArray<NSValue *> alloc] init];
    }

    return self;
}

- (void)draw:(id<KSRenderer>)renderer
{
    printf("polygon:\n");
    [renderer moveTo:[_corners.lastObject pointValue]];
    for (NSValue *value in _corners) {
        [renderer lineTo:[value pointValue]];
    }
}
@end

@interface KSCircle : NSObject<KSDrawable>
@property (assign) CGPoint center;
@property (assign) CGFloat radius;
@end

@implementation KSCircle
- (void)draw:(id<KSRenderer>)renderer
{
    printf("circle:\n");
    [renderer arcAt:_center radius:_radius startAngle:0.0f endAngle:M_PI * 2];
}
@end

@interface KSDiagram : NSObject<KSDrawable>
@property (copy, nonatomic) NSArray<id<KSDrawable>> *elements;
- (void)add:(id<KSDrawable>)other;
@end

@implementation KSDiagram
- (instancetype)init
{
    if (self = [super init]) {
        _elements = [[NSMutableArray alloc] init];
    }

    return self;
}

- (void)add:(id<KSDrawable>)other
{
    [(NSMutableArray *)_elements addObject:other];
}

- (void)draw:(id<KSRenderer>)renderer
{
    for (id<KSDrawable> drawable in _elements) {
        [drawable draw:renderer];
    }
}
@end

Finally, we get to the usage code:

KSCircle *circle = [[KSCircle alloc] init];
circle.center = CGPointMake(187.5f, 333.5f);
circle.radius = 93.75f;

KSPolygon *triangle = [[KSPolygon alloc] init];
triangle.corners = @[ [NSValue valueWithPoint:CGPointMake(187.5f, 427.25f)],
                      [NSValue valueWithPoint:CGPointMake(268.69f, 286.625f)],
                      [NSValue valueWithPoint:CGPointMake(106.31f, 286.625f)] ];

KSDiagram *diagram = [[KSDiagram alloc] init];
[diagram add:circle];
[diagram add:triangle];

KSTestRenderer *renderer = [[KSTestRenderer alloc] init];
[diagram draw:renderer];

The output of this code is:

circle:
  arcAt(center: (187.500000, 333.500000), radius: 93.75, startAngle: 0.00, endAngle: 6.28)
polygon:
  moveTo(106.309998, 286.625000)
  lineTo(187.500000, 427.250000)
  lineTo(268.690002, 286.625000)
  lineTo(106.309998, 286.625000)

Of course, one of the "big wins" was the ability to retrofit a class to apply conformance to some protocol. This is done in ObjC through a category.

First, we'll update the protocol so that it has something that can be opted into. Note that this is where ObjC and Swift deviate: in ObjC, we have to mark the selector as optional because we cannot define a default implementation for it.

@protocol KSRenderer <NSObject>
- (void)moveTo:(CGPoint)position;
- (void)lineTo:(CGPoint)position;
- (void)arcAt:(CGPoint)center
       radius:(CGFloat)radius
   startAngle:(CGFloat)startAngle
     endAngle:(CGFloat)endAngle;

@optional
- (void)circleAt:(CGPoint)center
          radius:(CGFloat)radius;
@end

Next, and again because Swift has a feature that ObjC doesn't, we need to update the draw: selector on the KSCircle class to handle selectively calling this optional circleAt:radius:.

- (void)draw:(id<KSRenderer>)renderer
{
    printf("circle:\n");
    if ([renderer respondsToSelector:@selector(circleAt:radius:)]) {
        [renderer circleAt:_center radius:_radius];
    }
    else {
        [renderer arcAt:_center radius:_radius
             startAngle:0.0f endAngle:M_PI * 2];
    }
}

Lastly, a category on KSTestRenderer is in order to retrofit the class with the new functionality.

@interface KSTestRenderer (Hacky)
@end

@implementation KSTestRenderer (Hacky)
- (void)circleAt:(CGPoint)center radius:(CGFloat)radius
{
    printf("  circleAt(center: (%f, %f), radius: %3.2f)\n",
           center.x, center.y, radius);
}
@end

Now the output of our program is this:

circle:
  circleAt(center: (187.500000, 333.500000), radius: 93.75)
polygon:
  moveTo(106.309998, 286.625000)
  lineTo(187.500000, 427.250000)
  lineTo(268.690002, 286.625000)
  lineTo(106.309998, 286.625000)

As you can see, the way in which a circle is rendered is now different because of our category. Making a CGContextRef renderer would be as straightforward as applying the protocol extension to it and implementing the methods, as done in Swift, except… that too is a Swift-only feature: extensions can be applied to non-ObjC types.

C++

Things are a little more interesting in C++, which is unable to extend classes without creating a new type. Though… I'm not sure that is necessarily a bad thing (more on that later).

Anyhow, here's the full (very rough) code sample for C++:

struct Renderer {
    virtual void moveTo(CGPoint position) = 0;
    virtual void lineTo(CGPoint position) = 0;
    virtual void arcAt(CGPoint center, CGFloat radius, CGFloat startAngle, CGFloat endAngle) = 0;

    virtual void circleAt(CGPoint center, CGFloat radius) {
        arcAt(center, radius, 0.0f, M_PI * 2);
    }
};

struct TestRenderer : public Renderer {
    void moveTo(CGPoint position) {
        printf("  moveTo(%f, %f)\n", position.x, position.y);
    }

    void lineTo(CGPoint position) {
        printf("  lineTo(%f, %f)\n", position.x, position.y);
    }

    void arcAt(CGPoint center, CGFloat radius, CGFloat startAngle, CGFloat endAngle) {
        printf("  arcAt(center: (%f, %f), radius: %3.2f, startAngle: %3.2f, endAngle: %3.2f)\n",
               center.x, center.y, radius, startAngle, endAngle);
    }
};

struct Drawable {
    virtual void draw(Renderer &renderer) = 0;
};

struct Polygon : public Drawable {
    std::vector<CGPoint> corners;

    void draw(Renderer &renderer) {
        printf("polygon:\n");
        renderer.moveTo(corners.back());
        for (auto p : corners) { renderer.lineTo(p); }
    }
};

struct Circle : public Drawable {
    CGPoint center;
    CGFloat radius;

    void draw(Renderer &renderer) {
        printf("circle:\n");
        //renderer.arcAt(center, radius, 0.0f, M_PI * 2);
        renderer.circleAt(center, radius);
    }
};

struct Diagram : public Drawable {
    std::vector<Drawable *> elements;
    void add(Drawable *other) {
        elements.push_back(other);
    }

    void draw(Renderer &renderer) {
        for (auto e : elements) {
            e->draw(renderer);
        }
    }
};

struct TestRendererExtension : public TestRenderer {
    void circleAt(CGPoint center, CGFloat radius) {
        printf("  circle(center: (%f, %f), radius: %3.2f)\n", center.x, center.y, radius);
    }
};

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        Circle circle;
        circle.center = CGPointMake(187.5f, 333.5f);
        circle.radius = 93.75f;

        Polygon triangle;
        triangle.corners.push_back(CGPointMake(187.5f, 427.25f));
        triangle.corners.push_back(CGPointMake(268.69f, 286.625f));
        triangle.corners.push_back(CGPointMake(106.31f, 286.625f));

        Diagram diagram;
        diagram.add(&circle);
        diagram.add(&triangle);

        TestRenderer renderer;
        diagram.draw(renderer);
    }
    return 0;
}

The above code shows the default implementation of circleAt that can be applied to all types that inherit this base class.

NOTE: A protocol (or interface) is really just a definition of functionality with no member data.

In the C++ version, opting into the retrofitted functionality is simply a matter of using the new type:

TestRendererExtension renderer;
diagram.draw(renderer);

Wrapping Up

I think the important takeaway from the Swift talk is not really a "new paradigm" of programming, but rather a better way to compose software using techniques that we already use day-to-day. It makes it easier to do the better thing (get rid of accidental data sharing between types) and reduces the boilerplate code required to do it.

There is one thing I'm worried about though: the protocol extension seems to create a very similar problem. We get rid of unintentional data sharing between type hierarchies, but replace it with potentially unintentional functional changes in our programs.

Imagine a more complicated set of protocols and types interacting, and along comes a protocol extension that overrides a default function for the protocol; now your output went from:

circle:
  arcAt(center: (187.500000, 333.500000), radius: 93.75, startAngle: 0.00, endAngle: 6.28)

To this:

circle:
  circleAt(center: (187.500000, 333.500000), radius: 93.75)

To me, this smells a lot like the fundamental issue we saw with data sharing between types – unintended side-effects. For this reason alone, I think I prefer the C++ approach: opting into the new functionality requires explicitly creating a new type.


Swift Throws – I Get It

It took me a little while to really appreciate the design of throws. At first, it was sheer disbelief that the Swift team created exceptions. Then it was mild irritation that a Result<T, E> type wasn't used.

Now, I really like it.

I see a lot of discussion about it (and some with comments on my blog) and different proposals being made. One of the things I keep seeing missed, though, is that throws forces you to be explicit about handling, ignoring, or bubbling the error back up.

The thing that really won me over was remembering how many times I had to link to this guide: ObjC Error Handling. For me, I've been used to it for so long that it's easy to forget just how much convention there is there. I've also seen some pretty amusing code that attempts to use NSError but gets it so wrong.

It is a good thing that Swift has both codified and simplified the error handling conventions of Cocoa.

Also, there is no more ambiguity on how to handle this case:

- (int)ohCrapAScalar:(NSError **)error {
    // So... what magic value do I use this time?
}

With Swift, it's a non-issue (unless you're bridging with ObjC – rdar://21360155):

func scalarNoProblem() throws -> Int {
    return 42 // any value works; failures are thrown, not encoded as magic numbers
}

The other complaint I see is around async… well, I honestly think that is a point-in-time problem. I think Swift will get something like await, and it seems it would be natural to have this:

do {
    await try someAsyncThrowingCode()   
}
catch {
    fatalError("crap")
}

Of course, that doesn't cover all async cases, but you get the idea.


Xcode UI Testing

Let's talk briefly about testing.

Xcode 7 is coming out with a great feature: XCTest UI tests. Now, some of you might not know, but we've been able to do UI testing for iOS apps for quite a while now using Instruments and the UIA Instrument. If you've tried that, you know that tool was terrible.

Now, UI testing is a fine approach to testing, but don't be fooled by the awesome demos – UI testing is the most expensive way to author tests. They are the hardest to maintain, the hardest to debug, and the slowest to run. That does not mean you should not write them; they just shouldn't be your primary way of testing your app.

Let's look at a simple test from the Lister demo app.

func testExample() {
    let app = XCUIApplication()
    app.tables.staticTexts["Groceries"].tap()

    let addItemTextField = app.tables.textFields["Add Item"]
    addItemTextField.tap()
    addItemTextField.typeText("Burgers")
    app.typeText("\r")

    XCTAssert(app.tables.cells.textFields["Burgers"].exists)
}

The test simply taps on the "Groceries" list, adds a "Burgers" item, and verifies that the item is indeed in the list.

Notice a problem?

Now, this is where we can get into a bit of a philosophical debate about testing and verification. The question is, do we want the UI tests to only verify that the UI is correct? Or, do we want our UI tests to validate that both the UI and the model is correct?

For me, I've seen far too many bugs where the UI updated but the model didn't, so I'm not satisfied with only validating the UI in our tests.

So, what do we do? The primary problem is that these UI tests are running outside of the process, and in the case of tests on the device, they aren't even running on the same machine. Don't worry, we have options!

The basic requirement is to be able to send a message of some sort to the application under test (AUT). Our options:

  1. Make use of the Accessibility APIs
  2. Make use of the network

My philosophy for building test infrastructure is to build it the cheapest, simplest way possible for your needs at the time. When you have expensive test infrastructure, it can be a significant cost to update as your needs change later, and they always do.

So, instead of building up a mini-server that runs in your app, I would just fake it and create a UIView/NSView with a simple text field that's shown in response to some gesture. The purpose of this view is to provide a command layer into your product, accessible via your test.

A command might even be as simple as a string: "get model items". When ENTER is pressed, you'd have a handler for the event that would handle each command and replace the contents of the text field with the result, such as: "Burgers, Apples, Oranges, Bananas, Milk, Bread".

Now, in your test, you can simply get the value of the text field and XCTAssert that the value matches the string.

You might be thinking that this is a bit of a hack, and well, you're right. However, it ends up being extremely cheap to maintain and update because it is so easy to get up and running. Adding commands is literally as simple as registering the command name with a block on the view controller. And, if you really want to, you can compile all of this stuff out of App Store builds.
