Performance Xcode7 Beta 2

Update June 27th @ 3:21 PM

I was able to squeeze out some more performance by hoisting some constants out of the loops: the int4(0, 1, 2, 3) increment vector and its sum with xoffset are now computed once, up front, instead of on every pass of the inner loop.

Here's the updated code:

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int) {
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0

        let yoffset = int4(Int32(offsetY))
        let xoffset = int4(Int32(offsetX))

        let inc = int4(0, 1, 2, 3)
        let blueaddr = inc + xoffset

        for var y: Int32 = 0, height = buffer.height; y < Int32(height); ++y {
            let green = int4(y) + yoffset

            for var x: Int32 = 0, width = buffer.width; x < Int32(width); x += 4 {
                let blue = int4(x) + blueaddr

                // If we had 8-bit operations above, we should be able to write this as a single blob.
                p[offset++] = 0xFF << 24 | UInt32(blue.x & 0xFF) << 16 | UInt32(green.x & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.y & 0xFF) << 16 | UInt32(green.y & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.z & 0xFF) << 16 | UInt32(green.z & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.w & 0xFF) << 16 | UInt32(green.w & 0xFF) << 8
            }
        }
    }
}

And the new timings with this update:

Language: Swift, Optimization: -O, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 15.75163 │ 15.00523 │ 17.31266 │ 0.8139 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: Swift, Optimization: -Ounchecked, Samples = 10, Iterations = 30 ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 3.789642 │ 3.272549 │ 5.110642 │ 0.6232 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

The -O case was unaffected; however, the -Ounchecked build is now about twice as fast as before and practically the same as the C version.

Update June 27th @ 1:56 AM

I noticed a bug I had when adding the x-values: they should have been incremented by (0, 1, 2, 3). I updated the code samples and timings, though the analysis comes out roughly the same. I did see that the SIMD code doesn't have much benefit under the most aggressive compiler settings. That's not too unexpected, as this code is fairly trivial.

Original Entry

Well, it's that time again: time to look at the performance of Swift. I've been using my swift-perf repo, which contains various implementations of a RenderGradient function.

So, how does Swift 2.0 stack up in Xcode 7 Beta 2? Good! We've seen some improvements in debug builds, which is great. There's still a long way to go, but it's getting there. As for release builds, there isn't much difference.

However, there is a new thing that got added in Swift 2.0 – basic SIMD support.

I decided to update my RenderGradient with two different implementations: one that uses the pixel data through the array interface and another that interacts with the array through a mutable pointer. The latter is what is required for the best speed.

Here's the implementation:

NOTE: I'm pretty new to writing SIMD code, so if there are any things I should fix, please let me know!

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int) {
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0

        let yoffset = int4(Int32(offsetY))
        let xoffset = int4(Int32(offsetX))

        // TODO(owensd): Move to the 8-bit SIMD instructions when they are available.

        // NOTE(owensd): There is a performance loss using the friendly versions.

        //for y in 0..<buffer.height {
        for var y = 0, height = buffer.height; y < height; ++y {
            let green = int4(Int32(y)) + yoffset

            //for x in stride(from: 0, through: buffer.width, by: 4) {
            for var x: Int32 = 0, width = buffer.width; x < Int32(width); x += 4 {
                let inc = int4(0, 1, 2, 3)
                let blue = int4(x) + inc + xoffset

                p[offset++] = 0xFF << 24 | UInt32(blue.x & 0xFF) << 16 | UInt32(green.x & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.y & 0xFF) << 16 | UInt32(green.y & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.z & 0xFF) << 16 | UInt32(green.z & 0xFF) << 8
                p[offset++] = 0xFF << 24 | UInt32(blue.w & 0xFF) << 16 | UInt32(green.w & 0xFF) << 8
            }
        }
    }
}
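
If SIMD is new to you, here's the four-lane idea in isolation (a tiny sketch using the simd module, separate from the benchmark code):

import simd

let lanes = int4(0, 1, 2, 3)      // four Int32 values packed into a single SIMD value
let shifted = lanes + int4(10)    // one add operates on all four lanes at once
// shifted is now (10, 11, 12, 13)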

The basic idea is to fill the registers on the CPU with data and perform the operation on that set instead of doing it one value at a time. For comparison, the non-SIMD version is below.

func RenderGradient(inout buffer: RenderBuffer, offsetX: Int, offsetY: Int)
{
    buffer.pixels.withUnsafeMutableBufferPointer { (inout p: UnsafeMutableBufferPointer<Pixel>) -> () in
        var offset = 0
        for (var y = 0, height = buffer.height; y < height; ++y) {
            for (var x = 0, width = buffer.width; x < width; ++x) {
                let pixel = RenderBuffer.rgba(
                    0,
                    UInt8((y + offsetY) & 0xFF),
                    UInt8((x + offsetX) & 0xFF),
                    0xFF)
                p[offset] = pixel
                ++offset;
            }
        }
    }
}

The awesome thing is that the SIMD version is a bit faster (update June 27th @ 9:20 AM: it was 2x before I fixed a bug, dang!). When 8-bit operations are allowed, it should get even faster, since we can reduce the amount of work that needs to be done even further and assign the result directly into memory.

Here is the performance break-down for these two methods in -O and -Ounchecked builds:

Swift Performance

Language: Swift, Optimization: -O, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer)                        │ 18.07803 │ 17.19691 │ 21.00281 │ 1.4847 │
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 15.88613 │ 15.11753 │ 20.16230 │ 1.5437 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: Swift, Optimization: -Ounchecked, Samples = 10, Iterations = 30 ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient ([UInt32].withUnsafeMutablePointer)                        │ 6.623639 │  6.22851 │ 8.339521 │ 0.6325 │
RenderGradient ([UInt32].withUnsafeMutablePointer (SIMD))                 │ 6.629701 │ 5.930751 │ 8.751819 │ 1.0005 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Now, here's where things start to get really interesting. I also have a C implementation of the same gradient to compare against.

C Performance

Language: C, Optimization: -Os, Samples = 10, Iterations = 30             ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient (Pointer Math)                                             │    9.364 │    8.723 │   11.338 │  0.994 │
RenderGradient (SIMD)                                                     │    7.751 │    7.101 │    9.642 │  0.960 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

Language: C, Optimization: -Ofast, Samples = 10, Iterations = 30          ┃ Avg (ms) ┃ Min (ms) ┃ Max (ms) ┃ StdDev ┃
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━┩
RenderGradient (Pointer Math)                                             │    3.302 │    2.865 │    5.061 │  0.693 │
RenderGradient (SIMD)                                                     │    7.607 │    6.991 │    9.923 │  0.887 │
──────────────────────────────────────────────────────────────────────────┴──────────┴──────────┴──────────┴────────┘

When Swift is compiled without the safety checks, it sits right between the "Pointer Math" and the "SIMD" versions. The safety checks are causing about a 2-3x slow-down over the -Ounchecked version, though. There might still be some room for improvement in how I'm structuring things. Also, the C SIMD version barely improves under -Ofast while the pointer-math version gets much faster, which is a bit surprising.

I find this really exciting! We're really close to being able to write high-level, low-syntactical-noise code (compared to C) that still performs in the same ballpark.

Again, the code for this can be found here: swift-perf. If you know of any optimizations I should make in the C or Swift versions, please let me know!


The new print() is Flipping Busted (nope, it’s me!)

UPDATE Friday, June 26th @ 10:49 PM

Yep… so I've tracked down the issue: http://www.openradar.me/21577729. The problem is that single-parameter generic functions implicitly turn multiple arguments into a tuple. That's the root cause of the initial bug report.

Good times.

Here's the code if you want to try it out:

func f<T>(value: T) {
    print("value: \(value)")
}

func f<T>(value: T, hi: Int) {
    print("value: \(value) \(hi)")
}

f("hi")
f("hi", append: false)
f("hi", append: false, omg: "what?")

f("hi", hi: 12)  // calls f(value: T, hi: Int)
f("hi", Hi: 12)  // calls f(value: T)

UPDATE Friday, June 26th @ 10:02 PM

OK… so I totally screwed up… It's my fault, Swift is just fine (kind-of). (I still want print() and println() back though).

SO… it turns out I have a typo in my code below in my bug report rant…

for  _ in 0 ..< titleWidth { print("━", appendNewLine: false) }
print("‚ïá", appendNewLine: false)

Should have been this…

for  _ in 0 ..< titleWidth { print("━", appendNewline: false) }
print("‚ïá", appendNewline: false)

I'll let you spot the difference. I am unsure as to why I did not get a compiler error. The only reason I noticed the issue is that I tried to work around the issue reported below by using another overload of print(). That overload did correctly flag my labeling error. So, there's still a bug with print(), but it's nowhere near as bad as I originally thought below.

Also, in the playground, print("hi", whateverIWant: false) works…

Anyhow, back to your regularly scheduled broadcast…

SHAME POST FOR ALL TO LEARN FROM…

I'm just going to copy my bug report here as I think this is worthy of being shared more broadly…

Bug Report: First of all, I'm sure this bug has been logged before, but I don't care because this is so flipping irritating right now.

A "systems language" that has no proper ability to write console apps and output text to the screen succinctly is just broken. The move to replace print() and println() with a single function print() with an overload – terrible decision.

Ok… breathe… I can deal, it's just an overloaded function now, no problem…

Imagine your surprise when you try and use it to print out some table headers:

for  _ in 0 ..< titleWidth { print("━", appendNewLine: false) }
print("‚ïá", appendNewLine: false)

And you get output like this in the console:

("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("━", false)
("‚ïá", false)

This is ridiculous and completely broken. Please change print() back to the proper print() and println() versions and fix the implementation to actually output correctly to the screen.


‘Design Question: Extension, Conforming Type, or Generic Type?’

Here's a design question for you Swifters out there.

I'm playing around with building a tokenizer that works based on a set of rules that you provide to it. From what I see, I have three basic design choices.

//
// Option 1: A tokenizer that manages its own cursor into the `ContentType`.
//
public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType)

    mutating func next(index: ContentType.Index?) throws -> Token<ContentType>?
}

//
// Option 2: A tokenizer that passes the next index back to the user for the next call.
//
// NOTE: A tuple breaks the compiler so this type is needed: rdar://21559587.
public struct TokenizerResult<ContentType where ContentType : CollectionType> {
    public let token: Token<ContentType>
    public let nextIndex: ContentType.Index

    public init(token: Token<ContentType>, nextIndex: ContentType.Index) {
        self.token = token
        self.nextIndex = nextIndex
    }
}

public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType)

    // HACK(owensd): This version is necessary because default parameters crash the compiler in Swift 2, beta 2.
    func next() throws -> TokenizerResult<ContentType>?
    func next(index: ContentType.Index?) throws -> TokenizerResult<ContentType>?
}

//
// Option 3: A mixture of option #1 and #2 where the tokenizer manages its own cursor location but
// does so by returning a new instance of the tokenizer value.
//
public protocol Tokenizer {
    typealias ContentType : CollectionType

    var rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?] { get }
    var content: ContentType { get }

    init(content: ContentType, currentIndex: ContentType.Index?)
    func next() throws -> Self?
}

Option 1

The main problem I have with option #1 is that it puts me in the business of managing bookkeeping details. This has the terrible side effect of requiring me to expose all of those bookkeeping details in the protocol so that I can provide a default implementation of how this works when ContentType is a String. That's bad.

The other option is to create a struct that conforms to the protocol and provides an implementation for various ContentTypes. However, I want to reserve conforming types for particular structures of data, like a CSVTokenizer or a JSONTokenizer.

However, this option has the benefit of being extremely easy to use: the caller doesn't need to maintain the nextIndex as in option 2, or new instances of the tokenizer as in option 3. Simply call next() and you get the expected behavior.
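
For example, consuming an option #1 tokenizer would look something like this (just a sketch; StringTokenizer is a hypothetical conforming type):

var tokenizer = StringTokenizer(content: "a,b,c")
do {
    // Passing nil means "continue from the tokenizer's own cursor";
    // the caller never has to track an index itself.
    while let token = try tokenizer.next(nil) {
        // do something with token...
    }
}
catch {
    // a rule threw; handle or report the error here
}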

Option 2

This gets rid of all of the negatives of option #1, but it does add the burden of calling next() with the correct index. Of course, it also allows some additional flexibility. My big concern here is the additional code each caller will need to write every time next() is called: they have to unpack the optional result, then the nextIndex value, and call next() with it.

Maybe this is OK. The trade-offs seem better at least. And, I can provide a default implementation for any Tokenizer that makes use of a String and String.Index.

The thing I like most about this approach is that each type of Tokenizer simply provides the rules as an overloaded read-only property.

Option 3

This kinda merges option #1 and option #2 together; it's also my least favorite. I don't like all of the potential copying that needs to be done, and it's not clear to me that it will be optimized away, especially under all use cases. However, I thought I should at least mention it…

Option 4

Ok, there really is one more option: create a struct Tokenizer that provides an init() allowing you to pass in the set of rules to use when matching tokens. I don't really like this approach much either, because it turns the rules for common formats like CSV and JSON into free-floating arrays of rules, as sketched below.
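
Roughly, option #4 is this (a sketch; the empty next() body and the csvRules usage are mine, and TokenizerResult is the same type from the option #2 listing):

public struct Tokenizer<ContentType : CollectionType> {
    public let content: ContentType
    public let rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?]

    public init(content: ContentType,
        rules: [(content: ContentType, offset: ContentType.Index) -> ContentType.Index?])
    {
        self.content = content
        self.rules = rules
    }

    public func next(index: ContentType.Index?) throws -> TokenizerResult<ContentType>? {
        // the rule-matching loop would live here
        return nil
    }
}

// The part that bugs me: the rules become free-floating data.
// let csv = Tokenizer(content: someContent, rules: csvRules)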

That feels wrong to me. A concrete implementation of a CSV Tokenizer seems like the better approach.

Wrapping Up

I'm leaning towards option #2 (in fact, that is what I have implemented currently). It seems to be working all right, though the call site is a little verbose.

guard let result = try tokenizer.next() else { /* bail */ }

// do stuff...

// Do it again!
guard let result = try tokenizer.next(index: result.nextIndex) else { /* bail */ }

This is probably OK, and it's likely to be done in a loop anyway (see the sketch below). It just feels like a lot of syntax and bookkeeping for the caller to deal with.
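
Concretely, the consuming loop ends up looking something like this (a sketch against the option #2 API, using the same labeled call style as above):

var result = try tokenizer.next()
while let current = result {
    // do stuff with current.token...
    result = try tokenizer.next(index: current.nextIndex)
}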

Anyhow, thoughts? Better patterns to consider?


‘RE: Swift Protocols Question (inessential.com)’

Over the weekend, Brent posted a question about Swift protocols. Hopefully this is just a point-in-time problem with Swift's type system; unfortunately, there isn't a great workaround for it right now.

The crux of the issue is this: protocols with a Self requirement force your code to essentially be made up of homogeneous types in order to do anything useful. That kinda stinks, and in Brent's case, it's not what he wants.

If you ever do this:

protocol Value : Equatable {}

Then boom, you're stuck. The Equatable protocol has a Self requirement, which trickles down to every protocol and type that conforms to it.
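
To make the failure concrete, this is the wall you hit as soon as Value picks up the Self requirement (the diagnostic wording is from memory, so treat it as approximate):

protocol Value : Equatable {}

// error: protocol 'Value' can only be used as a generic constraint
// because it has Self or associated type requirements
func valueBySmashingOtherValue(value: Value) -> Value {
    return value
}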

Now, I don't know Brent's exact situation, but there is a way out of this if your situation allows for it: don't conform your protocol to Equatable; instead, conform your types to it.

protocol Value {}
struct MyType : Value, Equatable {}

When you do this, you can now write the signature that Brent wanted.

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

The problem here is that if you want your types to be equatable to one another, you'll need to provide a heterogeneous equality function:

func ==(lhs: Value, rhs: Value) -> Bool {
    // lhs == rhs?
}

This means that your base Value protocol needs to define all of the members for equality, which again, might be OK for your scenario.

A full sample looks like this:

protocol Value {
    var identifier: String { get }
}

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

struct Foo : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Bar(identifier: "smashed by Foo")
    }
}
func ==(lhs: Foo, rhs: Foo) -> Bool {
    return lhs.identifier == rhs.identifier
}

struct Bar : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Foo(identifier: "smashed by Bar")
    }
}
func ==(lhs: Bar, rhs: Bar) -> Bool {
    return lhs.identifier == rhs.identifier
}

func ==(lhs: Value, rhs: Value) -> Bool {
    return lhs.identifier == rhs.identifier
}

let f = Foo(identifier: "foo")
let b = Bar(identifier: "bar")

let fsmash = f.valueBySmashingOtherValue(b)
let bsmash = b.valueBySmashingOtherValue(f)

if f == b { print("f == b") }
else { print("f != b") }

if fsmash == bsmash { print("fsmash == bsmash") }
else { print("fsmash != bsmash") }

UPDATE June 22, 2015

I probably should have mentioned the isEqualTo guidance from the protocols talk at WWDC (it's also in the Crustacean sample code). We can clean up the sample a bit so that our conforming types don't have to implement an == operator or an isEqualTo function themselves:

protocol Value {
    var identifier: String { get }

    func isEqualTo(other: Value) -> Bool
}

extension Value {
    func isEqualTo(other: Value) -> Bool {
        return self.identifier == other.identifier
    }
}

protocol Smashable {
    func valueBySmashingOtherValue(value: Value) -> Value
}

struct Foo : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Bar(identifier: "smashed by Foo")
    }
}

struct Bar : Value, Smashable, Equatable {
    let identifier: String

    func valueBySmashingOtherValue(value: Value) -> Value {
        return Foo(identifier: "smashed by Bar")
    }
}

func == <T : Value>(lhs: T, rhs: T) -> Bool {
    return lhs.isEqualTo(rhs)
}

func ==(lhs: Value, rhs: Value) -> Bool {
    return lhs.isEqualTo(rhs)
}

let f = Foo(identifier: "foo")
let b = Bar(identifier: "bar")

let fsmash = f.valueBySmashingOtherValue(b)
let bsmash = b.valueBySmashingOtherValue(f)

if f == b { print("f == b") }
else { print("f != b") }

if fsmash == bsmash { print("fsmash == bsmash") }
else { print("fsmash != bsmash") }

Catching Errors for Testing (Or Why Enums Suck… Sometimes)

I have a fairly reasonable task: I want to write some test code to ensure that certain paths of my code throw errors, and not just any errors, but errors of a certain "type".

OK… this sounds like it should be really trivial to do.

Here's the setup:

enum MyErrors : ErrorType {
    case Basic
    case MoreInfo(title: String, description: String)
}

func f(value: Int) throws {
    switch value {
    case 0:
        throw MyErrors.Basic

    case 1:
        throw MyErrors.MoreInfo(title: "A title?", description: "1s are bad, k?")

    default:
        break
    }
}

And for the tests:

func testFThrowsOn0() {
    do {
        try f(0)
        XCTFail("This was supposed to throw")
    }
    catch MyErrors.Basic {}
    catch {
        XCTFail("Incorrect error thrown")
    }
}

func testFThrowsOn1() {
    do {
        try f(1)
        XCTFail("This was supposed to throw")
    }
    catch MyErrors.MoreInfo {}
    catch {
        XCTFail("Incorrect error thrown")
    }
}

func testFDoesNotThrowOn2() {
    do {
        try f(2)
    }
    catch {
        XCTFail("This was not supposed to throw")
    }
}

Ok, the tests do what they are supposed to do… but that is some ugly code. What I want to write is this:

func testFThrowsOn0() {
    XCTAssertDoesThrowErrorOfType(try f(0), MyErrors.Basic)
}

func testFThrowsOn1() {
    XCTAssertDoesThrowErrorOfType(try f(1), MyErrors.MoreInfo)
}

func testFDoesNotThrowOn2() {
    XCTAssertDoesNotThrow(try f(2))
}

I have no idea how to write this code. The simple version, XCTAssertDoesThrow, is trivial: just catch any error and perform the logic. However, passing in the expected value, especially for an enum case with associated values, so that it can be properly pattern matched… I don't even know if that's possible.

The only way I know how to even come close to what I want is to cheat significantly.

enum MyErrors : ErrorType {
    case Basic
    case MoreInfo  //(title: String, description: String)
}

func == (lhs: ErrorType, rhs: ErrorType) -> Bool {
    return lhs._code == rhs._code
}

func != (lhs: ErrorType, rhs: ErrorType) -> Bool {
    return !(lhs == rhs)
}

func XCTAssertDoesThrowErrorOfType(@autoclosure fn: () throws -> (),
    message: String = "", type: ErrorType, file: String = __FILE__,
    line: UInt = __LINE__)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        if error != type { XCTFail(message, file: file, line: line) }
    }
}

So I had to:

  1. Get rid of my associated enum value, which means I'll need to wrap that information into a struct.
  2. Define the == and != operators to compare ErrorType through a hack that exposes the _code value used for bridging to ObjC's NSError (it's the ordinal value of the case statement).
  3. Define the XCTAssertDoesThrowErrorOfType function.

This allows me to achieve most of what I wanted, but it came at a cost: some really hacky code that is likely to break in future betas of Swift 2.0.

So really I'm back at square one:

  1. Try the do-try-catch boilerplate code each time.
  2. Catch all errors and report that as a test success.
  3. Write the really hacky code.

I think option #3 is a no-go. So out of convenience, and in the hope that we'll be able to solve this better at a later date, I'm likely to go with option #2. BUT this leaves a test hole… :/

Any other ideas?

Update – Added the 4th option I failed to mention…

I forgot about the fourth option: promote each of the enum case values to its own type. Though… that has some significant drawbacks as well.

struct MyBasicError : ErrorType {
    let _code: Int
    let _domain: String
}

struct MoreInfoError : ErrorType {
    let _code: Int
    let _domain: String

    let title: String
    let description: String
}

First, we need to add the _code and _domain workarounds. That sucks…

And now we want a type signature like this:

func XCTAssertDoesThrowErrorOfType<T : ErrorType>(@autoclosure fn: () throws -> (),
    message: String = "", type: T, file: String = __FILE__,
    line: UInt = __LINE__)

However, since you can't explicitly specialize a generic call in Swift, that means we'd need to pass an actual instance into the function for T, or switch to using T.self with a T.Type parameter. That's also less than ideal. So let's just do this:

func XCTAssertDoesThrowErrorOfType(@autoclosure fn: () throws -> (),
    message: String = "", type: MirrorType, file: String = __FILE__,
    line: UInt = __LINE__)
{
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
        if reflect(error).summary != type.summary {
            XCTFail(type.summary, file: file, line: line)
        }
    }
}

At least the callsite doesn't need an instance anymore:

XCTAssertDoesThrowErrorOfType(try f(0), type: reflect(MyBasicError))

The plus side is that this will work for both structs and enums, so long as the enum only has a single case that you care about comparing.

Anyhow… still stuck on how to actually do this in a "proper" way or if we'll be able to in Swift 2.0.

Conclusion

It seems that, at least for now, the reflect() solution is the best I can come up with. It works for both enums and structs, and it can be extended to consider the details of each of those if desired. The only real drawback is enums with associated values: you need to pass in an instance of the enum (you could pass in strings, but that exposes an implementation detail and removes the ability to further inspect the types).

XCTAssertDoesThrowErrorOfType(try f(0), type: reflect(EnumError.Info(title: "")))

XCTest Missing ‘throws’ Testing

It looks like XCTest is missing some basic support for testing if a function throws or not. Here are two snippets that I'm using until that support is added.

func XCTAssertDoesNotThrow(@autoclosure fn: () throws -> (), message: String = "", file: String = __FILE__, line: UInt = __LINE__) {
    do {
        try fn()
    }
    catch {
        XCTFail(message, file: file, line: line)
    }
}

func XCTAssertDoesThrow(@autoclosure fn: () throws -> (), message: String = "", file: String = __FILE__, line: UInt = __LINE__) {
    do {
        try fn()
        XCTFail(message, file: file, line: line)
    }
    catch {
    }
}

I'm not sure how to extend this to validating for a specific error though…

Using it in a test is pretty easy:

func f() throws {
}

func someTest() {
    XCTAssertDoesNotThrow(try f())
}

Do-Catch Seems Overly Cumbersome

There's been a lot of conversation on Twitter and people's blogs about the error handling model, especially with regards to throws vs. Result<T, E>. I think that's great because people are really getting engaged in the topic and hopefully the situation will get better.

The latest thing I'm running up against is the cumbersomeness of the do-catch pattern for handling errors. Not only that, I think it promotes the same bad exception handling Java had, with people simply writing a single catch (Exception e) handler. Sometimes that's OK, I guess.

The thing I like most about throws is that it cleans up the API signature for functions:

func foo() throws -> Int { ... }

vs. this:

func foo() -> Result<Int, ErrorType> { ... }

It's where we go to handle it that is really starting to get to me:

do {
    try foo()
}
catch {
}

My first complaint: I don't think it's a good idea to write giant do-catch blocks with multiple throwing functions inside them:

do {
    try foo()
    // ...
    try foo()
}
catch {
}

I just think that's bad style. It pushes the error handling further and further from the code that is actually going to error out. I also think it defeats the purpose of being intentional about handling errors.
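
The alternative under the current model is a do-catch per throwing call, which keeps the handling local but gets noisy fast; this is the shape I mean:

do {
    try foo()
}
catch {
    // handle this specific failure
}

// ... more work ...

do {
    try foo()
}
catch {
    // handle this one separately
}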

This is what will happen once you get a handful of functions that throw:

func someFunc() {
    do {
        /*
         You know this to be true... we've seen it.
        */
    }
    catch {
        // Um.. I guess I should do something with the "error"?
    }
}

Now, I do like the try and the throws annotations. I think they add clarity to the code, especially as code grows and needs to be maintained over time. But, I think it might have been cleaner to do something more like guard.

try foo() catch {
    /* handle the error */
}

The thing I really like about this is that only the error-handling code gets nested a level. This keeps the happy path of the code at the same level and makes it explicit where the bad path is.

Then, if we combine this with guard, we can get this:

guard let result = try foo() catch {
    /* handle the error */
    /* also, forced scope exit */
}

/* result is safe to use here */

Of course, if we want to bubble the errors up, the catch clause could be omitted if the enclosing function/closure also throws.

func someFunc() throws {
    try foo() // this is ok because someFunc() can throw
}

Anyhow, just some thoughts as I find myself writing a lot more nesting than I care to.

I've logged rdar://21406512 to track it.
