Interviewing is hard work. Somehow, in a single hour (or some other arbitrary span of time), you are supposed to render a yea-or-nay verdict on whether to hire someone. Sure, sometimes a candidate gets N chances, but if one of those goes wrong, then what?

Unfortunately, there really are not a lot of good ways to measure a candidate's ability. Sure, a GitHub history might be OK. But honestly, I'm not going to spend hours looking through that for each candidate. Also, sometimes the code we push up to GitHub is terrible, especially as we are iterating toward a solution. So how do you separate the good from the bad?

Work experience… well, that's nice, but a resume only spins the story the way the candidate wants it spun. I've had candidates who clearly embellished their involvement and contributions to a project. So it's not really enough to say, "Wow, cool, you worked at Google for three years doing X! You are clearly a hire!". I've also had candidates who undersell the work they've done (I typically fall into this category too).

So basically we are back to a set of interview questions where we bombard a candidate with hopefully interesting and revealing questions. Some questions, though, are just not that great.

Take this classic problem: "You have 8 balls, identical in every way, except that one of them is heavier than the others. You are given an old-school balance scale; your job is to find the heavier ball."

One thing I really dislike about this question is that the interviewer is typically biased toward finding the answer in the fewest number of weighs, which to me makes it a bit of a trick question (2 weighs vs. 3). Also, iterative answers are usually looked down upon as incorrect, even though they are actually the better approach when you generalize the problem and move it to code (and arguably in the real-world scenario as well, depending on the speed of your balance scale).

Here are the actual considerations of the problem:

- The cost of counting the initial set of balls
- The cost of partitioning the balls into various groups
- The cost of determining the weight of each partition of balls
- The act of "weighing" the two partitions of balls that can go on the scale

These basic steps need to be repeated until the answer is found.
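Generalized to N weighted items in code, the iterative answer is just a linear scan. Here's a minimal sketch (the function name and list-of-weights representation are my own, assuming exactly one item is heavier than the rest):

```python
def find_heavier_iterative(weights):
    """Linear scan: compare each item against the first.

    Assumes exactly one item is heavier and the rest are identical.
    Touches each item at most once, so the cost is O(n).
    """
    baseline = weights[0]
    for i, w in enumerate(weights):
        if w > baseline:
            return i
    # Nothing beat the first item, so the first item is the heavy one.
    return 0
```

No partitioning, no scale: one pass, one comparison per ball.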

Now, when dealing with physical objects in the real world, the cost breakdown is: `O(n)` (or `O(1)` if we are "told"), `O(n)`, `O(1)`, and `O(1)`. We'll assume that you put the balls into a tray of some sort, so moving to and from the scale is some small constant time. There's also the time it takes the balance to settle, but that should also be a small constant (after all, the weight difference needs to be large enough that a random distribution of balls in a tray wouldn't cause a false negative).

If we are asked to implement this solution in code, the cost breakdown is this: `O(n)` (need to count them at least once to know), `O(1)` (in the best case, just index offset updates), `O(n)`, and `O(1)`.
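That third cost is the one people miss: in code, a "weigh" means summing a partition, and the sum touches every item in it. Here's a sketch of the divide-in-thirds approach (names and the `sum()`-as-scale stand-in are my own, again assuming exactly one heavier item):

```python
def find_heavier_ternary(weights):
    """Divide-and-weigh: split the candidates into thirds, weigh two
    of the groups against each other, and keep the heavier group (or
    the group that sat out, if the pans balance).

    The sum() calls are the "weighs" -- and each one reads every item
    in its pan, which is where the hidden O(n) lives.
    """
    lo, hi = 0, len(weights)
    while hi - lo > 1:
        third = max((hi - lo) // 3, 1)
        left = sum(weights[lo:lo + third])                # left pan
        right = sum(weights[lo + third:lo + 2 * third])   # right pan
        if left > right:
            hi = lo + third                       # heavy ball in left pan
        elif right > left:
            lo, hi = lo + third, lo + 2 * third   # heavy ball in right pan
        else:
            lo = lo + 2 * third                   # heavy ball sat out
    return lo
```

Fewer "weighs" than the linear scan, but each weigh does linear work over its pans.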

Notice anything in particular? You *always* have to touch every single ball to find the answer. The only question is: do any of these operations have a constant factor that dwarfs the apparent `O(1)` cost? I'd argue no. Maybe you could make that case in the real world, but most definitely not when asked to generalize this solution to N weighted items and write the algorithm to do it.
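A back-of-the-envelope check makes the point: each round of divide-and-weigh sums two thirds of the remaining items and recurses on one third, so the total reads form a geometric series, 2n/3 + 2n/9 + … ≈ n. A small counter sketch (my own helper, not from the original problem):

```python
def reads_divide_and_weigh(n):
    """Count element reads for the divide-in-thirds strategy.

    Each round sums two thirds of the remaining items (the two pans),
    then keeps one third. The series 2n/3 + 2n/9 + ... converges to
    about n, so the total work is still O(n) -- same as a linear scan.
    """
    reads = 0
    while n > 1:
        third = max(n // 3, 1)
        reads += 2 * third              # summing both pans
        n = max(third, n - 2 * third)   # recurse on the surviving group
    return reads
```

Fewer weighs, same order of total work: exactly the point above.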

If I were to ask this question, I would ask it exactly as I framed it above. Whatever path the candidate chose, I would expect them to be able to give reasons for choosing it. But if you gave me the binary-search answer and simply told me it is the best because it has the fewest weighs, I would look at you and say, "I believe I can solve it faster with more weighs".