On tech interviews

Feb 23, 2017   #Interview  #Algorithms  #Einstein  #Puzzle solving  #Peter Wason 

Recently, many techies have spoken out against the Google interview process. Broadly, they feel their real, demonstrated abilities are not being valued. The most famous of these cases is Max Howell - the developer of Homebrew - being rejected in the interview. Following Google, Amazon and the like, much smaller companies have also begun to subject interview candidates to such “problem solving exercises” - either on a whiteboard or within test environments such as HackerRank, where you can be rewarded for coming up with the wrong answer quickly instead of the right answer slowly. These same candidates would speak up against these companies as well, had they interviewed there. Is there a real problem with this interviewing technique, or are these candidates crying sour grapes?

Note: This post was written a couple of months ago and shared with a few. I’m publishing it late on request, and with some small edits and additions.

-Srikumar (29 April 2017)

Affordance to reject

Google, Amazon and Microsoft can afford to not hire awesome engineers. The number of candidates who would like to work at these companies tends to be much bigger than the number they actually need, so these companies need strong filtering at the input. What they need to optimize for is whether the folks who get through their process turn out to be good hires. They do not need to optimize against the process losing out on great candidates. I don’t know for sure, but I suspect that these companies will do ok even if their acceptance rate is less than 1 percent. Folks who really want to work at these companies and are actually any good can quite easily play the interview game and pick up the necessary algo+DS chops to get through. So this works in their favour. The same would work anywhere your filtering ratio can be harsh while still meeting your hiring needs.

Smaller companies need to sell the idea of working for them to good engineers. Startups, for example, generally position themselves as attractive places to work. The downside is that there are too many of them for good engineers to consider. If a startup rejects someone like Max Howell, given that they chose to interview him, it is quite likely they lost someone who could’ve been a great ally. Here is someone with the tenacity and perseverance to simplify the porting, setup and management of what is usually Unix software in the Mac OS X world, to build a community around it, and to present it well. These traits go beyond the fence of “programming skills”.

Problem formulation is more valuable

Problem finding and formulation is much harder and more valuable than problem solving. In a lot of real, non-Google-scale engineering work, the bigger need is to identify problems worth solving and to chisel them into a form amenable to computing. This often warrants immersion in the domain at hand. You need to be able to see a tree structure where one may not present itself to you directly. Or what presents itself may be a graph, but it is mostly a tree, and treating it as one would get 95 percent of your business need met 10x faster. Or you find some piece of code slow but have no clue what its actual computational complexity is, because it isn’t neatly laid out on a page - you only know that it is really slow for your application and you need to do something about it. The bottom line is that in regular engineering work, nobody writes out problems for you as neatly as you find in HackerRank tests or interview puzzles, and many other concerns dominate. For example, it is often unclear whether you really have the information you need before you can attempt a solution. To top it all, there are many ways to formulate a problem. “Many” not as in “three or four”, but as in “we don’t even know how many”.

How much harder is problem finding and formulation than problem solving? Einstein spent the entire decade between 1905 and 1915 formulating general relativity. The core inspiring “equivalence principle” had already been hinted at by Ernst Mach. The mathematics of curved manifolds had already been invented by Riemann. Maxwell’s equations already said that the speed of light is a constant independent of the reference frame. No new experimental results were needed during this period for Einstein’s work. So what did Einstein do? He found and formulated the problem. Today, the product of that decade is taught in a semester or two of a good masters-level physics course. If we include a few more physicists whose work is also taught, such as John Wheeler, Kip Thorne and Karl Schwarzschild, that is about “three top-scientist decades” of work taught in under “one graduate-student year”. So that’s a factor of 30 there. Having a nose for an unsolved problem waiting to be found, and persevering through various false starts and blind alleys, is at least 30 times harder than the ability to solve problems of this kind. The required intelligence level is likely possessed by several thousand physicists today, but the required perseverance and nose are what made Einstein the great physicist he was.

Having a nose for an unsolved problem waiting to be found and persevering through various false starts and blind alleys is worth at least 30 times the ability to solve problems.

Artificially constrained environments are pointless

So once you find and formulate a problem, isn’t solving it a skill worthy of scrutiny? Certainly it is, but scrutinize it in proportion to its importance to you. There is also no need to do so in an artificially constrained environment like an interview setup, or to ignore evidence from other sources. The time bounds set for such exercises, the kinds of problems posed, being deprived of the resources you’d normally use, and not getting to work with colleagues to sort things out are all non-real-world constraints.

For over a decade of our youth, we’re systematically conditioned to think that collaborating with our mates on something is “cheating”. This carries over into the interview culture.

Rejecting a market-proven candidate in such an artificial setup invites the question of whether the setup’s creators were simply not creative enough to do better. In many projects at Pramati/Imaginea, we’ve solved real, interesting and tough problems for our customers. Granted, they aren’t Google-scale problems, but they are sizeable and very real for our customers. Did we box these engineers into artificial constraints in our interviews? No. So do we have a generous acceptance rate? Nope. Do we give candidates problems and ask them to code? Depends. We know the kind of engineers we want, so we’re happy to sift through all the signals you can throw at us. GitHub profiles? Great. Significant open source contributions? Awesome. Side projects with demos? Super. But we do need to talk to candidates. We do need to find out if we’ll be willing to work with them as colleagues. We do want to have productive conversations bouncing ideas off each other. We do want to know what our hires will do with our code.

Puzzle solving ability does not generalize

An elephant in the room is that practicing puzzles and games makes you better at … puzzles and games. If you’re a master Kakuro solver, you don’t get magically better at chess. If you’re a chess master, you still can’t be trusted with command of an army. Real-world contexts call on our abilities in completely different ways.

A time-boxed puzzle coding test for an engineering role is like asking an army general to take a “chess test” in order to qualify.

Peter Wason’s four-card puzzle illustrates something very interesting here. Go there, read the problem, think about it, and come back here before reading further.

…. spoiler alert! proceed with caution ….

So you have four cards on a table, each with a letter on one side and a number on the other. The faces you can see read “A”, “D”, “4” and “7”. The question is: which cards are worth turning over if you want to know whether the following statement is false - “If a card has a vowel on one side, it has an even number on the other side”?

This test was used in a study by the psychologists Peter Wason and Philip Johnson-Laird to demonstrate the operation of confirmation bias. People tended to pick “A” and “4” to check, or just “A”, whereas the right checks are “A” and “7”.
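If you want to check the logic mechanically, here is a minimal brute-force sketch in Python (the code and names are mine, not part of the original study): for each visible face, it asks whether any possible hidden face could falsify the rule, since only such cards are worth turning over.

```python
# Brute-force check of the Wason selection task: a card is worth turning
# over only if some possible hidden face could falsify the rule
# "if one side is a vowel, the other side is an even number".

VOWELS = set("AEIOU")

def falsifies(letter, number):
    # The rule is broken exactly when a vowel pairs with an odd number.
    return letter in VOWELS and number % 2 != 0

def worth_turning(visible):
    if visible.isdigit():
        # Hidden face is some letter; try every possibility.
        return any(falsifies(ch, int(visible)) for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    # Hidden face is some number; a few odd and even candidates suffice.
    return any(falsifies(visible, n) for n in range(10))

print([card for card in ["A", "D", "4", "7"] if worth_turning(card)])
# -> ['A', '7']
```

“D” never matters to the rule, and “4” can only ever confirm it - which is exactly why reaching for “4” is the confirmation bias at work.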

Now here is a real-world problem -

Problem: You’re a cop and you enter a bar. You see people sitting at a table, each with a drink in hand. One of them is an elderly person. One of them is having a margarita but you can’t tell her age. One is a boy drinking something you can’t make out. The fourth one is drinking coffee. Who would you check to ensure that alcohol drinkers are above the age limit? 1

… Take a moment to solve it …

That was easy, wasn’t it? How did the second problem compare with the four-card task? Did you struggle with it as much? Did you show “confirmation bias” on the second? Likely not. How a problem is stated makes a great deal of difference to how we understand it. If we suck at a problem expressed in an abstract form, it doesn’t mean we’ll suck at it in the real world.

Algorithms and data structures are but folk wisdom

To top all of this, algorithms and data structures are, at best, the folk wisdom of computer “science”. This may sound like blasphemy to many, but it is a view I’ve come to over a long period. For example, you - dear reader - will likely know that the average-case computational complexity of the quicksort algorithm is O(N log(N)). Can you quickly tell me the complexity of the algorithm if the computer’s memory takes linear time to access the Nth item in a sequence? Your time starts now.

… tick tock …

Our current knowledge in the field depends on the machine architecture we assume. Once that changes, everything may change. Using Shor’s algorithm, a quantum computer can factor large numbers faster than any known classical method. A quantum “database lookup” can be done in O(√N) using Grover’s algorithm, whereas it can’t be done in better than O(N) on a classical computer.
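For what it’s worth, here is one possible answer, sketched in Python (my own illustration, not from the post): if quicksort is reformulated to use only sequential sweeps, the way you would on a singly linked list, the expected cost can remain O(N log N) even when jumping to the Nth item costs linear time - but that conclusion holds for this formulation on that machine model, not for quicksort in the abstract.

```python
# A sketch (mine): quicksort using only sequential sweeps, as on a
# singly linked list where stepping to the next item is cheap but
# jumping straight to the Nth item costs O(N).

def quicksort_sequential(items):
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]          # take the head; no random access
    smaller = [x for x in rest if x < pivot]   # one sequential sweep
    larger = [x for x in rest if x >= pivot]   # a second sequential sweep
    # Expected recursion depth is O(log N) and each level sweeps O(N)
    # items, so the expected total remains O(N log N) with no random
    # access at all.
    return quicksort_sequential(smaller) + [pivot] + quicksort_sequential(larger)

print(quicksort_sequential([5, 2, 9, 1, 7, 3]))  # -> [1, 2, 3, 5, 7, 9]
```

An array version that indexed freely into such a memory would fare far worse - the “known” complexity of an algorithm quietly assumes a machine.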

As long as we can recognize when we’re confounded by some situation and have the humility to reach out to the wisdom of the folk recorded in many an awesome book, we won’t stay stuck in that confounded state. As long as we can ask questions about our situation, seek the information we don’t have and break things down, we will have ample opportunity to capitalize on the folk wisdom. And as long as we’re engaged with the real world, we stand every chance of contributing to that folk wisdom ourselves.

What then?

All this is not to say that puzzle solving and algorithmic coding challenges are useless. The point is that smaller companies often use them simply because bigwigs like Microsoft, Google and Amazon do, and that is the wrong reason to use them. Furthermore, they’re used to the exclusion of other important traits that could have a bigger impact within these firms.

It is important to have a clear idea of where this balance lies in your own context so you can make the right call about your hires. Use what the market is already telling you about them.

Some traits that we shouldn’t lose sight of -

Curiosity: Has the candidate lifted the hood off any of the tools or libraries used in their projects and peered inside?

Relentlessness: Does the candidate give up too soon? Some very smart folks tend to abandon a problem if its solution is not “obvious” to them.

Resourcefulness: What does the candidate do when at their wit’s end? Are they confident enough to talk to others about what they’re having difficulty with? Do they seek help in time, or do they keep trying on their own until it is too late?

Courage: Is the candidate comfortable saying “I don’t know” and admitting errors when wrong? Do they make themselves vulnerable in the process of accomplishing their tasks? Can they make bold moves based on imperfect information?

Respectfulness: Does a candidate attack ideas or people? Do they talk ill about others behind their backs?

Rather than elaborate these and others here, I defer to the excellently articulated Twelve Virtues of Rationality.

Hiring well is tough already. There is no need to make it harder in a manner that begets no extra rewards.


  1. I thank my teacher Dr. Kevin McGee, who presented this version in his class at CNM, NUS.