
Finding a Better Way to Interview Engineers

Like most startups, we want to build a team that ships software fast and at high quality. How should we interview engineers?

This is a difficult problem: how can you be confident what it will be like to work with someone, given only one day of their time to make an assessment? How do you make a decision based on a few hours of interaction that someone will be a great engineer over the course of multiple years? What does it even mean for someone to be a “great engineer”? This gets at even deeper questions about what kind of team you want to build. Our answer was that we wanted to ship high-value features at high quality and high velocity. How do you interview for that?

The early Heap team discussed what we had seen at other companies, and the answers were pretty uniform: whiteboard problems and an occasional coding exercise. We thought about the resulting teams that this process selected for, and a common pathology became clear. At previous companies, we had been surrounded by very intelligent engineers, but there was always a wide variance in terms of how much work people got done. Some people consistently shipped new features, bugfixes, or refactors, and some people didn’t.

Why is this so common, and how can we avoid it? Given the standard advice that the cost of a bad hire is severe, we wanted to be sure not to make a mistake. How can we get much more signal than a typical interview track provides?

What’s Missing Here?

In a traditional software interview, a candidate will answer four or five technical questions on a whiteboard, sometimes with a short coding exercise thrown in. This has known issues – it’s a platitude at this point to observe that writing pseudocode to traverse a binary tree is not relevant to the day-to-day of building software. But in general this is a reasonable way to assess someone’s CS fundamentals and problem-solving skills.

The problem is that the traditional interview process doesn’t test for how much someone will get done, and that’s an axis along which there is a lot of variance. If you interview this way, you are collecting a lot of signal about how smart someone is, but very little about how effective they will be at practical software engineering. It’s easy to build a low-output team this way, even if all the members can cut through any 45-minute whiteboard problem you throw at them. [1]

How We Interview

We set out to try something new, with a guiding principle in mind: make the interview resemble the job as much as possible.

In our onsite interview, candidates build a realistic feature. We walk them through the concept in the morning, and they create a usable piece of software throughout the day. We spin up an EC2 instance and give them sudo on it, along with some starter code in their language of choice. If all goes well, we’ll have something we can QA by dinnertime.

We check in a few times throughout the day, to make sure candidates are on track and to course-correct if necessary, but candidates spend the majority of the day working on their feature, like they would if they worked at Heap.

The feature has a nontrivial design component, so the interview still captures a lot of what people think is valuable about whiteboard exercises. But it does so in a way that we can be confident is relevant to the job, because it’s in the context of designing a real piece of software we would plausibly build.

Why This Works So Well

This has proven to be a much better way to interview candidates, because it resembles the day-to-day job. For example:

  • Engineers write code on a maker’s schedule, with headphones on, ideally in a state of flow. A whiteboard interview is usually an interactive conversation with another engineer. It’s easy to stay on task when you’re having a conversation. It’s a lot harder when you have three hours to yourself and need to organize your time.
  • Software engineering is full of distractions in the form of inconsequential things to fix. You could spend the whole afternoon getting your IntelliJ settings just right and producing no code.

When it comes to building real software, your productivity depends on a huge number of subtle variables. For example, how effectively do you debug? This comes down to a lot of factors: savvy use of tooling, intuition, ability to set up a tight debug loop, and so on. Anyone who’s built real software can attest to the portion of time that goes towards debugging code, but the standard engineering interview tests none of this.

How much of writing software comes down to googling around for an example use of an API, checking Stack Overflow for an error message, knowing which log to look in, or all manner of other trivial-seeming micro-skills? These add up to the difference between an item of work taking an hour and taking all afternoon.

And deeper than the practical technical skills are the practical self-management skills. Do you stay on task or get distracted? Do you get sucked into yak shaves, or are you mindful enough to step back, reassess, and route around them? Do you prioritize getting a minimal version working end-to-end or spend the whole day gold-plating the config loader? If there’s a bug in code you don’t control, do you throw up your hands or find a pragmatic workaround?

Our interview process, by design, measures the net effect of all of these hidden variables. Because the interview resembles the job as closely as possible, it tests the skills you need to do the job well.

Engineers Love This

This interview has another wonderful advantage: most candidates enjoy it! When you interview at Heap, you get to make a feature and demo it by the end of the day. A lot of us got into software because we like making things. Our interview process is about making things.

Candidates who interview at Heap also get to learn a new problem space and become familiar with a new data system. That makes the experience rewarding, even for candidates who don’t wind up with an offer.

You can see this positive experience reflected in Glassdoor data. This is a noisy signal, but a directionally accurate one. A much higher percentage of candidates describe interviewing at Heap as a positive experience, while rating the interview as more challenging than those at other companies.

Here are the corresponding numbers for two similarly sized companies with whom we regularly compete for talented engineers. I encourage you to look up these numbers for other top tech companies. (And, if you find someone who interviews better than we do, definitely let me know!)

Customize the Interview to Simulate Real-World Problems

If you build your interview around the idea of emulating a workday, you can take it a step further by incorporating other elements of day-to-day life. You can think of this as customizing the interview for the experience of working at your company, or you can think of this as prioritizing the attributes that are the most important to your team.

For example, a lot of engineers struggle with letting perfect be the enemy of good. There is something to be said for thoughtful design and upfront planning, but there is a practical balance to be struck, and in a lot of contexts done is better than perfect. From very early on, we wanted to build a team that made pragmatic choices.

You can test for this in your interview by structuring the problem so there isn’t a clean solution. An early version of our interview had the property that there was no perfect design given the constraints we provided – only a series of hacks that would be fine in practice but are inelegant. Lots of good candidates will spend some time trying to figure out a clean solution, which is reasonable, but some candidates will continue doing so well after we’ve unsubtly hinted that there might not be a way. [2]

You’d be surprised how many attributes you can test for in this kind of interview, if you get a little creative.

  • If you want to hire a team of people who are low-ego and easy to reason with, you can add a section in which an engineer on your team proposes alternative designs. Does the candidate react defensively? Are they attached to their own design?
  • If writing is an important part of your day-to-day, and you want to find engineers who write well, you can include a written design document as part of the deliverable. Can you understand the design based on their description, or do you need to clarify in person?
  • If your team builds complicated distributed systems, you can have candidates build a feature that requires reasoning about distributed systems primitives.
  • If your team is geographically distributed, and you want to know what it would be like to work with someone who would be remote, you can conduct the interview remotely. (We do this as well!)

All of this is preferable to standard whiteboard-based software interviews, because what you’re measuring is what determines success at your company.

We bring a similar philosophy to interviewing for other roles. Sales candidates interview by selling us Heap. Then we coach them on how to improve their pitches, after which they attempt to sell us again. Solutions engineers debug common customer issues. In all cases, we are attempting to simulate the job as realistically as possible and determine how someone would perform.

Tradeoffs and Drawbacks

There are some downsides to this process. The biggest is that it’s a lot harder to scale than a traditional panel interview of short, isolated questions. This interview has more degrees of freedom, and you need to trust an interviewer’s judgment. How do you give hints about a problem that the candidate is going to spend another two hours wrestling with? How do you assess the depth with which someone considered alternative designs? There’s a lot of nuance here.

Administering a 45-minute whiteboard problem requires some calibration, but it’s easy to standardize the questions and answers and to build an arsenal of common hints. Interviewing is a skill, and the standard system might be a better option if your interviewers are average or if you need to calibrate hundreds of them.

We’re trusting our engineers to make a much deeper read about a candidate than a typical interviewer would need to. In practice, we only trust a subset of our engineers to lead this process. They perform a function that’s halfway between a typical interviewer and a hiring manager.

Another drawback is that a candidate can fall behind to a degree that is impossible with a series of self-contained questions. In a traditional interview, you can mess up one problem badly and you’ll get a clean slate for the next one. Our process doesn’t work that way, and our interviewers need to be careful to make sure that one hour in which someone is off their game doesn’t spoil the whole day.

A third disadvantage is that candidates might not get to know as many engineers at your company. This is important to a lot of people, and for good reason: your coworkers are one of the most important factors in the quality of your job. We’ve tried a few different ways to mitigate this, but, in a fundamental sense, if candidates are spending most of the day in flow, writing code, that’s time in which they aren’t interacting with their potential peers.

We Love This Interview

We’ve interviewed software engineers this way for five years. The details have evolved over time, but the basic structure has been consistent. It has served us well, and the benefits to our team are hard to overstate.

In addition to intelligence and competence with CS fundamentals, engineers who work here are selected based on how effectively they build real software and how fluidly they work with others. The result is that small teams at Heap get more done than larger teams at other companies.

How do we know this process works? It’s easy to know if your process has false positives – engineers whom you hire, who end up not working out. But it can be very hard to know if you have false negatives – engineers who would have been an asset to your team, whom you rejected.

We have two types of evidence that our interview process provides particularly useful signal, and is not arbitrarily filtering out engineers who would do well here:

  • Sometimes our process flags something about an engineer, and we end up moving forward with an offer anyway. When that happens, the concern usually does manifest on the job. I.e., the outputs coming from this interview process are real, not noise – they are predictive of what it’s like to work with an engineer day-to-day.
  • When we have experimented with hiring engineers who didn’t do well on an aspect of our onsite, they usually didn’t work out. I.e., the key measurements of this process are predictive of what matters for an engineer to be successful here, not just traits that are desirable in the abstract.

Interviewing Is an Asset

We believe interviewing well is a critical function of an engineering organization, not a profane chore in which we should do whatever everyone else does. We see interviewing as a core competency of our team.

An engineering team will converge to the skill level of the least effective engineer who can get through the interview process. [3] What’s more, there are Gresham’s law effects in the job market: companies fight hard to retain their best performers, so engineers who are less effective on the job are overrepresented among those applying for jobs.

So, we think it’s highly valuable to be able to detect engineering skill more effectively than anyone else. And if candidates enjoy the process along the way, that makes it easier to recruit them in a competitive market.

Engineering interviewing is a deep topic, and I would love to know what you’ve seen and what you think. Find me on Twitter @danlovesproofs. Or, if you’d like to give our interview a test drive, we would love to have you – we’re hiring!

[1] There are lots of other factors here. A team in any domain is not the sum of its parts. You can hire a group of effective engineers and still wind up with a totally dysfunctional team. But if you hire a team of people who don’t get a lot done, you will be hard-pressed to produce a team that does get a lot done.
[2] Some candidates even do so after they’ve agreed this is the case. It’s easy to get nerd sniped.
[3] This is oversimplified, because “engineering effectiveness” is not a one-dimensional concept. You want your team to be well-rounded, with lots of different strengths and points of view. But the concept generalizes: if you give offers to mediocre engineers who don’t make the team better, eventually your team will be composed of people who fit that description.

Dan Robinson
