Jamie Talbot
Published in Medium Engineering · Jun 9, 2016

We recently undertook a project to improve Medium’s engineering interview process. As part of that, I wrote this document to describe why we needed to improve on what we were already doing. It was originally published to Hatch, our internal version of Medium, on February 26, 2016, and is reproduced below without modification, except for link destinations. For more information about Medium’s practice of making internal documents public, see Hatching Inside Medium.

Engineering interviews: refining our process

While we are more thoughtful about interviewing than many companies, our process could nevertheless do with some refinements. Over the last few weeks, I’ve been working with a number of people — internally and externally — to reconsider what it is we actually look for in engineering candidates, and how best to objectively screen for those qualities.

Current issues

Our current candidate scorecard and grading process suffer from a number of deficiencies, which we hope to resolve.

Unclear capabilities

The capabilities we ask for are poorly defined, and sometimes overlap. Nobody can really explain the difference between Critical reasoning and Strong analytical and problem solving skills. The capabilities are listed at different granularities — CS Fundamentals is quite granular (though ambiguous), while Technical pass is a high level check mark. Cultural pass overlaps with the values section.

What are we looking for?

It’s not clear which of these capabilities are absolutely necessary, and which are merely desired. There’s no high-level organisation to the capabilities: we have separate sections for Skills and Qualifications without a clear explanation of how they differ. Going only off the rubric, we would be hard-pressed to say what we actually look for in engineering candidates at a high level. We also omit some capabilities that we know we care about, e.g. Awareness and Empathy.

Inconsistent grading

We rely extensively on “calibration” — which, let’s be honest, more often than not just means “get comfortable with making a call”. Each interviewer has a different idea of what “Strong” means in Strong analytical and problem solving skills. Even if interviewers are internally consistent from candidate to candidate — and, spoiler alert, we’re not — there is a large difference between interviewers. Because we don’t specifically define how important these capabilities are, it follows that different interviewers have different opinions on which capabilities matter most. It is likely that they subconsciously place a higher value on the capabilities they themselves exemplify.

What don’t we screen for?

We provide no guidance to interviewers on the things that are unimportant. The tech industry has a habit of screening candidates on criteria that are uncorrelated (and in some cases negatively correlated) with job performance. By failing to be explicit about the things that we consider unimportant, we risk letting people make decisions based on them.

Personality traits

Some capabilities that we have looked for, like Confidence, are not universally positive, and under certain circumstances may even be a negative signal. Many great engineers at Medium appear outwardly to lack confidence.

More broadly, Cultural pass is very ambiguous and invites subjectivity in a way that allows for unchecked bias. We recognised last year that our Personality Traits section needed an update.

Aims

The aims of the refreshed interview process are to:

  • Be more objective and consistent in our assessment of candidates.
  • Continue to hire great people who can help us build a platform for the whole world.
  • Hire only those people who share our values — regardless of their cultural background.
  • Benefit from candidates who we think can thrive at Medium with a not-insurmountable amount of technical coaching.
  • Understand, accept and work to mitigate our biases, and focus on reporting objective interview performance.

And no, we are not “lowering the bar”.

In addition to this introductory piece, there are three living documents that are designed to help us achieve these outcomes. These have been designed in consultation with engineering leadership, frequent current interviewers at Medium, and external subject matter experts from companies brought together by CODE2040.

Part 1: What we screen for

What we screen for is an explicit statement of the things that we care about, grouped into three high-level areas that are easy to understand and communicate.

Part 2: What we don’t screen for

What we don’t screen for lists a number of criteria, some of which we have previously screened on, that we do not consider critical to one’s performance as an engineer at Medium.

Part 3: Grading rubric

The grading rubric is a pretty exhaustive enumeration of each desired category and a guide to what might indicate a Strong No, No, Mixed, Yes, or Strong Yes signal for each.

Changes

At a high level, there will be a few changes. Most notable among them:

  • All categories for which a grade is recorded must be accompanied by at least one explanatory, objective piece of evidence. In general this means an observable fact: something the candidate said or did.
  • Candidates for whom we have previously said No on a technical basis may still be eligible, iff their technical deficiencies are minor enough to be corrected with an amount of coaching the current team can reasonably commit to, the candidate is strong in learning and teaching, and there is strong values alignment.

Next steps

This is just the beginning of the process. In the coming weeks and months we will:

  • Work with existing interviewers to help them understand and internalise the changes.
  • Update the scorecard in Lever to reflect the new categories.
  • Create a one-sheet to take into interviews to help the interviewer record observable facts.
  • Assess the format of our interviews and determine whether they give us a strong enough signal.
  • Devise a training plan with recruiting to onboard new interviewers.

We may also publish these four documents on the Medium Eng blog, to share what we’ve learned with others in the industry and to demonstrate to prospective candidates that they can expect a thoughtful interview experience.
