Why Orgspace doesn’t use algorithmic challenges for screening candidates

At Geektrust, our aim is to make sure every developer gets the best possible opportunity based on their skill and potential. Unfortunately, in today’s highly competitive hiring marketplace, many developers don’t get the opportunities they deserve because of bureaucratic, archaic recruitment processes that rely on resumes to identify skill. In the last few years, companies have been relying on coding tests to assess a candidate’s potential. However, these tests focus on algorithmic challenges and data-structure-based assessments that have no correlation with what developers do in the real world.

We ask developers to solve small coding challenges at their leisure, in an IDE of their choice, and with no deadlines. Based on the code they submit, we assess their ability to write clean code and match them with companies accordingly. The best part is that a developer needs to take up only one coding challenge to get matched with multiple companies and bypass their coding rounds!

When I came across this article by my friend and ex-ThoughtWorks colleague Brian Guthrie about how they assess candidates at their startup, Orgspace, it resonated with me. He agreed to share it with the Geektrust community. Thank you, Brian (and Aaron).

 


At Orgspace, we recently rolled out a new process for assessing engineering candidates. Notably, we do not use LeetCode, HackerRank, or other third-party tools for screening and assessing prospective engineers. Candidates who go through this process describe it as a breath of fresh air—it prioritizes mutual engagement but still feels challenging and rewarding. We think this approach is incredibly strong, and that more companies should consider assessing candidates this way.

What is an algorithmic challenge?

The history of algorithmic coding challenges traces back to the ACM International Collegiate Programming Contest (ICPC), which started in the 1970s. The ICPC is a global programming competition that challenges teams of students to solve complex algorithmic problems. For many students, these problems are stimulating and rewarding—as an undergraduate, I loved this sort of thing. Because they look tough, and because they reward deep knowledge of computer science fundamentals, coding challenges have become a popular tool for companies to evaluate engineering candidates.

Companies also like them for certain other properties—they can be delivered and assessed through automation, meaning that they don’t take time away from your engineers to screen; they set a perceptible bar that’s relatively easy to assess consistently; and they can be adjusted to screen for the specific skills you’re looking to assess. For all of these reasons, they are a standard tool for most recruiters, who often treat candidate assessment as a funnel process for eventually landing a butt in a seat.

At Orgspace, we do some interesting algorithmic work from time to time—we love our trees and graphs—but we nonetheless believe that craft is the right lens through which to view candidate assessment. The strength of a code challenge is in its simplicity, not its complexity.

The challenge with challenges

There’s a rich discourse on the structural issues with engineering candidate assessment that I won’t rehash here except to say that I’m broadly sympathetic to it. However, even for candidates for whom these kinds of assessments are suitable, this mode of screening has issues:

  • They don’t map to the kinds of problems most companies encounter most frequently. Tech companies are notorious for screening candidates for technical chops and then asking them to build HTML forms all day. If 95% of a candidate’s job requires executing simple code well, then I don’t want to screen for the 5% they’re least likely to run into, and most likely to be able to get help on.
  • They don’t map to the kinds of mistakes engineers make most frequently. There are certainly cases where the right algorithm saved a company time or money, but in general companies suffer from their inability to ship relatively straightforward features reliably and swiftly. And when these capabilities degrade as companies get larger, it isn’t because their engineers lack algorithm skills.
  • Complex challenges encourage and reward complexity. This is one of the most damaging ways that interviews—which send a strong signal to prospective employees about what you value and reward—build the wrong kind of culture. If your interviews reward mastering complex problems, then employees will seek ways to demonstrate that mastery time and time again. You’ll get complex, and often needless, code.
  • They don’t treat candidates like people. Assessments assigned and reviewed in bulk by machines will get solutions generated by machines and search queries. This kind of cat-and-mouse game will only get worse as AIs become more capable of generating high-quality, robust code.

We believe that this method of assessment is a poor fit for companies that are looking for top-quality candidates, want to distinguish their technical brand in the market, and want to hire people who are phenomenally good at the kinds of problems that most engineering teams face most frequently.

Simplicity, not complexity, is key to good software engineering

At Orgspace, we believe that the ability to solve a simple problem gracefully and cleanly is a more important predictor of success than mastery of algorithms and data structures—while those are important, they can also be learnt along the way. Simplicity, on the other hand, is an attitude—one that can be cultivated, and sometimes won through hard experience, but not something you can find with a search engine.

Moreover, what we’ve found is that the ability to execute a simple problem cleanly is a better predictor of your ability to handle complexity than the reverse—screening for complexity rewards complex solutions to simple problems. That kind of complexity will kill your ability to execute well at scale. For a company that manages as broad a business domain as ours, we screen for people who can keep things simple.
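
To make that contrast concrete, here is a purely hypothetical sketch (not one of our actual challenges) of the kind of everyday task where we would rather see a plain, readable solution than a clever one. The `Transaction` type and `totalsByCategory` function below are illustrative names only.

```typescript
// Purely hypothetical example (not an Orgspace challenge): summing expenses by
// category. A clean, boring solution like this is what we would hope to see.
interface Transaction {
  category: string;
  amount: number; // amount in cents, to sidestep floating-point drift
}

// One pass, one Map, no clever data structures or premature abstractions.
function totalsByCategory(transactions: Transaction[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { category, amount } of transactions) {
    totals.set(category, (totals.get(category) ?? 0) + amount);
  }
  return totals;
}

// Usage:
const ledger: Transaction[] = [
  { category: "travel", amount: 1250 },
  { category: "meals", amount: 800 },
  { category: "travel", amount: 3000 },
];
console.log(totalsByCategory(ledger)); // Map(2) { 'travel' => 4250, 'meals' => 800 }
```

A candidate who reaches for a self-balancing tree or a plugin framework here might ace many algorithmic screens; through a simplicity lens, the boring version above is the stronger answer.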

To learn how a candidate works, work with the candidate

We also believe that you can learn an enormous amount about someone by simply sitting down and doing the work with them. Pair programming reveals, relatively quickly, how someone approaches a problem. Indeed, for better and for worse, we often learn everything we’d like to know about a candidate’s skill level after fifteen minutes with them.

Pair programming presents its own set of equity and fairness challenges. It can feel intimidating to work with senior engineers; people are uncomfortable being judged in real time; interviewers can, if they’re not careful, put off a candidate or make them feel disrespected or discouraged. Mitigating these requires careful thought:

  • We give the candidate an advantage. We bias towards a candidate’s tools and code, so they’re always starting with an edge over their interviewers, and we open-source the challenge, so that candidate and interviewer are starting on an equal footing.
  • We use a rubric. As much as we can, we align on what a good candidate looks like, what a good solution can look like, and what kind of teammate we want. We share these, in broad strokes, in the pairing assessment itself—no guesswork involved.
  • We train our engineers on pairing assessments. We ask lots of leading, open questions; we assume good intent and expertise; we swap driver and navigator in pursuit of a shared solution to the problem; and we encourage candidates to pursue their own strategies, even if they aren’t the ones we would have chosen ourselves.
  • We go beyond the code. We describe our assessment as a “starting point for a conversation on approach and values,” and we mean it: during the course of a conversation we touch on domain modeling, infrastructure, architecture, and a wide range of other meaningful engineering topics.

Does it work?

Spectacularly, yes.

We aren’t the first company to approach candidates this way—ThoughtWorks and Pivotal Labs, both companies that share a lot of DNA with Orgspace, famously use lightweight challenges, pair with candidates, or both. We’ve also seen firsthand, as engineering leaders, how algorithmic challenges failed to build the kinds of engineering teams we felt companies needed most to solve their most pressing problems.

I strongly believe that biasing in favor of a candidate’s technical strengths doesn’t hinder your ability to understand their fit for your role, and this is especially true in close technical assessments—in pairing, there’s nowhere to hide. That matters all the more for startups in a tough economy, where every hire has a meaningful impact on the company’s success. We’re using this approach, right now, to hire the best people to build the best people software. If you find this approach interesting, and you’d like to work with us, feel free to reach out to me directly.

This article was originally published on the Orgspace Blog.

About Geektrust

Geektrust is a platform for technologists to find opportunities that match their skill and potential. We are building The New Resume to replace the obsolete, old-fashioned resume, shaking up how the world looks at tech hiring, and making recruitment transparent, accountable and efficient.

Liked this story? Please share it or leave us a comment.
