Olly Headey

March 11, 2024

Technical interviews in 2024

Technical interviews have long been a hot topic of conversation (and head-scratching) in tech circles, but candidates now have technology at their fingertips that was unimaginable a few years ago.

Apparently, things are out of control

I’ve done hundreds of interviews over the years and my approach has always been fairly simple: a CV screen, technical test, follow-up code review call, then technical and non-technical interviews (that’s simple, right? 😅). We put a lot of effort into reducing cognitive biases, but the underlying process didn’t change much. Given that the majority of people we hired at FreeAgent were very good, it’s fair to say this approach works pretty well.

The stage that has come under increased scrutiny recently is the technical test. Is it fair? Should candidates be paid? Is it now pointless since AI can produce wonderful solutions in seconds? 

I think technical tests are fair, important, and not entirely pointless, even in 2024. You’re hiring technicians and you’ll be paying them handsomely, so you need a measure of their technical competency before you make an offer. Would you hire an electrician without asking them to rewire a circuit, or a plumber without asking them to fix a leak? Just because someone talks a good game doesn’t mean they’re actually any good on the job. You need more insight than that.

I also think it’s fine not to pay candidates for doing the test (it’s ~2–4 hours), but if you have the funds to do so then great – go for it! It might make your company more attractive to applicants. On the flip side I’ve heard tales of week-long test projects and extensive on-the-job pairing. If you’re doing this you should definitely pay, and pay well.

I’m a fan of practical technical tests. Not those awful “How Would You Move Mount Fuji?” riddles (I’m a decent programmer but genuinely appalling at these). Ask candidates to do something simple yet tangible. For many years at FreeAgent the test we asked candidates to complete involved parsing an FX data feed from a URL, then building a very basic UI to perform currency conversions on that data (Rails makes this a breeze!). It was an extraction from a basic library I built in FreeAgent and, despite being simple, the test proved to be a good one.
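For a flavour of what the core of a solution might look like, here’s a minimal Ruby sketch. The feed URL, JSON shape and rate convention are my own assumptions for illustration – the real test specified its own data source, and the UI layer is left out entirely.

```ruby
require "net/http"
require "json"

# Fetches a rates feed once and converts between currencies via the base.
# The feed shape here is hypothetical: {"base": "EUR", "rates": {"USD": 1.09, ...}}
class CurrencyConverter
  def initialize(feed_url)
    feed   = JSON.parse(Net::HTTP.get(URI(feed_url)))
    @base  = feed.fetch("base")
    @rates = feed.fetch("rates")
  end

  # Rates are assumed to be quoted as units of currency per 1 unit of the
  # base currency, so we convert from -> base -> to.
  def convert(amount, from:, to:)
    amount / rate(from) * rate(to)
  end

  private

  def rate(currency)
    currency == @base ? 1.0 : @rates.fetch(currency)
  end
end

# Hypothetical feed URL; the real test pointed candidates at a specific source.
converter = CurrencyConverter.new("https://example.com/rates.json")
puts converter.convert(100.0, from: "USD", to: "GBP")
```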

However, this was before the advent of LLMs.

While writing this article I tried prompting ChatGPT to write the code for the FreeAgent test (I found the instructions on the web 😳) and it did a really good job. It could easily create the individual files, write the tests, lay out atomic commits, and even produce the documentation – all without my expending much effort on the prompt. It would easily have passed the test!

So if anyone with rudimentary software dev skills can produce a more-than-passable solution, surely this makes the coding test stage completely redundant?

I don’t think so. 

If someone were completely winging it with AI and had no idea what they were really doing, any follow-up call to discuss their implementation would be a train wreck. Imposters would be quickly unmasked. However, someone with a decent programming grounding could probably get through a review call too: they’d understand the output the LLM produced, and they’d be able to reason about it. In that case the test would demonstrate that they had the nous to use LLMs effectively and could talk confidently about software design decisions in person. You’d hope the subsequent real-time technical interview would raise any competency red flags, but it’s still a risk. Ultimately, though, a take-home test run this way is a largely futile exercise and does little to deter cheats.

Way back, we used to do the FreeAgent programming test in the office. Some candidates found this stressful, so we moved to take-home tests and it worked out fine. In this new AI age, reverting to an in-person (in-office or real-time video) test is probably a good idea. It’s what I would do if I actually had a job right now 😅 (wanna hire me?!). I’d pair an engineer with the candidate on a problem – fixing a bug, shipping a small feature change, or maybe just hacking on the same original test. You could allow candidates to use Google or even ChatGPT! Engineers should be using LLMs day-to-day just as everyone already uses Google, right? If they’re not, they’re missing out and probably taking too long to ship. Truth.

You might think there’s a downside to this approach because it sounds like more time and effort for the hiring team. However, since you’re combining the test and the follow-up call, it could actually speed up your hiring process. You’d need to be more selective about inviting candidates to this stage, but that’s a good thing – all too often a take-home test gets sent out when you’re not actually that sure about the candidate, simply because it’s cheap for the employer, so why not? This is something of a dark (or lazy) pattern, so being more rigorous with candidate selection at this stage will benefit both sides.

It’s important to remember that no process is foolproof. At some point you’ll hire someone who doesn’t make the grade, for whatever reason. To minimise the impact, you need a rigorous onboarding, mentoring and evaluation process in place so you can identify issues and take action as early as possible. The last thing your business needs is to spend six months (or longer) figuring out that someone isn’t good enough – that’s a massive hit to productivity and morale, and a big topic in itself which I’ll save for future articles.

Happy hiring!


About Olly Headey

Journal of Olly Headey. Co-founder of FreeAgent. 37signals alumni. Photographer.
More at headey.net.