Best Practices for Hiring Software Developers: Front-End, Back-End, & Full-Stack
Hiring software developers is no easy task. Each role calls for its own unique set of skills, and different levels of seniority have entirely different requirements. Those distinctions, paired with constantly changing technologies, make it challenging to nail down a repeatable, sustainable hiring process.
In this piece, we’ll walk through best practices for creating an evaluation process centered on candidate skills, not candidate backgrounds or work history. You’ll find the most in-demand core competencies, the anatomy of an effective technical interview, and best practices for building an assessment framework for your organization. We’ll also share a template for creating assessment rubrics for your own roles.
Read on for our key takeaways:
Moving beyond resumes: how to modernize your screening
A resume doesn’t tell the whole story. Details like the candidate’s name, email address, home address, and degree lead reviewers to make assumptions about their skills right off the bat. And in many ways, that data can be misleading.
Take student developers, for example. HackerRank research has shown that even amongst developers enrolled in a university, more than half still consider themselves at least partially self-taught. One third of them consider themselves entirely self-taught. So if you’re only looking at the coursework on their resume, you’re bound to misinterpret their skills.
And that stretches beyond university hiring. In fact, according to our research, 75% of hiring managers say they’ve hired a highly qualified candidate who didn’t have a strong resume. Put another way: resumes aren’t a reliable indicator of technical skills. Opportunities to overstate (or understate) skills make it easy to mis-sort candidates.
For developer roles, technical assessments can be a useful substitute. With the option to blind out candidate details, they limit opportunities to make decisions based on unconscious bias. And by evaluating all candidates against the same exact criteria, decisions are made objectively—not based on subjective interpretations of phone screens, resume screens, and the like.
The anatomy of a technical assessment
To create an effective technical assessment, the first step is to understand how these assessments are constructed. While early stages of the process, like pre-screening, might focus on a more basic level of qualifications, the technical interview is the time to dig deep into the candidate’s problem-solving capabilities, communication skills, and technical skills.
Algorithmic Challenge
- Correctness: Ability to get to a naive solution
- Optimality: Ability to optimize a solution for more difficult constraints
Real World Challenge
- Correctness: Ability to get to a naive solution
- Code Structure: Is the code one giant file, or modularized well?
- Debugging: Ability to find the bug in a large codebase
- Test-Driven Development: Is there good test case coverage?
System Design Challenge
- Correctness: Create a simple architecture with unlimited resources
- Optimize for Scale: How will the design change for 1M users versus 100k users?
- Knowledge of Technology: What tools will you use for queuing systems, databases, etc.?
Across All Challenges
- Pair Programming: Ability to work together
- Code Readability: Can others read the code? Is it well documented?
- Language Proficiency: Are they fluent in the language of their choice?
- Depth of Questions from Candidate: Indicates general aptitude
While there’s no universal skills rubric for all technical roles, interviews for front-end, back-end, and full-stack roles generally include some combination of the above.
It’s important to note that the goal isn’t to test technical skills through arbitrary questions, such as brain teasers. Instead, the goal is to understand how candidates apply their technical skills in the context of a relevant problem. So challenges have to be tailored to the role.
The goal is to create a challenge that’s consistent with the job’s day-to-day responsibilities. So if your company is an investment bank, for example, you probably wouldn’t want to create a technical interview about building a weather app. A challenge aligned to the job gives you a better signal of the candidate’s fit in the role, and gives them a better picture of what it’s like to work at your org.
Choosing the right assessment
The next step is to create an assessment framework for your open roles. What experience levels are you looking for? And how will you change the questions you ask based on the experience level of the developer? Creating a framework that identifies these distinctions will help you make decisions as you construct assessments for each respective role. Ultimately, the goal is to ensure that you’re asking questions that are closely tailored to the role responsibilities, and to the experience level.
Take, for example, the sample assessment rubric below. For a back-end developer, you might want to test for skills like code understanding, code testing, and problem solving, regardless of their experience level. But the candidate’s experience level will define the questions you ask to assess each of those skills.
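To make the idea concrete, here’s one way such a rubric could be sketched in code. All skill names and question styles below are illustrative assumptions, not an actual HackerRank rubric:

```python
# Hypothetical back-end rubric: the same skills are tested at every level,
# but the question style changes with candidate experience.
BACKEND_RUBRIC = {
    "code understanding": {
        "entry": "Explain what a short, well-scoped function does",
        "senior": "Trace a bug through a multi-module codebase",
    },
    "code testing": {
        "entry": "Write unit tests for a single function",
        "senior": "Design test coverage for a service with external dependencies",
    },
    "problem solving": {
        "entry": "Implement a naive solution to a small, well-defined task",
        "senior": "Optimize a solution under tight time and space constraints",
    },
}

def questions_for(level: str) -> dict:
    """Return the question style to use for each skill at a given level."""
    return {skill: styles[level] for skill, styles in BACKEND_RUBRIC.items()}
```

Writing the rubric down in a structured form like this makes it easy to confirm that every skill is covered at every experience level you hire for.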
Simulating on-the-job experiences
Aside from ensuring you’re tailoring questions to the role and experience level of the candidate, you also need to ensure that your assessments are tailored to the job’s day-to-day tasks. That means using your assessments as an opportunity to simulate on-the-job experiences.
Generally, simple coding questions are most appropriate for entry-level candidates without much work experience. On the other hand, real-world questions do a better job of showcasing the abilities of those with more experience.
For example, a simple coding question might look like this: You must merge strings a and b, and then return a single merged string. A merge operation on two strings is described as follows:
- Append alternating characters from a and b, respectively, to some new string, mergedString.
- Once all of the characters in one of the strings have been merged, append the remaining characters in the other string to mergedString.
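A correct answer to this question could look like the following Python sketch (the function name is ours):

```python
def merge_strings(a: str, b: str) -> str:
    """Alternate characters from a and b, then append the leftover tail."""
    merged = []
    for ch_a, ch_b in zip(a, b):  # pairs up characters until one string runs out
        merged.append(ch_a)
        merged.append(ch_b)
    n = min(len(a), len(b))
    # At most one of these slices is non-empty: the unmerged tail.
    return "".join(merged) + a[n:] + b[n:]
```

For example, merging "abc" and "XYZW" yields "aXbYcZW".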
A real-world question for a front-end role, on the other hand, might ask the candidate to build a country filter.
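As a sketch of the kind of logic such a question might target (assuming a prefix-based, case-insensitive match; the full question spec isn’t reproduced here):

```python
def filter_countries(countries: list[str], query: str) -> list[str]:
    """Return the countries whose names start with the query, ignoring case."""
    q = query.strip().lower()
    return [c for c in countries if c.lower().startswith(q)]
```

In a real interview the candidate would typically wire this logic to a UI, which is exactly the kind of day-to-day work the question is meant to simulate.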
Evaluating assessment performance data
After designing an assessment for a given role, the next (and perhaps most important) step is to gather assessment performance data. The goal is to answer questions like:
- What’s your completion rate? Are candidates taking your assessments or ignoring them?
- How are candidates responding to your assessments? Do they feel the content is relevant?
- Are the candidates passing the assessment performing well in subsequent steps of the evaluation?
- Is your assessment set at the right difficulty level? Is it passing too many candidates, or is it too restrictive?
- Is the average test completion time in line with your team’s intentions?
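Several of these questions boil down to simple ratios over funnel counts. A minimal sketch, using hypothetical field names of our own:

```python
def funnel_rates(funnel: dict[str, int]) -> dict[str, float]:
    """Compute the ratios hiring teams typically monitor for an assessment."""
    return {
        "attempt_rate": funnel["attempted"] / funnel["invited"],
        "completion_rate": funnel["completed"] / funnel["attempted"],
        "pass_rate": funnel["passed"] / funnel["completed"],
    }

# A very low completion rate suggests candidates are abandoning the test;
# a very high pass rate suggests the assessment may be too easy.
rates = funnel_rates({"invited": 200, "attempted": 120, "completed": 90, "passed": 30})
```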
For example, HackerRank measures this via the Test Health Dashboard, which benchmarks each assessment across key indicators like test link clicks, attempt rates, test completion rates, average attempt duration, and more. While you’ll get your most meaningful performance data from attempts in the field, you can also calibrate your assessment before it goes out to candidates. Recruit a cohort of about 20 peers to take the test internally. For example, if you’re creating an assessment for entry-level front-end developers, ask recent hires who fit the same description to try the assessment.
If all of your developers are failing the assessment, there’s a good chance you’ll need to tweak the questions. On the flip side, if they’re all breezing through it far under the allotted time, it won’t be a meaningful signal for your team.
Once you’ve calibrated your assessment internally and then used it with roughly 100 candidates in the field, you can circle back to re-evaluate the test’s effectiveness.
Three final tips for creating a skills-first evaluation process
Move beyond resumes
Focus on skills, not resumes. Resumes are minefields for unconscious bias, and aren’t representative of candidate skills. They can overstate (or understate) technical skills, which leads to false signals early in the hiring process. As an alternative, technical skills assessments evaluate all candidates against the same rubric—which means more objective decision-making.
Look deeper than language proficiency
Language proficiency is only the tip of the iceberg when it comes to assessing skills fit. Avoid oversimplifying your assessments with basic coding questions, especially for more senior roles.
Instead, create a rubric of the skills you need for each role, and the question types you’d pose for each experience level within that role. Ensure you’re asking questions that represent the full breadth of required skills, not questions that just scratch the surface.
Focus on candidate experience
Especially for more experienced candidates, it’s important to use your assessments to simulate on-the-job experiences. Simple coding questions may work well for entry-level candidates in some roles, but candidates with work experience should get real-world coding questions.
Create questions that mimic the day-to-day responsibilities of the role at hand. Not only will it help you better evaluate the candidate’s skills in context, but it’ll ensure the candidates know what they’re signing up for in the role.