New Product Updates: Enabling Remote Hiring with HackerRank
Hiring the right candidate is hard. Finding, evaluating, and hiring that candidate remotely (and without the right tech) is even harder.
At HackerRank, our goal is to provide companies with the tech they need to effectively evaluate developers—without sacrificing the interviewer insights and candidate experience you’d expect from a traditional onsite. We’re dedicated to helping you deliver a developer-friendly interview experience: one that effectively assesses both technical and soft skills through a process you can trust.
With those objectives in mind, we’re making significant improvements to our platform to make remote hiring simpler, more efficient, and more consistent. Want to stay in the loop as we announce more updates over the next few weeks? Check back on this post, or follow us on LinkedIn to be the first to know—we’ll be announcing updates there as they happen.
Latest Update: April 23, 2020
Jump to a specific update:
- April 23, 2020: Improved candidate experience with the Monaco Editor
- April 15, 2020: Assessing system design skills with CodePair
- April 9, 2020: Enabling a connected hiring process for better candidate experience
- April 2, 2020: Skill-based candidate feedback with interviewer scorecards
Improved candidate experience with the Monaco Editor
April 23, 2020 | By Raghav Gopalakrishnan
Why it matters
At HackerRank, facilitating developer-friendly, candidate-centric experiences is one of our top priorities; we call it “developer love.” Our goal is to enable our customers to create effective, painless interviewing experiences that developers love—and in turn, reinforce the strength of your tech talent brand.
We’re lucky to have a community of over 7 million developers on our platform. Thanks to that community, we’re able to gain insights on how developers like to code, what tools they utilize most, and more. It’s our guiding light when seeking new ways to optimize the candidate experience on our platform.
To further that goal, we’ve integrated the Monaco Editor into CodePair. The Monaco Editor is the open source editor that powers Microsoft’s Visual Studio Code (VS Code), the most popular IDE currently available. That popularity makes the Monaco Editor a more intuitive, more familiar editor to use during coding interviews via CodePair. Monaco has been available in CodeScreen and the HackerRank community for more than a year—and now we’re proud to introduce it into CodePair, too.
How it works
The Monaco Editor functions the same way as the editor in VS Code, offering the same functionality already available in CodeScreen and the HackerRank community. It boasts a number of developer-friendly features, like:
- Code linting
- Context-aware autocomplete (IntelliSense)
- Code hints
- Keyboard shortcuts
- Syntax highlighting
The combination of these features enables a more efficient programming experience by reducing keystrokes and surfacing errors early, resulting in a better experience for candidates and interviewers alike.
With this update, HackerRank now supports IntelliSense for more than 10 languages:
- Java 7/Java 8
- Python 2/Python 3
The Monaco Editor is currently available in CodePair for all existing customers. Start a CodePair session to try it today.
Assessing system design skills with CodePair
April 15, 2020 | By Harishankaran Karunanidhi
Why it matters
An effective developer interview process evaluates a few primary skills:
- Problem-solving: Ability to split a problem into subproblems and to convert them into code
- Role-specific: Ability to apply role-relevant skills, such as framework knowledge for a front-end or full-stack developer, or AWS skills for a DevOps engineer
- System design: Ability to architect a system to be reliable at scale
System design is particularly important, as it evaluates a developer across four key dimensions:
- Verify if the individual is able to understand the constraints and use cases
- Check if the developer can construct a high-level design
- Validate if the developer understands the various bottlenecks
- Understand if the developer is able to remove the bottlenecks and scale the design
In onsite interviews, the interviewer and candidate typically use a whiteboard to assess system design skills. They use it to collaborate as they talk through technical processes by visualizing their thoughts, sketching designs, and more. That’s especially key in system design interviews, where visuals—not coding questions—are core to the evaluation. To go deeper than basic problem solving, interviewers need a means to evaluate candidates’ design skills and their approach to communicating solutions to high-level problems.
But when interviewing from afar, a physical whiteboard isn’t an option. CodePair provides a collaborative IDE to evaluate the candidates’ hands-on coding skills in real time. But shared visual tools are key for digging deeper on system design skills, and for sketching out abstract thoughts that aren’t easily represented in a concise snippet of code.
How it works
CodePair supports diagram questions, where candidates can draw system and architecture diagrams and the interviewer can view the diagram in real time. The goal is to give interviewers and candidates a collaborative space to explore thoughts and designs effectively and concisely—just like you would with a whiteboard at an onsite interview.
Interviewers and candidates can use a combination of standard diagramming shapes, connections, and text to convey thoughts and designs directly through CodePair. CodePair also supports multiple diagrams over the course of the interview, so interviewers can explore several designs and save all of them.
For example, consider this system design interview challenge:
Design a rate limiter which will:
- limit clients to N requests per minute
- respond with an error message if a client sends more than N requests per minute
The candidate might start with a primitive, non-scalable approach, where every single request is logged into a table. Every time a request is received by the server, it checks the total requests received in the current minute. If the count is less than the allowed limit, the request is accepted and a row is inserted in the table with the api_key and current timestamp. This architecture diagram is depicted inside CodePair below.
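This naive approach can be sketched in a few lines. The following is a minimal illustration, not HackerRank’s implementation: the `request_log` table, the SQLite backend, and the limit value are all assumptions made for the example.

```python
import sqlite3
import time

LIMIT_PER_MINUTE = 5  # "N" from the challenge; illustrative value


def init_db(conn):
    # One row per accepted request: which client, and when.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS request_log (api_key TEXT, ts REAL)"
    )


def allow_request(conn, api_key):
    """Return True if the client is still under its per-minute limit."""
    window_start = time.time() - 60
    # Count this client's requests in the trailing 60-second window.
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM request_log WHERE api_key = ? AND ts > ?",
        (api_key, window_start),
    ).fetchone()
    if count >= LIMIT_PER_MINUTE:
        return False  # over the limit: the server responds with an error
    # Under the limit: log the request and accept it.
    conn.execute(
        "INSERT INTO request_log (api_key, ts) VALUES (?, ?)",
        (api_key, time.time()),
    )
    return True
```

Note that every request triggers a count query plus an insert against the database, which is exactly the scaling weakness the rest of the interview explores.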
One obvious flaw with this architecture is that the MySQL database server is a single point of failure. Once the interviewer points this out, the candidate updates the design to add a few read replicas to the master database, as illustrated by the diagram below.
Of course, this doesn't work either if the number of requests is high, as the latency MySQL adds to the whole system may be too great. So after some discussion, the candidate and interviewer agree that a better solution would be to replace MySQL with Redis. The system design is once again easily updated by the candidate.
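A sketch of how the Redis-backed version might look, using the common fixed-window counter pattern. The key scheme and the `FakeRedis` stub below are illustrative assumptions so the example runs standalone; a real deployment would use an actual `redis` client, whose atomic `INCR` and `EXPIRE` commands this pattern relies on.

```python
import time


def allow_request(r, api_key, limit=5):
    """Fixed-window rate limiter: one counter per client per minute."""
    window = int(time.time() // 60)
    key = f"rate:{api_key}:{window}"
    count = r.incr(key)      # atomic increment in real Redis
    if count == 1:
        r.expire(key, 60)    # let the counter die with its window
    return count <= limit


class FakeRedis:
    """Tiny in-memory stand-in so the sketch runs without a Redis server."""

    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def expire(self, key, seconds):
        pass  # expiry is a no-op in this stub
```

The per-request cost drops from a table scan plus an insert to a single in-memory increment, which is why the swap addresses the latency concern raised above.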
The interview may not end there, but once completed, a snapshot of every design is summarized in the candidate report. This enables interviewers to review the candidate’s work before a debrief, and to use it as a visual aid to explain the candidate’s approach to other panel members.
Most importantly, the interviewer has effectively assessed the candidate’s system design skills in a developer-friendly and entirely remote interaction.
We’re rapidly iterating on this feature and have even more exciting updates to announce soon. Be sure to check back here or reach out to your HackerRank customer success manager to get a preview.
Enabling a connected hiring process for better candidate experience
April 9, 2020 | By Raghav Gopalakrishnan
Why it matters
A developer-friendly hiring process is key for any organization looking to hire great talent. Most hiring processes require a screening assessment as a first step, where candidates spend 60+ minutes coding a solution for a predefined challenge. Once they qualify, the next step is a remote interview, where the candidate gets a new set of challenges, and has to showcase their coding skills from scratch once again.
It’s a subpar experience for both the candidate and the interviewer. First, the disconnected interviewing process takes a toll on candidates, who have to solve new coding questions at every step of the hiring process. Second, it risks skewing the opinions of the interviewer; after all, their feedback is based solely on the candidate’s performance in the interview, with no context from their screening assessment.
By pulling code from the screening test into the interview, the interviewer can base their interview questions on the candidate’s previously submitted code. It’s a less strenuous approach for candidates, who don’t have to worry about learning and solving a new problem within the span of an interview. It also reduces idle time for interviewers, who would otherwise have to wait for the candidate to understand and solve the new question before starting discussion. And more time for in-depth discussion means more time to ensure you’re hiring the best-fit candidate.
How it works
To facilitate a connected interview process, we launched CodeScreen-CodePair integration last year. The integration allows the interviewer to pull questions individually from the candidate's screening assessment into a live CodePair interview.
Since its launch, we’ve seen many customers utilize this integration to improve their interview experience. Now, to make this integration even more useful, we’re introducing a new way to import assessments into CodePair interviews.
At the start of a CodePair session, the interviewer will have the opportunity to import the candidate’s completed assessment into the interview. Once imported, each question from the assessment will open in a separate tab, along with the candidate’s submitted code and score for that question—giving the interviewer more context. For example, the interviewer can ask questions like:
- How did you arrive at this solution?
- Why do you think you were unsuccessful in solving this problem in the test? Could you solve it now?
- If you were to improve on this solution, what would you change?
For those who have enabled identity anonymization for their account through the Diversity & Inclusion Center, the import window will blind out the candidate’s name. This protects interviewers from seeing identifying information that might subconsciously influence their evaluation.
More than 40% of our customers have already started importing assessments into their technical interviews to create a more developer-friendly hiring process. Try it yourself by starting a new CodePair interview from your account today.
Skill-based candidate feedback with interviewer scorecards
April 2, 2020 | By Raghav Gopalakrishnan
Why it matters
When it comes to remote technical interviews, the ability to effectively collect and synthesize interviewer feedback is core to assessing candidate fit.
But that’s easier said than done. More often than not, interviewers end up taking their own notes: in their own document, in their own style, and on their own time. Gathering and synthesizing that feedback from interviewers is not only time-consuming: it also yields more generic, imprecise feedback. After all, it’s challenging to recall the details of an interview that happened hours, or even days, ago.
To streamline that process, we built a new interviewer scorecard function into CodePair. Blending seamlessly into the existing CodePair interface, the private scorecard allows interviewers to record feedback on candidate skills throughout the interview.
The goal is to give interviewers a simple way to capture feedback during the interview, focused on the candidate’s key skills. The scorecard also helps ensure structured, standardized feedback from every interviewer—keeping debrief meetings focused on what matters most.
How it works
As a part of this update, we’ve added the interviewer scorecard directly into the CodePair interface. The scorecard allows interviewers to rank candidates against four key skills:
- Code Quality: Can the candidate write code that’s modular, maintainable, and follows industry standards?
- Problem Solving: Does the candidate show proficiency in basic data structures (e.g. arrays or strings) and algorithms (e.g. sorting and searching)?
- Language Proficiency: Is the candidate able to understand and use different features of the language utilized in the interview?
- Technical Communication: Can the candidate clearly communicate technical concepts?
Interviewers can now rank candidates against each skill using a five-point scale ranging from “Strong Yes” to “Strong No.” Interviewers can also leave more detailed feedback for each skill to elaborate on their choice. In addition to the four key skills, we’ve also enabled interviewers to leave overall feedback about the candidate.
Interviewers can review a collection of feedback via the CodePair candidate report, which summarizes feedback from the interview for easier collective review. Those collecting interview feedback elsewhere—like an applicant tracking system (ATS)—can also copy and paste their scorecard into their ATS notes to streamline candidate review.
We've refocused our engineering efforts on refining and enhancing our remote hiring technology. To that end, we'll be sharing a number of new improvements to support remote hiring in the coming weeks. To be the first to know about new updates, check back on this post, or follow us on LinkedIn.