(This is the second of a three-part series on the development and refinement of the Bright Score, an algorithm that quickly and efficiently matches resumes to job openings. If you missed Part 1, read it here. You can find Part 3 here.)
In creating the technology to more efficiently match candidates to jobs and better identify talent on the Internet, Bright conducted the Human Insights Resume Evaluator Study (HIRES), the first and largest resume-to-job matching study in history. In it, 10 HR professionals — recruited from multiple functions within HR, including sourcers, generalists, recruiters, and managers — scored 10,000 pairs of candidate resumes (about 8,800 of them unique) and job descriptions to qualify (or disqualify) the candidates for each position.
Meanwhile, we built an algorithm, the Bright Score, that evaluated about 100 different data points (or “features”), taken from a candidate’s resume and publicly available social media information, against the requirements for a given job. Using the results of the HIRES study to determine the most important features of successful job-candidate matches, we optimized the algorithm to examine 77 resume and social media features and “trained” it to deliver a score between 0 and 100 representing an individual candidate’s fit for a specific job. (To read more about the science behind the original HIRES survey and our use of social media data, see the peer-reviewed paper at Bright Labs.)
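To make the idea concrete, a trained scoring function of this general kind can be sketched as a weighted combination of per-feature match signals squashed into a 0–100 range. This is only an illustrative sketch: the feature names, weights, and the logistic squash below are all invented, not Bright's actual model or learned parameters.

```python
# Hypothetical sketch of a feature-weighted fit score (NOT Bright's actual
# model). Feature names, weights, and the logistic squash are invented for
# illustration; the real Bright Score learned its parameters from HIRES data.
import math

def fit_score(features, weights, bias=0.0):
    """Combine per-feature match signals (each in [0, 1]) into a 0-100 score."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return round(100 / (1 + math.exp(-z)))  # logistic squash into 0-100

# Invented example: three of the many features a resume/job pairing might yield.
weights = {"title_similarity": 2.1, "skills_overlap": 1.8, "experience_fit": 1.2}
candidate = {"title_similarity": 0.9, "skills_overlap": 0.7, "experience_fit": 0.8}
score = fit_score(candidate, weights, bias=-2.0)
```

A real system would learn the weights from labeled resume/job pairings rather than setting them by hand, which is exactly the role the HIRES data played.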
Recently, we completed a second, larger survey to further refine the Bright Score. Here are the details. Send in the graphs!
About the New HIRES Survey
Our updated survey involved 96 talent-management professionals reviewing 41,000 pairs of resumes and job descriptions. After reviewing each pairing, the evaluator would 1) “grade” the candidate’s fit for the described job on a scale from A (top) to F (bottom), or mark the candidate as “overqualified”; and 2) make a “Recommend” or “Do Not Recommend” judgement for the candidate regarding the described position. (Though the letter grade and the recommend/do-not-recommend judgement were two separate decisions, we found that a letter grade of A or B always corresponded to a “Recommend” decision, whereas a letter grade of D or F always corresponded to a “Do Not Recommend” decision. Only the pairings that received C grades varied: some evaluators chose Recommend after giving a C, while others chose Do Not Recommend.)
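The observed relationship between letter grades and recommendations can be restated as a simple lookup, with C grades left undetermined. This is just a hypothetical helper expressing the finding above, not part of Bright's system:

```python
# Restates the observed grade-to-recommendation pattern from the survey
# (hypothetical helper, not Bright's code). C grades varied by evaluator,
# so they map to None ("undetermined") here.
GRADE_TO_RECOMMENDATION = {
    "A": "Recommend",
    "B": "Recommend",
    "C": None,  # evaluators split on C-graded pairings
    "D": "Do Not Recommend",
    "F": "Do Not Recommend",
}

def recommendation_for(grade):
    """Return the recommendation implied by a letter grade, or None if it varied."""
    return GRADE_TO_RECOMMENDATION.get(grade)
```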
In order to build a system that could genuinely improve the recruiting and hiring process, we needed to ensure that the evaluators in our survey fairly represented the general population of recruiters. Here is what our panel of 96 evaluators looked like. (Note: Click on any graph below for the full-size version.)
Figure 1a: Evaluator Gender
Figure 1b: Evaluator Age
Figure 1c: Evaluator Education Level
Figure 1d: Evaluator Professional Experience
Figure 1e: Evaluator Industry
Figure 1f: Time Evaluators Dedicated to Recruiting
More than half our evaluators reviewed an average of at least 50 resumes per week.
Figure 2a: Average Number of Resumes Reviewed Each Week
Half of our reviewers typically gave a “Recommend” judgement to only 1% – 10% of the resumes they reviewed for a given job.
Figure 2b: Typical Passage Rate
The median time our evaluators dedicated to reviewing each resume was 156 seconds, or a little over two and a half minutes. The mean evaluation time was 164 seconds, or roughly two and three-quarter minutes. (Individual evaluators ranged from 35 seconds, on the low end, to 374 seconds — over six minutes — on the high end.)
Figure 3a: Median Resume Review Time by Evaluator
Figure 3b: Average Resume Review Time by Evaluator
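Given per-evaluator timing data, summary statistics like these are straightforward to compute. The sample times below are invented for illustration; only the 156-second median and 164-second mean quoted above come from the actual study.

```python
# Computing summary statistics over per-evaluator review times.
# These values are made-up samples; the study's real figures are the
# 156 s median and 164 s mean reported in the text.
from statistics import mean, median

review_seconds = [35, 120, 150, 156, 170, 200, 374]  # hypothetical sample

median_s = median(review_seconds)  # middle value of the sorted list
mean_s = mean(review_seconds)      # arithmetic average
```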
On average, our evaluators spent more than 27 times as long on each resume as the average recruiter does. Even our fastest individual evaluator spent almost six times as long per resume as the average recruiter.
Resume Scoring and Review Time
We found that, on the whole, resume-job description pairings that received an A letter grade took the least time to evaluate, while those that received a D took the longest, on average.
Figure 4: Average Evaluation Time by Letter Grade
In the next and final post in this series, we'll look at how this new survey data has been put to work matching resumes to job openings on Bright.