Pre-employment selection tests can be valuable tools, providing vital information about candidates’ applicable knowledge, skills, and abilities before they are offered employment. Ideally, this information will save you time and money by increasing the likelihood that candidates will perform well and stay on the job. To reap these rewards, however, you must carefully weigh your assessment options and choose wisely. Asking the following critical questions is key to identifying a test that will help you select and keep the right people.
Question #1: How was the test developed?
When assessing the test development process, focus on the steps taken and the people involved.
What steps were used to establish the content, items, scoring criteria, cut scores, and score reporting? Each step should provide evidence that the test relates to the requirements of the job. For example, did the test developers work with content experts to establish the knowledge, skills, and abilities needed for job performance as the basis for the test content? Additionally, ask the vendor what industry standards they followed when establishing their process and whether their test is third-party reviewed.
An often overlooked question is, “What are the qualifications of those involved in the development of the test?” It is essential to verify that both technical experts and content experts contributed. Imagine you want to buy a test to evaluate language proficiency and you learn that the test was developed using proper steps, but the content experts were mathematicians with no experience in language development or assessment. The test is unlikely to reflect the skills necessary for the work and, thus, unlikely to effectively, or legally, assist with selection.
Question #2: How is the test administered and scored?
Ask the vendor about test administration and scoring. Is the test delivered in person or online? Is the test unproctored, proctored in person, or proctored remotely? Depending on your business needs, these could be deal-breaker issues. Regardless, do not compromise on the qualifications of those administering and scoring the test. For example, if this is an interactive test, ensure that the assessors are carefully trained to score responses reliably. In addition to inquiring about examiner and assessor training, ask if they are certified and if there is refresher training or re-certification.
Question #3: How does the test perform?
It is essential to get a sense of how the test performs overall. For example, ask about item functioning, reliability, and validity. Have the test developers demonstrated consistency within the test, across administrations, and across assessors? Have they shown that the test measures what it is designed to measure, predicts outcomes of interest (e.g., job performance, customer service, turnover), and relates to cost savings? Additionally, ask about typical pass rates. Knowing typical pass rates will be particularly helpful after you decide whether you want the test to screen in exceptional candidates or screen out deficient candidates.
Question #4: How is the test maintained?
Test maintenance is another often neglected topic. It is crucial, however, to ensure that work on the test continues beyond initial development. How do the test developers stay up to date on changes to the relevant jobs and skills? How do they monitor changes in how the test performs and make updates to the assessment? Do they track and address item exposure? What is the process for dealing with candidates’ challenges to test items or results?
Getting the answers to these questions will help you make the difficult decision about which selection test will best serve your needs. Look for test developers who use technical experts, content experts, industry best practices, and qualified examiners and assessors. Evidence of strong test performance, such as reliability and validity, should be well documented; that evidence is essential to effective selection.
Dr. Tara Myers
Ph.D. Industrial and Organizational Psychology