Performance-based tests have a substantial impact on the role of the psychometrician. They require skills in analyzing qualitative data, facilitating group discussions, applying pragmatics, and correlating variables. But let me start by defining psychometrics and describing what psychometricians do.
Psychometrics is the field of study concerned with the theory and technique of psychological measurement of unobservable factors, such as knowledge, attitudes, and decisions, as well as observable factors like physical behaviors. So psychometricians help construct and validate the instruments used to measure those factors.
The work begins with the analysis of quantitative and qualitative data from surveys, interviews, and focus groups collected during the job task and practice analysis phase to identify the knowledge, skills, attitudes, and task behaviors and to determine their importance and criticality. The results are used to create domains and standards as well as to construct assessment instruments (such as written multiple-choice and performance-based tests). Psychometricians also:
- Analyze test data to set pass scores,
- Determine the equivalency of instruments (pre- & post-tests, and multiple forms of a test),
- Determine whether the instruments discriminate appropriately, that is, whether people who can do the task pass and those who can’t fail (see the sketch after this list), and finally,
- Answer the question: does it matter in the “real world”?
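To make the first three tasks concrete, here is a minimal sketch in Python. The scores, cut score, and function names are invented for illustration; they are assumptions, not the author’s method or any particular tool.

```python
# Minimal sketch (invented data) of three routine checks a psychometrician runs:
# pass rate against a cut score, a rough equivalence check between two test forms,
# and whether results separate people who can do the task from people who cannot.
from statistics import mean

form_a = [88, 92, 75, 81, 95, 70, 84]   # hypothetical percent-correct scores, Form A
form_b = [86, 90, 78, 80, 93, 72, 83]   # hypothetical percent-correct scores, Form B
cut_score = 80                          # pass score set from the analysis, not a curve

def pass_rate(scores, cut):
    """Share of candidates scoring at or above the cut score."""
    return sum(s >= cut for s in scores) / len(scores)

# 1. Pass rates against the cut score
print(f"Form A pass rate: {pass_rate(form_a, cut_score):.0%}")
print(f"Form B pass rate: {pass_rate(form_b, cut_score):.0%}")

# 2. Crude equivalence check: do the two forms yield similar average scores?
print(f"Mean difference between forms: {mean(form_a) - mean(form_b):.1f} points")

# 3. Discrimination: do known masters pass and known non-masters fail?
masters_passed = [1, 1, 1, 0, 1]        # 1 = passed, among people who can do the task
non_masters_passed = [0, 0, 1, 0, 0]    # 1 = passed, among people who cannot
print(f"Pass-rate gap, masters vs. non-masters: "
      f"{mean(masters_passed) - mean(non_masters_passed):.2f}")
```

In real work each of these checks would be paired with judgment from subject-matter experts; the point is simply that the arithmetic itself is straightforward to automate.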
Psychometrics Applied to Performance-Based Tests
On the surface, the work looks the same; however, in practice there are a few key differences when psychometrics is applied to performance-based tests. One difference is the shift from norm-referenced to criterion-referenced measurement. Performance tests are criterion-referenced; they measure whether people can do the task or not. It doesn’t matter how they compare to others. With performance tests, everyone can pass if they can do the task to standard. There is no interest in “grading on the curve” or in the upper or lower quartile of results.
Next, in performance-based tests, the standard is set by knowledgeable vested parties, not by the opinions of most practitioners or by the distribution of test scores, as in norm-referencing. The concern is the consequences of doing a task incorrectly or incompletely. There is no 70% right, only 100% right. We give diplomas to students who averaged 70% in their grades, but nobody wants to ride in a plane piloted by someone who has mastered only 70% of the skills of piloting. In the software world, you want code that works, an app that works, and interfaces that work (all with zero errors).
Another difference is the expectation that test results correlate with some larger goal, such as whether people who pass perform work faster, more accurately, or at lower cost; engender higher customer satisfaction and confidence scores; or create more profitable products that enter the marketplace faster, and so forth.
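One common way to test that expectation, assuming you can pair pass/fail results with an external measure, is a point-biserial correlation. The sketch below uses made-up customer-satisfaction scores purely to show the calculation; the data and function name are illustrative assumptions, not the author’s.

```python
# Illustrative (made-up) data: does passing the performance test track an outcome
# the organization cares about, such as a customer-satisfaction score?
from statistics import mean, pstdev

passed = [1, 1, 0, 1, 0, 1, 1, 0]                   # 1 = passed the performance test
csat   = [4.6, 4.8, 3.9, 4.4, 3.7, 4.7, 4.5, 4.0]   # hypothetical satisfaction scores

def point_biserial(binary, metric):
    """Pearson correlation between a 0/1 variable and a continuous measure."""
    mb, mm = mean(binary), mean(metric)
    cov = mean((b - mb) * (m - mm) for b, m in zip(binary, metric))
    return cov / (pstdev(binary) * pstdev(metric))

print(f"Correlation between passing and customer satisfaction: "
      f"{point_biserial(passed, csat):.2f}")
```

A positive correlation by itself does not prove the test matters in the real world, but it is the kind of evidence the question calls for.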
How Performance-Based Tests Affect Psychometricians
These differences change the role of the psychometrician, because they require skill and experience in analyzing qualitative data, facilitating discussions to help groups achieve consensus, and applying pragmatics.
Pragmatics means dealing with problems in specific situations in a reasonable and logical way instead of depending on ideas, theories, or measurement. Leaders want solutions that make sense and follow sound, deductive reasoning. They want products that solve real problems that customers care about.
Psychometricians may not be experienced with criterion-referenced tests, designing studies that correlate test results with external measures, evaluating qualitative data from interviews and focus groups, or facilitating discussions about the pragmatics of an instrument and what it measures. The technical part of psychometric work, such as calculating the level of difficulty of a question or the percentage of people who answered a question correctly, can be automated. Perhaps more importantly, some standard analytical techniques don’t work with criterion-referenced tests because the tests often have much less variance.
For example, if everyone performs a step perfectly, the corresponding item has no discriminating power and would normally be removed from a multiple-choice test. But if that step produces an important intermediate work product of the job task, you might still want to observe it in a performance-based test. Instead, the psychometric role requires greater skill in facilitation and in navigating the politics.
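As a concrete illustration of both points, the sketch below computes classical item difficulty (the proportion who got an item right) and a corrected item-total discrimination index for a small, invented response matrix. The zero-variance item would be flagged for removal on a knowledge test but might be kept on a performance checklist; the data are assumptions for illustration only.

```python
# Invented item-response matrix: rows are candidates, columns are items;
# 1 = performed/answered correctly, 0 = did not.
from statistics import mean, pstdev

responses = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
]
totals = [sum(row) for row in responses]

for j in range(len(responses[0])):
    item = [row[j] for row in responses]
    rest = [t - i for t, i in zip(totals, item)]   # total score excluding this item
    difficulty = mean(item)                        # classical item difficulty (p-value)
    if pstdev(item) == 0:
        # Everyone got it right: no variance, so no discriminating power.
        # Drop it from a multiple-choice test; keep it in a performance checklist
        # if the step yields an important intermediate work product.
        print(f"item{j + 1}: difficulty={difficulty:.2f}, no variance (zero discrimination)")
    else:
        # Corrected item-total (point-biserial) correlation as a discrimination index
        mi, mr = mean(item), mean(rest)
        cov = mean((i - mi) * (r - mr) for i, r in zip(item, rest))
        disc = cov / (pstdev(item) * pstdev(rest))
        print(f"item{j + 1}: difficulty={difficulty:.2f}, discrimination={disc:.2f}")
```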
When seeking the services of psychometricians, ask about their experience creating and validating criterion-referenced tests; facilitating dialogue to reach consensus on the factors used to judge sufficiency or on the pragmatics of using test results; evaluating qualitative data; and engaging stakeholders in collecting data that can be correlated with test results.
One Final Thought on Psychometrics
“The design of any evaluation requires technical, analytical, and political skills to balance the technical and pragmatic considerations needed to answer the evaluation questions. The capable application of these skills creates an evaluation that is technically rigorous, analytically relevant, and politically feasible.” (Tarek Azzam, Michael Szanyi)
About the Author
This week’s article is by guest author Judith Hale, Ph.D., CPT, CACP, a noted expert, writer, and proponent of performance-based assessment. Judith is the CEO of the Center for International Credentials and has had the privilege of working in the public and private sectors across all industries for more than 30 years. During that time, she’s written nine books on performance improvement, credentialing, and evaluation. Writing has given her the opportunity to codify and share her thinking and experiences with colleagues. She’s also become increasingly specialized, focusing on performance-based credentials that demonstrate meaningful change.
To learn more about trends in training and testing, register for the 4th Wednesday with Judy, a free monthly webinar, or join the Hale Center [www.HaleCenter.org] to get access to proven tools and guidelines. You can also join Judy and Lenora Knapp, Ph.D., at the ISPI Institute on the Future of Work, August 3 & 4 at American University in DC, to learn more about future forecasting, the implications of Gen Z for work, the increasing reliance on and impact of Artificial Intelligence, and the demand for a more fluid workforce.