Extended Reading

Guerra-López, I., & Leigh, H. N. (2009). Are performance improvement professionals measurably improving performance? What PIJ and PIQ have to say about the current use of evaluation and measurement in the field of performance improvement. Performance Improvement Quarterly, 22(2), 97-110.

In their study, Guerra-López and Leigh examine the state of evaluation in the contemporary performance improvement literature, assessing the thoroughness of and attention given to evaluation by performance improvement professionals. Based on their findings, they argue that evaluation, despite being critical to the field, is underrepresented in two of the field’s most prominent academic journals: Performance Improvement Quarterly and Performance Improvement. Their study, intended for practitioners and researchers alike, examines articles published in these two journals to determine how often topics surrounding performance improvement evaluation are mentioned and written about at length. Simply put, their findings support their claim that evaluation is not being taken as seriously as it should be in the literature.

The claims discussed in this work shed light on a critical aspect of work being done in the field: evaluation. Although human performance technologists are aware of evaluation and its merits, the depth of evaluation necessary to appropriately measure an intervention’s success is not always being carried out. The absence of this discussion in the literature, as the authors make apparent, demonstrates not only a professional unawareness but an outright disregard for one of the most prominent ways human performance technology professionals can understand their shortcomings and, perhaps more importantly, sell their interventions to those they work with. By not producing measurable results and discussing those results in the literature, the field loses both credibility and sustainability.

Missing from their discussion is much mention of practice. While the literature should, ideally, reflect practice, the study says little about the direct work actually being completed. Speaking directly with professionals and examining the evaluative tools those professionals use would offer a far more direct examination of the evaluation issue being discussed. The authors instead approach the issue secondhand, through the literature itself; given the depth of literature available, a more direct approach may be logistically impractical, but it would provide the authors with a much more direct source of information.