Working Things Out

It's just math

Part 1.5: Extending Part 1

Nov 18, 2023

My first blog post ever can be found as Part 1 (here). Some feedback (thank you!) suggests that it will help to reflect a bit further on what that blog was and was not about:

  • First, the Part 1 blog was about a technical (math) clarification for calculating validity differences in selection systems. Although Schmidt & Hunter (1998) correctly calculated utility differences between two selection systems as the difference in multiple R values (ignoring the cost of testing, etc.), they did not calculate validity differences correctly. Validity differences have to be calculated as the difference between R2 values. You can then report that as ∆R2, or take the square root and report ∆R, or do whatever other transformation you would like to make—so long as it is based on this ∆R2 value. My post did not intend to take a stand on what you “should” report; instead, the goal was very modest, to focus solely on how to calculate a validity difference correctly.

    Let me reinforce the point with a specific numerical example: Say that a selection system has a multiple R of .20, and after adding more predictors, the multiple R goes up to .30. The difference in utility is a function of validity and is indeed simply ∆R = .30 − .20 = .10 (S&H had this correct). But .10 is not the validity difference; instead, you have to subtract the respective R2 values: ∆R2 = .09 − .04 = .05. If you then want to report the R metric instead, you would take the square root of .05 to get ∆R ≈ .22 (…and not .10).

    This isn’t new stuff, by the way! (a) In hierarchical linear regression, if you want to report ∆R, you cannot just subtract multiple Rs from each stage of the regression; you would have to calculate ∆R2 and then take the square root. (b) Similarly, if you want the average of two standard deviations, you first must square them (to get variances), then average them, and then take the square root.

  • Second, given that the Part 1 blog was so narrow and “mathy” in scope, it was decidedly not a commentary on two recent and related papers by Sackett et al. (2022, 2023)1 that update the data and the statistical approach of S&H. Those two papers generated numerous responses2, and two Sackett et al. replies to those responses3. As far as I can tell, none of this recent work performs the calculation the Part 1 blog focused on. However, given the timing of these Sackett et al. papers, I probably should have stated that I was not focused on them—so I’ll do that now.
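The arithmetic above can be sketched in a few lines of Python. The numbers mirror the example in the first bullet; the variable names are mine, purely for illustration:

```python
import math

# Multiple R before and after adding predictors (example from the post)
r_small, r_large = 0.20, 0.30

# The utility difference is a function of the simple difference in R
# (this part of S&H, 1998 was correct)
delta_r_utility = r_large - r_small            # 0.10

# The validity difference must be computed on the R-squared scale
delta_r2 = r_large**2 - r_small**2             # .09 - .04 = .05

# To report it back on the R metric, take the square root of delta R^2
delta_r = math.sqrt(delta_r2)                  # about .22, not .10

# Same logic for averaging two standard deviations:
# square them, average the variances, then take the square root
sd1, sd2 = 3.0, 5.0
avg_sd = math.sqrt((sd1**2 + sd2**2) / 2)      # about 4.12, not the simple mean of 4.0
```

The common thread: differences and averages of correlation-type or SD-type quantities have to be taken on the squared (variance) scale first, then transformed back if desired.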

OK, whew! Now in my next post, I will turn to Part 2, “More Adventures in Meta-Analysis,” which will be much more expansive (and less “mathy”) than Part 1 was. Yes, here I will refer to the Sackett et al. papers and responses, but in the context of a broader meta-analysis idea. Please stay tuned!


1

Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107(11), 2040–2068. https://psycnet.apa.org/doi/10.1037/apl0000994

Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2023). Revisiting the design of selection systems in light of new findings regarding the validity of widely used predictors. Industrial and Organizational Psychology, 16(3), 283–300. https://doi.org/10.1017/iop.2023.24

2

Oh, I.-S., Le, H., & Roth, P. L. (2023). Revisiting Sackett et al.’s (2022) rationale behind their recommendation against correcting for range restriction in concurrent validation studies. Journal of Applied Psychology, 108(8), 1300–1310. https://doi.org/10.1037/apl0001078

and 14 replies to Sackett, Zhang, et al. (2023) found here: https://www.cambridge.org/core/journals/industrial-and-organizational-psychology/issue/30929F42437BCF88585C5E2669923D6E

3

Sackett, P. R., Berry, C. M., Lievens, F., & Zhang, C. (2023). Correcting for range restriction in meta-analysis: A reply to Oh et al. (2023). Journal of Applied Psychology, 108(8), 1311–1315. https://doi.org/10.1037/apl0001116

Sackett, P. R., Berry, C. M., Lievens, F., & Zhang, C. (2023). A reply to commentaries on “Revisiting the design of selection systems in light of new findings regarding the validity of widely used predictors”. Industrial and Organizational Psychology, 16(3), 371–377. https://doi.org/10.1017/iop.2023.47

© 2023 Fred Oswald