Technical Report: Interpreting Applicability Scores

ABSTRACT
“Check-all-that-apply” (CATA) lists are a popular tool in product tests [1,2,3]. In a typical test, consumers respond to a series of statements and mark those that apply to the product of interest. An advantage of CATA testing is that it can elicit information from consumers that would in some cases be difficult to obtain using either a rating or 2-AFC format. A related method, explored by Loh and Ennis [4] in 1982, is called applicability scoring. In applicability scoring, consumers respond to every statement, marking each as either applicable or not applicable. In a CATA list, an unmarked item may mean that the consumer does not think the item applies, but it could also mean that the consumer simply missed the item; applicability scoring avoids this ambiguity. This report discusses how to analyze and interpret applicability scores. It provides guidance on the analysis of applicability counts to test a null hypothesis of no difference and discusses the scaling of applicability data using a Thurstonian model. One application of particular interest is the comparative evaluation of two products on liking.
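As a rough illustration of the kinds of analyses the report covers, the sketch below compares hypothetical applicability counts for a single statement across two products. The counts are invented, and the 2x2 chi-square test of the no-difference null hypothesis and the equal-variance probit (Thurstonian-style) transform of the proportions are illustrative assumptions, not necessarily the report's exact procedure.

```python
# Minimal sketch (not the authors' code): comparing applicability counts for one
# statement on two products, then expressing the difference on a Thurstonian-style scale.
from scipy.stats import chi2_contingency, norm

# Hypothetical counts: rows = products, columns = (applicable, not applicable)
counts = [[62, 38],   # Product A: 62 of 100 consumers marked the statement applicable
          [45, 55]]   # Product B: 45 of 100 did

# Test the null hypothesis of no difference in applicability between the two products
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# Thurstonian-style scaling (equal-variance assumption): map each proportion
# "applicable" to a z-score, so the product difference is in standard-deviation units
p_a = counts[0][0] / sum(counts[0])
p_b = counts[1][0] / sum(counts[1])
delta = norm.ppf(p_a) - norm.ppf(p_b)
print(f"estimated scale difference = {delta:.3f}")
```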

This technical report appears as:
Ennis, D. M. and Ennis, J. M. (2011). Interpreting Applicability Scores. IFPress, 14(4), 3-4.

This technical report also appears in our book, Tools and Applications of Sensory and Consumer Science.