How Is Cognitive Testing Conducted for a Quality Report?
In cognitive testing, the interviewer shows respondents a draft of a quality report—in part or whole—that contains either real (with actual names masked) or fictional data that is made to look realistic. As the respondents look through the report, the interviewer prompts them to share their thoughts and impressions.
As with any acquired skill, experience matters, so it is worthwhile to consider hiring someone who is well-versed in doing these types of interviews. Whether you are conducting the cognitive interviews yourself or hiring someone else to do them, here are some important points to help ensure the success of the testing.
Type of Participants
Cognitive testing should be conducted on a sample of individuals who resemble the target audience of your report. For example, if your report is targeted at people who need to select a new primary care provider (PCP), then the participants for your cognitive interviews should be people who have recently selected a new PCP or who may need to do so in the near future. On the other hand, if your report is targeted at a broader population, your cognitive testing participants should represent that breadth.
Number of Participants
You can learn a lot about your report by conducting cognitive interviews with relatively few people. Often one round of testing with as few as 5 to 10 individuals is sufficient to provide useful feedback. The number of interviews needed in any particular round often reveals itself: Once you start hearing the same feedback from participants (i.e., reach a saturation point), it is likely time to stop, regroup, and make changes.
One or more additional rounds of testing with another small group of individuals can also be helpful to explore remaining issues and help you test the changes you made in response to the first round of testing.
Setting the Tone
When starting a cognitive testing session, it is important to put respondents at ease so they feel comfortable sharing their honest opinions. The interviewer should remind participants that there are no correct or incorrect answers to the interview questions and that the purpose of the testing is to learn their opinions and solicit their advice. As an incentive, and to acknowledge the value of their contribution, participants are typically offered a payment of some sort (e.g., cash or gift card).
Collecting the Data
Interviews are usually conducted one-on-one and in person, in a professional office setting where privacy can be maintained. They typically last about 1 hour.
A notetaker should be present so that the interviewer can focus exclusively on conducting the interview. Besides recording responses, it is useful if the notetaker observes and records respondents’ nonverbal behaviors that may offer clues about how they perceive the materials being tested.
To ensure that important data are not missed, it is also a good idea to record the interview, but only with the participant's consent.
Checking on Comprehension
While it’s tempting to gauge comprehension by asking direct questions about how hard or easy it was to understand the report, this approach is not helpful. People often think they understand a display or a sentence when they’ve actually interpreted it incorrectly.
The best way to check on comprehension is to ask people to interpret the information in their own words. That way, you’ll be able to judge for yourself whether they’ve gotten it right, and if they haven’t, you’ll have a good idea of how the misunderstanding came about.
In most situations, you should not tell participants they gave an incorrect answer. Doing so can change the dynamic: the participant may begin to feel that the interview is a test of performance and focus on what the interviewer thinks rather than reacting naturally to the report.
Summarizing the Results
It is useful to organize the summary of a cognitive interview by listing the various elements of the report being tested, including macro-level features such as "page 2" as well as specific elements such as an icon or a data label.
- For each key element of the report, document what the respondent did with it (e.g., noticed it or not, decided not to read it, or read only part way through it) and what the respondent said about it, including how they interpreted it and whether they found it helpful.
- Document the respondent’s comments about the relationships among report elements, such as how the respondent thinks a newly encountered data display is related to one seen earlier.
Knowing When To Stop Testing
Cognitive testing is an iterative process. Report designers often conduct additional rounds of testing to follow up on problems identified in earlier rounds and to evaluate solutions to those problems.
Multiple rounds of testing can be helpful because changes to fix one type of problem sometimes inadvertently cause new problems. Of course, the cycle of testing cannot go on indefinitely. Report designers must use their best judgment and be cognizant of their budget constraints in determining when it is time to stop testing and finalize the report.
Page originally created February 2015