2.9.3 Step 3: Responses
Five-point Likert scales (ranging from strongly disagree to strongly
agree) were chosen to rate most items (n=48) because they provide an
“ambivalent” midpoint and are sensitive and
reliable.35,36 Five-point scales are also familiar to respondents,
which contributes to ease of completion. Four-point scales
with time-based responses (always, often/mostly, sometimes/rarely,
never) were applied to the eight questions in the practice habits
section because these items referred to the frequency of habits related to OBP.
Open-ended questions were included at the end of each survey section to
capture comments that had not been addressed in the scaled
questions.
2.9.4 Step 4: Psychometric property testing
- Preliminary discussions: Two authors (IV, HB) discussed the
  item list and identified redundant items for removal.
- Validation approach: Prompts based on Patrick et
  al.37, Burton and Mazerolle34 and
  Heale and Twycross38 were developed to seek input on
  the draft instrument questions from the expert and peer panels (see
  Table 3). The intent of these prompts was to gather participants'
  views on phrasing, appropriateness of the questions, flow of the
  survey, layout, sequencing and timing, and the intended meanings of
  items, in order to assess whether responses matched the intended
  interpretations and to identify nuances in the understanding of
  terminology and concepts.33
[Place Table 3 here]
- Dissemination: The expert panel members were emailed a cover
  letter, a link to the survey set up on GoogleForms, and a Microsoft
  Word document containing the questions in Table 3 in table format,
  hereafter referred to as the “feedback document”. An
  additional opportunity for comment on each item was provided. The peer
  members completed the draft questionnaire independently in the
  presence of the first author; completion was timed to provide
  information on clinical utility.
- Cognitive interviews: After completing the questionnaire, the
  peers were individually interviewed by the first author, using the
  questions in Table 3 as prompts and concurrent verbal
  probing.39 Cognitive
  interviews34,37,39 are a method of verifying written
  feedback on phrasing, appropriateness of the questions and flow of the
  survey.
- Data collection: The experts in the validation group returned
  the Microsoft Word feedback document by email to the principal author
  (IV), with their feedback addressing each question in Table 3 for each
  item in the draft survey instrument. These responses were aggregated
  with the peer responses into a master document in which respondents
  could not be identified by the second author (HB).
- Researcher oversight: The collated responses were discussed by
  IV and HB to determine actions for each item. Items for which the
  validation group was in complete agreement were accepted “as is”.
  Items with suggested wording changes were modified where the change
  made sense, irrespective of the number of people who suggested it.
  Redundant items were identified and removed to reduce the likelihood
  of non-response due to respondent burden.23,40 Suggestions for
  changes to formatting within the survey instrument were considered,
  and decisions were made based on the capabilities of GoogleForms and
  on respondent burden with regard to questionnaire length and ease of
  use.