Surveys & Assessments: Questionnaire design

This guide offers an overview of how to construct a questionnaire and provides examples of how different types of questions can be structured.

  1. General Guidelines
  2. Education Assessments
  3. Sample Questionnaires

1. General Guidelines on Designing a Questionnaire

Wording of questions should use language that is appropriate to the target audience and that is as simple and clear as possible, avoiding any ambiguity, double meaning, jargon or slang.

Questions should also avoid including assumptions or leading a respondent into giving a specific answer, which can lead to bias. For example:

  • “What made this workshop excellent?” assumes that the workshop was excellent, when it may not have been
  • “How useful was the workshop?” is also a slightly leading question, implying that “entirely useless” is not a possible response
  • “Did you find the workshop useful?” or “How useful or not useful did you find the workshop?” do not make any assumptions and do not lead the respondent into overstating its utility.

Do not include two questions in one (e.g. “Did you find the presentations and written materials useful?” should be broken down into two questions).

Introduction

Provide a brief overview of who is delivering the survey or questionnaire and why. Give respondents information about how the data they provide will be used and how their privacy and anonymity will be protected (including any limits to that protection).

Identifier

Every questionnaire should open with a space for a unique identifier and should NOT include a field for entering the participant’s name. The survey administrator (or webpage) should allow for the participant’s name to be entered and matched to the identifier on a separate form:

Survey_ID:  ___________________
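
One way to produce such identifiers is to generate them randomly and keep the name-to-ID register on a separate, access-controlled form or file. The sketch below is illustrative only — the ID length, alphabet and register structure are assumptions, not requirements of this guide:

```python
import secrets

def generate_survey_id(length: int = 8) -> str:
    """Generate a random survey identifier, e.g. 'K7PQ2M4X'.

    Uses an alphabet without ambiguous characters (0/O, 1/I) so IDs
    are easy to transcribe from paper forms. Length and alphabet are
    illustrative choices.
    """
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# The name-to-ID mapping lives on a SEPARATE form/file, never on
# the questionnaire itself (hypothetical in-memory register here).
id_register = {}

def register_participant(name: str) -> str:
    survey_id = generate_survey_id()
    id_register[name] = survey_id
    return survey_id
```

The survey administrator would print only the ID on the questionnaire and store the register where respondents’ answers cannot be linked to it.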

Respondent Characteristics

Questionnaires generally open with short questions capturing relevant information about who the respondent is. Even when questionnaires are anonymous, some respondents may prefer to withhold some personal information. It is therefore best to include a ‘Prefer not to say’ option.

Most respondent characteristics can be measured using categorical response options. For example:

Q1. What is your gender?
☐ Woman  ☐ Man   ☐ Other _______________     ☐ Prefer not to say

Q2. How old are you?
______ years                                   ☐ Prefer not to say

Q3. Are you currently employed?
☐ Yes, full-time           ☐ Yes, part-time   ☐ No      ☐ Prefer not to say

Q4. Do you experience any of the following:
4A. ☐ Visual impairment(s)         ☐ No      ☐ Prefer not to say
4B. ☐ Auditory impairment          ☐ No      ☐ Prefer not to say
4C. ☐ Learning difficulties        ☐ No      ☐ Prefer not to say
4D. ☐ Mobility impairment          ☐ No      ☐ Prefer not to say
4E. ☐ Mental health problems       ☐ No      ☐ Prefer not to say
4F. ☐ Chronic illness              ☐ No      ☐ Prefer not to say

2. Questionnaire Design for Education Assessments

This section focuses on how to design written assessments that measure the acquisition of educational knowledge or skills. This form of assessment is extremely important for projects that seek to improve scientific understanding or skills.

Motivation

Assessments of this kind serve two main purposes:

  1. Check students’ prior knowledge
    It is very important to know where students are ‘starting from’. In formal education settings this is usually manageable, because professors have in-depth knowledge of previous class content and education qualification requirements.
    In cross-disciplinary classes, summer schools, extra-curricular settings and international exchange programmes, however, this may not apply. There is then a high risk that some of the students will be missing key vocabulary, concept knowledge or skills needed to understand what you are trying to teach them. Wherever this is the case, there is a high risk of doing more harm than good: students are bad at recognising when they are missing a ‘building block’ and are more likely to conclude instead that they lack the ability or aptitude to learn the subject. Avoid this by testing students before you teach them and checking whether they know everything they need to. You then have a chance either to adapt your materials or to spend extra time with specific students to ‘bridge the gap’ and prepare them.
  2. Check whether students have learned what you expected them to learn
    Projects that seek to provide education or training should always check whether students have acquired the intended educational knowledge and/or skills. If you don’t measure these directly, you have no basis for claiming that the project was a success. This is why nearly all reputable universities and educational establishments around the world use some form of assessment to measure educational progress. Assessments of this kind also give students evidence of what they have achieved.
    Unfortunately, however, very few Astronomy summer schools and extra-curricular programmes use any form of assessment. This deprives students of the opportunity to both experience and demonstrate their achievements, and deprives projects of the ability to show others that they achieved what they intended. It also prevents projects from learning from each other about which techniques and strategies are most successful for teaching specific concepts or skills.

Multiple Choice Formats

Having hopefully convinced you of the importance of using educational assessments, we now turn to the format. The OAD suggests that projects use multiple choice assessments. This form of assessment has limitations but it is widely used in education and international development for a number of reasons, which are explained further below.

Multiple choice tests are assessments made up of questions which students respond to by selecting the correct answer from a list of possible answers. Incorrect options are called distractors. These are designed to be attractive to students who have not really understood the concepts you hope they have learned. The goal is to find out where each student lies on a continuum from being completely unfamiliar with the key course concepts to having full mastery of them. Students who understand the material should recognise the correct answer, while those who have not learned the material should be unable to.

Why Use Multiple Choice Tests?

Depending on how questions are constructed, multiple choice tests (MCTs) can be used to measure conceptual understanding, content knowledge, skills or a combination of all three. MCTs can also complement other forms of assessment (such as open questions, essays, projects and so on).

Advantages of MCTs are that they:

  • can test a very wide range of key concept knowledge and applied skills, since many more questions can typically be asked within a single test
  • can prevent students from hiding gaps in their knowledge by selectively presenting only material they feel confident about or using ambiguous wording.
  • eliminate inconsistencies between marks awarded by different graders
  • can be automatically scored and graded, allowing for efficient large-scale data collection
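
The automatic scoring mentioned above can be as simple as comparing each response to an answer key. A minimal sketch (the question IDs, key and responses are made up for illustration):

```python
def score_mct(answer_key: dict, responses: dict) -> float:
    """Return the fraction of items answered correctly.

    `answer_key` maps question ID -> correct option;
    `responses` maps question ID -> the student's chosen option.
    Unanswered items count as incorrect.
    """
    correct = sum(
        1 for q, ans in answer_key.items() if responses.get(q) == ans
    )
    return correct / len(answer_key)

# Hypothetical 4-item test:
key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
student = {"Q1": "B", "Q2": "D", "Q3": "C"}  # Q4 left blank
print(score_mct(key, student))  # 2 of 4 correct -> 0.5
```

Because the key is fixed, every grader (or script) produces identical marks, which is exactly the consistency advantage noted above.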

Although MCTs reduce ‘noise’ in the grading process, poorly designed questions mean an MCT does not provide a valid measure of student knowledge or understanding. Ideally, MCT assessments (particularly those used for research purposes) are tested and validated before being used. For example, each item is ideally tested to see whether it is:

  • Externally valid: students who have mastered a concept or skill answer the item correctly, while those who have not mastered the concept/skill do not (i.e. do no better than a random choice).
  • Internally valid: students who answer an item on one concept are able to answer other items (with different wordings/context) which test the same concept.
  • Reliable: the wording is clear enough that individuals provide the same answer when tested repeatedly (without further instruction between tests). E.g. if you test a student on Monday morning and again on Monday afternoon, they will provide the same answers.
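
One common informal check on the first property is an item discrimination index: split students into top and bottom halves by total test score and compare how often each half answers the item correctly. The sketch below is a rough illustration with invented data; properly validated instruments use more formal item-analysis statistics:

```python
def discrimination_index(item_correct, total_scores):
    """Simple discrimination index for one item.

    Ranks students by total test score, then returns the proportion
    of the top half answering the item correctly minus the proportion
    of the bottom half doing so. Values near zero (or negative)
    suggest the item fails to separate students who have mastered
    the material from those who have not.
    """
    ranked = sorted(zip(total_scores, item_correct), key=lambda p: p[0])
    half = len(ranked) // 2
    bottom = ranked[:half]
    top = ranked[-half:]
    p_top = sum(c for _, c in top) / half
    p_bottom = sum(c for _, c in bottom) / half
    return p_top - p_bottom

# Invented data: 8 students' total scores and their result on one item.
scores = [9, 8, 7, 6, 4, 3, 2, 1]
item = [True, True, True, False, True, False, False, False]
print(discrimination_index(item, scores))  # 0.75 - 0.25 -> 0.5
```

An item that everyone (or no one) answers correctly scores zero here: it may still be fine for teaching, but it tells you nothing about where students sit on the mastery continuum.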

Tips for Writing Questions

  • Every item should reflect specific content
  • Base each item on important content to learn; avoid trivial content.
  • Keep the content of each question independent from the content of other questions
  • Try to use novel material or context to test higher level learning; if you use exactly the same context/example or language as you did in your lecture/slides, there is a risk that you are simply testing recall rather than understanding.
  • Avoid opinion-based or trick questions; use as few words as possible while ensuring that the question is clear
  • Word the question positively wherever possible, avoiding negatives such as NOT or EXCEPT
  • Include the central idea in the actual question rather than in the answer options
  • Items can also be statements to which students must find the best completion

Tips for Writing Answer Options

  • Offer at least three options and no more than five
  • Make sure that only one of the choices is the right answer and that the options do not overlap
  • All options should be as close to equal in length as possible. Longer or shorter options tend to attract undue responses because they stand out visually
  • All the distractor options should be plausible. Ideally, use your knowledge of common student errors/misunderstandings to think of plausible distractors. If you know, for example, that students often miss a step in a calculation, include a distractor that would result from that miscalculation.
  • Phrase choices positively; avoid negatives such as NOT; avoid “all of the above” and “none of the above”
  • Several studies have documented common misconceptions about science concepts amongst the public and undergraduates. Building these into multiple-choice distractors is a possibility; if you would like to do so, please write to the OAD and we will send you relevant references.

3. Sample Questionnaires

Download an example of an educational assessment in DOC or PDF.