
Questionnaires

Questionnaires are one of the most common and popular tools for gathering data from a large number of people. A well-designed questionnaire can be a powerful tool to inform your evaluation, while a poorly designed questionnaire can make life difficult both for those who have to complete it and those who have to analyse the data.

You may see the word survey used as a substitute for questionnaire, but surveys refer to a broader range of methods for collecting information from a group of people. Surveys include questionnaires and semi-structured interviews.

 

TYPES OF QUESTIONNAIRES

Post-activity questionnaires

Post-activity questionnaires generally consist of a limited number of questions that ask participants to rate the effectiveness of various aspects of the activity (eg. workshop). The focus of the questions should reflect the key evaluation questions and the related monitoring questions that you have identified in your M&E plan. Asking questions that do not relate to your monitoring questions is a waste of time and effort, and may also impact your response rate if participants perceive the questionnaire to be too long.

Post-activity questionnaires tend to be short in order to reduce the amount of time respondents need to complete them, and therefore increase the response rate. Questions tend to be quantitative and generally consist of close-ended questions (tick the box, or scales). You can also include open-ended questions but it is best to limit these in order to make data analysis and reporting easier.
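As a small illustration of why close-ended questions make analysis and reporting easier, the sketch below (plain Python, with made-up responses to a hypothetical "How useful was the workshop?" question) tallies tick-the-box answers directly; open-ended answers would first need to be read and coded by hand.

```python
from collections import Counter

# Hypothetical responses to a close-ended post-activity question
# (made-up data, for illustration only).
responses = [
    "Very useful", "Useful", "Very useful", "Not useful",
    "Useful", "Very useful", "Useful", "Useful",
]

# Close-ended answers can be tallied directly, with no manual coding step.
counts = Counter(responses)
total = len(responses)

for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")
```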

Post-activity questionnaire template

A template providing examples of post-activity questions using different answering scales is available here.

A post-activity questionnaire template is available here. You can modify this to suit your needs.

The University of Wisconsin Program Development and Evaluation has a guide on Collecting Evaluation Data: End-of-Session Questionnaires.

 

An example of a post-conference questionnaire is available here. This questionnaire uses a 7 point scale and was delivered online following the Show me the Change conference in May 2010. The results of the questionnaire are available here.

 

Pre-then-post questionnaires

Questionnaires can be given to participants before and after an intervention (pre and post) in order to compare their behaviours, practices, and household fittings and appliances. This requires participants to complete two questionnaires if you want an individual comparison between pre and post intervention. In many cases, some participants will complete only the pre questionnaire and others only the post questionnaire, which leaves you to either use a very small matched sample, or compare the average results before and after. This can be an important consideration if the number of participants is small.
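The difference between matched and unmatched respondents matters for analysis. The sketch below (plain Python, with made-up ratings on an assumed 1–5 scale) contrasts the two situations described above:

```python
from statistics import mean

# Hypothetical 1-5 ratings (made-up data, for illustration only).

# Case 1: matched respondents - the same people answered pre and post,
# so each individual's change can be calculated.
pre_matched = {"p01": 2, "p02": 3, "p03": 2, "p04": 4}
post_matched = {"p01": 4, "p02": 3, "p03": 3, "p04": 5}
changes = [post_matched[p] - pre_matched[p] for p in pre_matched]
print("Average individual change:", mean(changes))

# Case 2: unmatched respondents - different (or unidentified) people
# answered each questionnaire, so only group averages can be compared.
pre_any = [2, 3, 2, 4, 3]
post_any = [4, 3, 5, 4]
print("Average before:", mean(pre_any), "Average after:", mean(post_any))
```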

Retrospective post-then-pre questionnaire

The retrospective post-then-pre questionnaire design overcomes some of the constraints of pre-post designs as it is implemented at only one point in time, thereby reducing the possibility of questionnaire fatigue. A benefit of the post-then-pre design is that participants answer both the post and the pre questions with the same knowledge and understanding of the issue, thereby reducing the possibility of response shift bias.


Response shift bias occurs when participants’ understanding of an issue is changed by an intervention. This leads them to interpret and answer the post-questionnaire differently from the way they understood the pre-questionnaire.


One constraint of the post-then-pre design is that it may lead participants to answer with a social desirability bias, where they feel that they need to report a change or improvement to meet the evaluator’s expectations or to demonstrate that they are meeting a social norm. It is important to note that social desirability bias is relevant to all questionnaire designs.


An example of how to write a post-then-pre question is presented below. It is important to provide clear instructions for how participants are to answer the questions.


A post-then-pre question

On the left hand side of each statement, please tick the response that best represents your practices NOW, having taken part in the project; then, on the right hand side, tick the response that best represents your practices prior to your participation.

[Example image: retrospective post-then-pre question]
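As a rough analysis sketch (plain Python, with made-up responses on an assumed 4-point practice scale), each record from a post-then-pre question already holds both the "now" and the retrospective "before" answer, so the reported shift can be summarised without matching two separate questionnaires:

```python
from statistics import mean

# Hypothetical post-then-pre answers on a 4-point practice scale
# (1 = never ... 4 = always). Each record holds both answers from
# the one questionnaire, so no matching step is needed.
# Made-up data, for illustration only.
records = [
    {"now": 3, "before": 1},
    {"now": 4, "before": 2},
    {"now": 3, "before": 3},
    {"now": 2, "before": 1},
]

shifts = [r["now"] - r["before"] for r in records]
print("Average reported shift:", mean(shifts))
print("Respondents reporting improvement:", sum(s > 0 for s in shifts), "of", len(records))
```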

 

For more information on retrospective post-then-pre questionnaires, visit the University of Wisconsin Program Development and Evaluation Quick Tips and scroll to the bottom (Resources 27-30).

 

Post-project questionnaire

Post-project questionnaires generally consist of a limited number of questions that ask participants to self-report on the changes they have undertaken or undergone as a result of taking part in a project.

A post-project questionnaire may be used on its own to ask participants what changes they have made as a result of their participation. This type of questionnaire is similar to the retrospective post-then-pre design, except that it does not require participants to articulate their pre-intervention knowledge, attitudes and behaviours. The constraint of using only a post-project questionnaire is that there is a risk of social desirability bias in the answers and you cannot be certain of the validity of the baseline state.


An example of a post-project question is provided below.

[Example image: post-project question]

When to use each design

Pre-then-post test design: may be most appropriate in larger projects that have significant evaluation resources available, in order to ensure that participants are motivated (eg. incentives provided) to respond to two questionnaires.

Post-then-pre designs or post-questionnaires: may be most useful when you have a smaller number of participants, and the evaluation resources are limited.

 

QUESTIONNAIRE LAYOUT & DESIGN

A short tutorial on developing questionnaires (opens in PowerPoint): How to develop a questionnaire presentation

 

Lay-out

You may not judge a book by its cover, but many people will judge a questionnaire by its layout. It is important to make the appearance of the questionnaire appealing. Things to consider include:

  • Making sure the layout does not look cluttered. Use adequate spacing between questions.
  • Ensuring the questions are numbered and presented in a logical sequence. Group questions by topics or themes.
  • Starting with easier or less controversial questions and finishing with more personal questions, including demographic details such as age and income.
  • Using larger or bold font to attract attention to headings or instructions.
  • Using shading or colour schemes to group similar questions.

The form in which the questionnaire is delivered (mail, phone, web, face-to-face) will impact on the design of the questionnaire.

Piloting the questionnaire with people that are not involved in the design will provide valuable feedback.

 

Length

The shorter the better. Most people are time-poor, and a long questionnaire risks limiting your response rate. A well-designed questionnaire should not take more than 10 minutes to complete. If you need to have a long questionnaire, you may want to consider incentives for participants to respond.

Remember to only ask the questions that you NEED to have answered. Refer back to your monitoring and evaluation plan to help guide the selection of questions. A question selection and justification table may help you decide what to ask and what to leave out.

The more questionnaires you are asking your target group to complete, the more important it is to ensure that they are succinct. You may be able to get away with a longer questionnaire if you are only asking your target group to complete one.

One way to increase your response rate for a longer questionnaire is to allow time after an activity for people to complete it then and there. This avoids the barriers faced with mail surveys. A raffle prize for completed questionnaires may be used as an added incentive.

Invitation and instructions

The questionnaire should clearly outline why you want people to take part, and the importance of their participation. A strong invitation that provides respondents with a sense that their answers and opinions are valued and respected should increase the response rate.
It is important that you have clear instructions at the start of the questionnaire that explain:

  • the purpose of the questions
  • who the information is for and how it will be used
  • the confidentiality of the answers
  • deadlines for completion
  • instructions for returning the questionnaire if it is a mail-out.

It is also important to have clear instructions as to how to answer questions. Instructions need to be provided at the start of each new section that uses a different answering format or response scale. Things to consider in instructions include whether you want respondents to:

  • select one answer only
  • select all that apply
  • rate the answers
  • provide a written answer.

 

Wording of questions

The wording of questions is critical in ensuring you obtain the information required to answer your evaluation questions. This includes:

  • using language that is appropriate to the audience
  • using clear, simple questions that avoid ambiguity, double meanings, and jargon
  • avoiding leading questions that can lead to bias.

Questions can fall into the following categories:

Factual questions

Factual questions are those that can be verified in some way. These include demographic data, as well as one-off actions such as installing items like solar hot water, insulation and the like. Some behaviours that are observable could also be included here.

Non-factual questions

Non-factual questions gauge respondents’ knowledge, beliefs, attitudes, and opinions. Self-reporting of non-observable behaviours may also be considered in this category.

Open-ended question

A question where the respondent writes their own answer. This allows respondents to think about the question, provide suggestions, or show their knowledge, but the responses are harder to analyse.

Close-ended question

A question where the respondent has to select their answer from the range of responses provided. The range of responses can vary from two-option answers, to ratings, ranking, or statements.


An anecdote about the importance of wording
From Bradburn, N., Wansink, B. & Sudman, S. (2004) Asking Questions: The Definitive Guide to Questionnaire Design, page 4

Two priests, a Dominican and a Jesuit, are discussing whether it is a sin to smoke and pray at the same time. After failing to reach a conclusion, each goes off to consult his respective superior. The next week they meet again. The Dominican says, “Well, what did your superior say?” The Jesuit responds “He said it was all right.” “That’s funny,” the Dominican replies, “my superior said it was a sin.” Jesuit: “What did you ask him?” Reply: “I asked him if it was all right to smoke while praying.” “Oh,” says the Jesuit, “I asked my superior if it was all right to pray while smoking”.

Tips on Wording Questions

 

Avoid double-barrelled questions

Eg. “Do you take action to save water and energy?” should be broken down into two questions.

Be specific about the subject of questions

Eg. “Do you take action to save water?” can be broken down to specific actions, such as shorter showers, using greywater etc.

The same applies to attitudes and opinions: be as specific as possible in order to obtain the information you want.

It is also important to be specific about time frames.

Be specific about timeframes in questions

Eg. In a question such as “Have you taken action to save water in recent months?”, ‘recent months’ may not be specific enough for your evaluation. It may be best to use a more specific timeframe, such as “Since the Water workshop held in July, have you taken action…”.

Avoid leading or loaded questions

Eg. “Have you stopped taking long showers?” is likely to lead to most respondents answering “yes”. Further, a “yes” answer does not differentiate between those who were taking long showers and have since stopped, and those who never took long showers but had no other option available to answer.

For more information on how to word questions:
University of Wisconsin Questionnaire Design: Asking Questions with a Purpose
Survey Design - Writing Great Questions for Online Surveys from Survey Analytics

Social Desirability Bias

It is important to consider how wording can lead to a social desirability bias in responses. This refers to the tendency for respondents to present a favourable image of themselves on questionnaires. It generally takes the form of over-reporting favourable behaviour and/or under-reporting unfavourable behaviour, and can affect the validity of the information obtained.

Answering Questions

Respondents are generally provided with one of two ways to answer questions: open fields or close-ended options. The most common approach is close-ended answers, where respondents must pick from a pre-determined set of options. This makes data entry and analysis easier. The constraint of close-ended answers is that respondents are generally limited to the answers you provide (unless there is an “other” option linked to an open field).

Forcing answers

Respondents may not want, or may not be able, to provide the information requested. Privacy is an important issue for most people. Questions about income, occupation, finances, family life, personal hygiene and beliefs (personal, political, religious) may be seen as too intrusive and be rejected by the respondent.


Close-ended options

One best answer

Asks respondents to select the best choice from a number of independent or unique answers. This requires you to know the range of relevant choices. Alternatively, you may add an “other” category with an open field, but you want to limit the number of people selecting this response, as it would otherwise make more sense to leave the question as an open-field.

The options should be mutually exclusive so that clear choices can be made. Non-exclusive answers will lead to respondent frustration, and may make data analysis and evaluation difficult.

Two option response

Asks respondents to make a defined choice (eg. YES/NO), but it may put people off answering if they are unsure, or do something part of the time only.

Rating scales

Allow respondents to select the most appropriate point on a scale. Scales can range from 3 points to 10 points.

5-point scales are the most commonly used. 7-point scales are becoming more common as they provide respondents with a greater ability to discriminate between choices.

Odd-numbered scales provide a mid-point (eg. uncertain or neutral).

You need to consider the likelihood that respondents will select this, as this may affect your data analysis, and your evaluation. It is important to have a balanced scale.

This means, for example, that you provide an equal number of positive and negative options.

The University of Wisconsin Questionnaire Design: Asking Questions with a Purpose provides a wealth of advice and examples on questions and rating scales.

Questions & Scales

The following presents close-ended questions using a rating scale. Rating scales can range from 3 points up to 10. A 5-point scale is the most common, though 7-point scales are becoming more prominent.

So what type of scale should you use? It all depends on the type of answers that you want to get, and how you want to analyse them. Research suggests that respondents prefer bigger scales (eg. 5 points or more) as these allow them to discriminate between options. Fewer options can lead to respondents becoming frustrated at being forced to select a response that they feel does not reflect their opinion.

 

5-point and 7-point scales are the most used options in questionnaires as they give respondents the ability to discriminate between choices.

 

An example of an online post-conference questionnaire that used a 7-point scale is available here.

 

Note that coding the response options with numbers can assist in data entry and analysis.
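For example, a minimal sketch (plain Python, with hypothetical data and an illustrative coding scheme) of turning a 5-point agreement scale into numeric codes so responses can be entered and averaged:

```python
from statistics import mean

# Illustrative numeric codes for a 5-point agreement scale.
codes = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

# Hypothetical responses to one statement (made-up data).
answers = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree", "Disagree"]

scores = [codes[a] for a in answers]
print("Coded for data entry:", scores)   # [4, 5, 3, 4, 2]
print("Mean score:", mean(scores))       # 3.6 on the 1-5 scale
```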

This example uses a three-point scale from agree to disagree.

[Example image: 3-point scale]

Another way to present a three-point scale is as follows:

[Example image: 3-point scale, alternative layout]

This example uses a four-point scale from excellent to poor (alternatively, it could range from excellent to needs improvement). Even-number scales such as this do not allow the respondent to select a mid-point.

[Example image: 4-point scale]

This example uses a five-point scale from strongly agree to strongly disagree.

[Example image: 5-point scale]

This example uses a 7-point scale from very satisfied to not at all satisfied.

[Example image: 7-point scale]

This example uses a 7-point scale that presents respondents with a range of answers to complete a statement.

[Example image: 7-point scale completing a statement]

 

To see a real example, have a look at this post-conference questionnaire that used a 7-point scale.

 

You can also use a two-option response instead of a scale.

This example allows respondents to answer from two choices. The choices can be Yes-No, Agree-Disagree, or True-False.

[Example image: two-option response question]

A template providing examples of post-activity questions using different answering scales is available here.

 

It is important to ensure that you are consistent in your choice of scale. If you select a 5-point scale, stick with it throughout your questionnaire.

How to deliver a questionnaire

Questionnaires can take various forms: mail, web-based, telephone or face-to-face. The pros and cons of the various methods are outlined below.

Written Questionnaires

Pros:
  • Relatively easy to develop and distribute
  • Allows for a large number of questions
  • Provides quantitative data that can be statistically analysed
  • Standardised questions allow for reproducible and comparable questionnaires

Cons:
  • Good questionnaire design may require expert input
  • Reliability of answers (self-bias)
  • Potentially low response rates, especially in long-term follow-up questionnaires
  • Data input can be cumbersome
  • May not capture unintended consequences

Web-based Questionnaires

Pros:
  • Easy and cheap to distribute
  • Reminders can be sent easily
  • No need for manual data input
  • Software program can analyse results and generate reports

Cons:
  • Requires access to computers, so may limit respondents
  • Good questionnaire design may require expert input
  • Reliability of answers (self-bias)
  • Potentially low response rates
  • May not capture unintended consequences

Phone Questionnaire

Pros:
  • Fast mode of data collection

Cons:
  • Length/number of questions is limited
  • More costly than written and electronic questionnaires
  • Must catch people when they have time to talk

 

Web-based Questionnaire Programs

There are a number of software programs that allow you to develop online questionnaires. Some are free, others not. Most are relatively easy to use. As with all questionnaires, it is important to pilot them first to ensure that they make sense and that they work. Examples of online questionnaire programs:
Survey Monkey
Opinio
Survey Gizmo
Zoomerang


Further Links & Resources

University of Wisconsin Program Development and Evaluation - Evaluation Publications

Free Survey Articles, Resources and Best Practice Guides from Survey Analytics

StatPac Survey & Questionnaire Design

Mastering the Art of the Online Survey (Forbes Magazine)

 