What, you may ask, is behavioural economics? And how does it relate to online surveys?
Behavioural economics (BE), at its simplest, is the study of irrationality in decision-making. Classical economics presupposes that people behave rationally when making purchasing or other financial decisions. In the real world, though, we know this isn’t really true. Everyone is subject to conscious or unconscious biases that affect their decisions, and that extends to how they respond to online surveys.
It really is an enormous subject, and there’s no way it could all be covered in a blog post (at least, not one that anyone would want to read), but let’s take a look at some of the best-understood effects and how SmartSurvey’s features can help mitigate them.
Use Question Order Randomisation to reduce the effect of Priming
Every line of text or image in a survey can have a small effect on the mood and frame of mind of a respondent, without the respondent or survey creator realising it. Exposure to brands and imagery, and the associations they create, can have a lasting effect on later questions.
By randomising the order of questions on a page, you reduce the chance of earlier questions and answers influencing the responses given to questions that appear later on the page. This effect is called “priming”, and it can skew responses noticeably.
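SmartSurvey handles this for you in the survey builder, but the underlying idea is simple enough to sketch. The question texts below are hypothetical; the point is that each respondent gets their own independent ordering, so no single question is consistently primed by the ones before it:

```python
import random

# Hypothetical questions -- stand-ins for whatever your survey page contains.
QUESTIONS = [
    "How do you rate our brand overall?",
    "How likely are you to recommend us to a friend?",
    "How do you rate our latest advert?",
]

def ordered_for(respondent_id: int) -> list[str]:
    """Return the page's questions in a random order for one respondent.

    Seeding with the respondent ID keeps the order stable if the same
    person reloads the page, while still varying across respondents.
    """
    rng = random.Random(respondent_id)
    shuffled = QUESTIONS.copy()
    rng.shuffle(shuffled)
    return shuffled
```

Averaged over many respondents, every question spends roughly equal time in every position, so position-driven priming effects wash out of the aggregate results.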
Use Question Selection Randomisation instead of direct comparisons
Often, respondents will shy away from direct comparisons, especially around decisions where they believe they always act rationally, such as making purchases (this is about economics, after all). They may not want to admit that one advert is more likely than another to influence them into buying something (because they are, of course, rational beings who are above such things).
So, when comparing ad creative (as an example), instead of asking a question like “Which of these is more likely to cause you to purchase a product?”, consider creating a page with two questions, one for each creative, each asking something more neutral, such as whether the respondent simply “likes” it. Then set the page to show only one of the two questions, chosen at random. You can then compare the responses to the two questions to make your comparison.
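Again, SmartSurvey’s question selection randomisation does this within the platform; this sketch just shows the logic, using made-up question text. Each respondent sees exactly one of the two variants, and over enough respondents both variants get shown:

```python
import random

# Hypothetical neutral questions, one per ad creative.
VARIANTS = [
    "Do you like advert A?",
    "Do you like advert B?",
]

def question_for(respondent_id: int) -> str:
    """Pick exactly one of the two questions at random for a respondent."""
    rng = random.Random(respondent_id)
    return rng.choice(VARIANTS)
```

Because each person only ever rates one creative, nobody is asked to admit that advertising sways them; the comparison happens later, between the two groups’ aggregate “like” rates rather than inside any one respondent’s head.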
Use Question Selection Randomisation to check the effect of question wording
Would you like something that works 99% of the time? Or would you prefer something that breaks 1% of the time? You may, quite rightly, say that these are the same thing. However, BE research has found that people answer those questions differently. As a general rule, people react more favourably to positively framed questions, but with wording like the above it can be tricky to know which is the more “positive” reading. To account for this bias, you can create a page with multiple versions of the same question and show one version, chosen at random, to each respondent. Comparing the results will reveal whether question wording is skewing the answers.
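The comparison step itself is just arithmetic on the two groups’ answers. Here’s a minimal sketch with invented numbers (the response counts below are purely illustrative, not real survey data):

```python
def favourable_rate(responses: list[bool]) -> float:
    """Share of respondents who answered favourably (True)."""
    return sum(responses) / len(responses)

# Hypothetical results: True = favourable answer to the variant shown.
works_99 = [True] * 46 + [False] * 4    # "works 99% of the time" wording
breaks_1 = [True] * 38 + [False] * 12   # "breaks 1% of the time" wording

gap = favourable_rate(works_99) - favourable_rate(breaks_1)
# A large gap suggests the wording itself, not the thing being asked about,
# is driving the difference in answers.
```

In this invented example the “works 99%” wording scores 92% favourable against 76% for the “breaks 1%” wording, a 16-point gap that would point to a framing bias worth correcting for before drawing conclusions from either question.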