One common practice for controlling a form of response bias when collecting rating scale data is to reverse-word some items. Presenting respondents with a series of items that are all positively worded (e.g., “I think that statistics is fun.” “I have a good time working with statistics.” “I enjoy reading about statistics.”) can lead some of them to start checking all high (or all low) ratings without really thinking about what they’re doing. By reverse-wording some of the items (e.g., “I do not think that statistics is fun.” “I do not have a good time working with statistics.” “I do not enjoy reading about statistics.”), the idea is that respondents are forced to read the items more closely, which should lead them to make more informed ratings. Of course, reverse-worded items must be reverse-scored before total scores are calculated.
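To illustrate, reverse-scoring maps a rating to its mirror image on the scale: on a 1–5 scale, a 5 becomes a 1 and a 4 becomes a 2. A minimal sketch in Python (the item names and the helper function here are hypothetical, assuming a 1–5 Likert scale):

```python
def reverse_score(rating, scale_min=1, scale_max=5):
    """Reverse-score a single Likert rating.

    On a 1-5 scale, a 5 becomes a 1, a 4 becomes a 2, and so on.
    """
    return scale_min + scale_max - rating

# Hypothetical response set; "_neg" marks negatively worded items
responses = {"q1": 4, "q2_neg": 2, "q3": 5, "q4_neg": 1}
reversed_items = {"q2_neg", "q4_neg"}

# Reverse-score only the negatively worded items, then total
scored = {
    item: reverse_score(value) if item in reversed_items else value
    for item, value in responses.items()
}
total = sum(scored.values())
```

After this step, a high total consistently means a favorable attitude, regardless of how each item was worded.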
While this seems like a good idea in theory, in practice I’ve observed something else when inventories are constructed from a mixture of positively worded and negatively worded items. Factor analyses of these inventories frequently find two factors: one on which the positively worded items load strongly, and one on which the negatively worded items load strongly. This undermines the internal consistency of the inventory, which in turn calls the validity of the inventory totals (or averages) into question. Other researchers have noted that ratings given to positively worded items show less variability than ratings given to negatively worded items, suggesting that people may have some trouble comprehending negatively worded items.
Perhaps a more desirable approach to breaking response bias is to intermix the items from two (or more) inventories measuring very different constructs. Word the items in both inventories in a positive direction, but force respondents to slow down and read each item by ensuring that any two consecutive items deal with different issues.