Study: Simpler writing associated with higher judged intelligence and quality

Oppenheimer, D. M. (2006). Consequences of erudite vernacular utilized irrespective of necessity: Problems with using long words needlessly. Applied Cognitive Psychology, 20(2), 139–156.

Abstract:

Most texts on writing style encourage authors to avoid overly-complex words. However, a majority of undergraduates admit to deliberately increasing the complexity of their vocabulary so as to give the impression of intelligence. This paper explores the extent to which this strategy is effective. Experiments 1–3 manipulate complexity of texts and find a negative relationship between complexity and judged intelligence. This relationship held regardless of the quality of the original essay, and irrespective of the participants’ prior expectations of essay quality. The negative impact of complexity was mediated by processing fluency. Experiment 4 directly manipulated fluency and found that texts in hard to read fonts are judged to come from less intelligent authors. Experiment 5 investigated discounting of fluency. When obvious causes for low fluency exist that are not relevant to the judgement at hand, people reduce their reliance on fluency as a cue; in fact, in an effort not to be influenced by the irrelevant source of fluency, they over-compensate and are biased in the opposite direction. Implications and applications are discussed.

Sound interesting? Well, it is. I decided to look further into the study. I like psychology, but I hate the way they do science: small sample sizes, no replications, only (more or less) 'conceptual replications'. Does this study fare better? It seems not. Quoting from the method sections:

Seventy-one Stanford University undergraduates participated to fulfil part of a course requirement. The survey was included in a packet of unrelated one-page questionnaires. Packets were distributed in class, and participants were given a week to complete the entire packet.

Thirty-nine Stanford University undergraduates participated to fulfil part of a course requirement. The survey was included in a packet of unrelated one-page questionnaires. Packets were distributed in class and participants were given a week to complete the entire packet.

Thirty-five Stanford University undergraduates participated to fulfil part of a course requirement. Surveys were included in a packet of unrelated one-page questionnaires that were filled out in a one-hour lab session. An additional 50 Stanford University undergraduates were recruited outside of dining halls and filled out only the relevant survey.

Fifty-one Stanford University undergraduates participated to fulfil part of a course requirement. The survey was included in a packet of unrelated one-page questionnaires. Packets were distributed in class, and participants were given a week to complete the entire packet.

Twenty-seven Stanford University undergraduates participated to fulfil part of a course requirement. The survey was included in a packet of unrelated one-page questionnaires. Packets were distributed in class, and participants were given a week to complete the entire packet.

There are five of them because the paper contains five experiments, none of which is an exact replication of another. The authors skipped the reliability test by not replicating the experiments first. They should have done so, ideally also at another institution, with different researchers and a different population.

As far as I can tell, they did not state which kind of undergraduates participated. I suspect this might matter: in my experience, humanities students are much more likely to use obscure language. I suggest replicating this with a large sample (>200) and different kinds of participants.

All in all, the paper is suggestive and plausible, but not that convincing.

As for other replications of the study, I looked around but didn't find any. There really should be a dedicated process for exact replications of others' studies, and publication of the results should be guaranteed to avoid publication bias.
