EFL Resources / Study Materials
Tag Archives: Double-Blind Studies
I’ve written a few posts in recent weeks on topics including cognitive dissonance, the principles of double-blind studies, and the obligation for educators and institutions to teach academic literacy in schools, including the critical evaluation of information and sources. These topics are related in important ways: in simple terms, our minds can play tricks on us and prevent us from seeing what is there to be seen (or make us see what isn’t actually there), and this has serious implications for researchers and for the end users of their work. If we tend to avoid challenges to our established beliefs, if we’re unaware of and fail to take precautions against subjective bias, and if we fail to understand that published, peer-reviewed research can still get it wrong, we run the risk of producing, reproducing and disseminating bad information. In some fields a lack of intellectual rigour might only result in the propagation of ignorance, but in others, such as biomedical research, errors can lead to serious injury or death.
Yet it seems that every week I read about new studies claiming to overturn the conclusions of previous studies, and the frequency with which this happens is cause for some concern. Errors in research and analysis aren’t the only problems. It has been suggested that research in many fields increasingly shows signs of the ‘file drawer effect’ – studies which fail to support their hypotheses are simply left unpublished. It’s not surprising that negative results might not be considered novel enough for journals to publish; that researchers might not want to publicise results which do not support their hypotheses; or that vested interests in clinical trials of new medicines might want to hide inconvenient truths. Of course, discovering a new way that something doesn’t work isn’t really a failure from a scientific perspective, but unfortunately science isn’t really the issue here. As an interesting side note: I recently read about a 2009 MIT study which suggested that brain processes in monkeys only improved after success in a task (as opposed to failure), casting doubt on whether monkeys actually learn from their mistakes. There’s probably a bit more to it, and I might make this the topic of my next post!
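To make the file drawer idea a little more concrete, here is a minimal Python sketch of an invented scenario: many small studies of a treatment that has no real effect, with only the studies that happen to reach p < 0.05 getting ‘published.’ The study size, number of studies and significance threshold are illustrative assumptions, not figures taken from any real literature.

```python
# A minimal sketch of the 'file drawer effect': many small studies of a
# treatment with NO real effect, where only results with p < 0.05 get published.
# All parameters here are illustrative assumptions.
import math
import random
import statistics

random.seed(0)

def run_study(n=30):
    """Simulate one two-group study of a treatment with zero true effect."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no effect
    diff = statistics.mean(treated) - statistics.mean(control)
    # Rough two-sample z-style test (adequate for a sketch, not a real analysis)
    se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, p

all_effects, published_effects = [], []
for _ in range(2000):
    diff, p = run_study()
    all_effects.append(diff)
    if p < 0.05:                      # journals only see the 'positive' results
        published_effects.append(abs(diff))

print(f"mean effect across ALL studies:      {statistics.mean(all_effects):+.3f}")
print(f"mean effect in the PUBLISHED subset: {statistics.mean(published_effects):+.3f}")
# The published subset suggests a sizeable effect even though none exists.
```

The point of the sketch is simply that selective publication, on its own, can conjure an apparent effect out of pure noise.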
Another issue involves intellectual integrity, the effectiveness of editors and peer reviewers, and overreliance on so-called ‘reliable sources.’ Back in 1996, a physics professor named Alan Sokal submitted an article to an academic journal as a test of the editing process; it was published on the basis of his credentials (and perhaps on ideological grounds as well), and later famously revealed by the author to be a hoax (known as the Sokal hoax – a good name for an episode of The Big Bang Theory, I think). That particular journal adopted peer review soon afterwards; however, both before and since there have been numerous scandals involving the publication of research with fabricated data in highly regarded peer-reviewed scientific journals.
The following article by Tom Bartlett in the Chronicle of Higher Education discusses the ‘Reproducibility Project’, which aims to replicate every study published in 2008 in three selected psychology journals. The results should be interesting.
Today I’ll introduce you to Edge.org. Its mission: “To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.”
Each year Edge poses a question to scientists, philosophers and other thinkers, and the responses can make some very thought-provoking reading. I found last year’s question and many of the responses particularly interesting: “What scientific concept would improve everybody’s cognitive toolkit?”
Evolutionary biologist Richard Dawkins suggested that a widespread lack of critical thinking, and of awareness of our own mindsets and biases, is a problem in society, and that people would benefit from attaching more weight to evidence-based conclusions, equipping them to exercise sound judgement throughout their lives.
How would we do this? Schools should educate students in the principles of the double-blind control experiment and the reasons it is important, including the difficulty of eliminating subjective bias and the problems of generalising from anecdotes.
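As a rough illustration of the kind of subjective bias that blinding guards against, here is a small Python sketch of an invented scenario: an assessor rates patient improvement in a trial of what is in fact a placebo, and when unblinded, unconsciously nudges the scores of the ‘treated’ group upward. The scale, effect size and bias values are purely illustrative assumptions.

```python
# A minimal sketch of why blinding matters, under an assumed scenario: an
# assessor rates improvement on a 0-10 scale, the treatment is really a placebo,
# and an UNBLINDED assessor unconsciously nudges scores upward for patients
# they know received the 'real' treatment. All numbers are illustrative.
import random
import statistics

random.seed(1)

def trial(blinded, n=200, assessor_bias=0.7):
    treated, control = [], []
    for _ in range(n):
        group = random.choice(["treatment", "control"])
        true_improvement = random.gauss(5.0, 1.5)   # same for both groups: no real effect
        score = true_improvement
        if not blinded and group == "treatment":
            score += assessor_bias                  # subjective, unintentional nudge
        (treated if group == "treatment" else control).append(score)
    return statistics.mean(treated) - statistics.mean(control)

print(f"apparent effect, unblinded assessor: {trial(blinded=False):+.2f}")
print(f"apparent effect, blinded assessor:   {trial(blinded=True):+.2f}")
# Blinding removes the spurious 'benefit' that assessor bias alone produced.
```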
For Social & Technology Network Topology Researcher Clay Shirky, an essential tool in our cognitive toolkit is understanding the Pareto Principle (also known by other names, including the 80/20 rule), which originated with Vilfredo Pareto’s observation that 80% of the land in Italy was owned by 20% of the population. It isn’t really a rule: in many real-world systems the proportions are considerably more extreme than 80/20, and distributions of this kind are far more common than many people believe. Yet when we hear such figures reported in the media, they are often treated as shocking, unexpected and unpredictable.
The problem is that we are taught that the paradigmatic distribution for large systems is the Gaussian distribution (the bell curve), and that examples displaying Pareto characteristics are anomalies, which prevents us from seeing the world and the probability of events clearly. If you’re interested in the impact of the highly improbable, I recommend a book called The Black Swan by Nassim Nicholas Taleb (no, not Natalie Portman’s movie).
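For a rough sense of the difference Shirky is pointing at, here is a small Python sketch comparing the two kinds of distributions: how much of the total do the top 20% hold in a Gaussian sample versus a Pareto (power-law) sample? The parameters (a Pareto shape of about 1.16, which gives roughly the classic 80/20 split, and the Gaussian mean and spread) are illustrative assumptions.

```python
# A quick sketch contrasting a Gaussian 'bell curve' with a Pareto (power-law)
# distribution: what share of the total do the top 20% hold?
# Parameters are illustrative assumptions, not real-world data.
import random

random.seed(2)
N = 100_000

# Gaussian sample (shifted so values stay positive, e.g. height-like data)
gaussian = [random.gauss(100, 15) for _ in range(N)]

# Pareto sample (e.g. wealth-like data); alpha ~1.16 gives roughly 80/20
pareto = [random.paretovariate(1.16) for _ in range(N)]

def top_20_percent_share(values):
    """Fraction of the total held by the largest 20% of values."""
    values = sorted(values, reverse=True)
    top = values[: len(values) // 5]
    return sum(top) / sum(values)

print(f"Gaussian: top 20% hold {top_20_percent_share(gaussian):.0%} of the total")
print(f"Pareto:   top 20% hold {top_20_percent_share(pareto):.0%} of the total")
# In the Gaussian sample the top fifth holds only a little more than a 'fair'
# 20% share; in the Pareto sample it holds the large majority of the total.
```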
The questions posed by Edge go back to 1998 and the responses are well worth a read.
There has been quite a lot of talk in recent times about the amount of conventional health wisdom being overturned by subsequent studies, which have exposed serious and often life-threatening flaws in medical research. Part of the problem may be the way vested interests tend to favour positive results, but there are also failures of research methodology and analysis. Of course, issues of vested interests and flawed methodology and conclusions aren’t unique to medical research, but it provides an interesting example.
The following article by Dr. Steven Bratman discusses the impact and importance of double-blind studies for research in very clear and comprehensible terms, and is worth a read for anyone involved in research or interested in how easily our own minds can fool us.