Follow Up

How Not To Review Mediumship Research

Gary E. Schwartz

Most rational scientists agree that the credibility and integrity of a review of a body of research depend on its including all the important information, not just the reviewer's favored information. Ray Hyman's review "How Not To Test Mediums" (January/February 2003) is a textbook example of the selective ignoring or dismissing of historical, procedural, and empirical facts to fit one's preferred interpretation. The result is an inaccurate, mistaken, and biased set of conclusions about the current data.

Hyman is a distinguished professor emeritus in the Department of Psychology at the University of Oregon who has had a longstanding career as a skeptic focused on uncovering potential flaws in parapsychology research. He is highly skilled in carefully going through the conventional checklist of potential sources of experimental error and limitations in research designs.

Hyman's overall appraisal of the research conducted to date is implied by his conclusion: "Probably no other extended program in psychical research deviates so much from accepted norms of scientific methodology as does this one."

Is Hyman's summary conclusion based upon a thorough review of the total body of research? Or does it reflect the systematic ignoring of important historical, procedural, and empirical facts--a cognitive bias used by the reviewer in order to maintain his belief that the phenomenon in question is impossible? As I document below, Hyman resorts to (consciously and/or unconsciously) selectively ignoring important information that is inconsistent with his personal beliefs.

Selective ignoring of facts is not acceptable in science. It reflects a bias that defeats the purpose of research and disallows new discoveries. I have stated that the survival of consciousness hypothesis does account for the totality of the research data to date. Of course, this does not make the survival hypothesis the only or correct hypothesis--my statement reflects the status of the evidence to date, not necessarily the truth about the underlying process. This is why more research is needed.

Note that I do not use the word "believe" in relationship to the statement. This is not a belief. It is an empirical observation derived from experiments.

It is correct that some of the single-blind and double-blind studies have weaknesses--we discuss the experimental limitations at some length in our published papers as well as in The Afterlife Experiments. However, these weaknesses do not justify dismissing the totality of the data as mistaken or meaningless. Quite the contrary, an honest and accurate analysis reveals that the data, in total, deserve serious consideration.

Our research presents all the findings--the hits and the misses, the creative aspects of the designs and their limitations--so that the reader can make an accurate and informed decision. What we strive for is the truth, as reflected in Harvard's motto "Veritas."

I appreciate Hyman's effort to outline some of the possible errors and limitations in the mediumship experiments discussed in The Afterlife Experiments. However, as Hyman emphasizes in his review, I do "strongly disagree" with him about his interpretations. The two fundamental disagreements I have with Hyman's arguments are:

  1. Hyman has chosen to ignore numerous historical, procedural, and empirical facts that are inconsistent with his interpretive descriptions of our experiments; and
  2. Hyman has chosen not to acknowledge the totality of the findings, nor to apply Occam's heuristic principle as a means of integrating the complete set of findings collected to date.

Space precludes my providing a detailed and thorough commentary here illustrating how pervasively Hyman ignores and omits important information. (An extensive commentary has been published on various Web sites, including www.openmindsciences.com.) Four samples of important ignored facts are provided below.

Selective Ignoring of Historical, Procedural, and Empirical Facts

Veritas 1: In his review, Hyman failed to mention the important historical fact that our mediumship research actually began with double-blind experimental designs. For example, the published experiment referred to in The Afterlife Experiments as "From Here To There and Back Again" with Susy Smith and Laurie Campbell was completed almost a year before we conducted the more naturalistic multi-medium/multi-sitter experiments involving John Edward, Suzanne Northrop, George Anderson, Anne Gehman, and Laurie Campbell. The early Smith-Campbell double-blind studies did not suffer from possible subtle visual or auditory sensory leakage or rater bias--and strong positive findings were obtained.

Our decision subsequently to conduct more naturalistic designs (which are inherently less controlled) was made partly for practical reasons (e.g., developing professional trust with highly visible mediums) and partly for scientific ones (e.g., we wished to examine under laboratory conditions how mediumship is often conducted in the field).

Conclusion: Hyman makes a factually erroneous criticism when he reports that double-blind experiments were initiated only late in our research program, and therefore makes a serious interpretative mistake when he decides that all the early data can be dismissed because they were not conducted double-blind.

Veritas 2: In an exploratory double-blind long distance mediumship experiment where George Dalzell (GD) was one of six sitters and Laurie Campbell (LC) was the medium, Hyman states "because nothing significant was found, the results do not warrant claiming a successful replication of previous findings."

However, Hyman minimizes the fact that the number of subjects in this exploratory experiment was small (n=6). More importantly, Hyman fails to cite an important conclusion that we reached in the discussion: "If the binary 66 percent figure approximates (1) LC's actual ability to conduct double-blind readings, coupled with (2) the six sitters' ability, on the average, to score transcripts double-blind, the 66 percent figure would require only an n of 25 sitters to reach statistical significance (e.g., p < .01)."

Hyman fails to mention that NIH, for example, requires that investigators who apply for research grants calculate statistical power and sample size to determine what n is required to obtain a statistically significant result. This is accepted scientific practice and is required for obtaining NIH funding.
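
As an illustration of the kind of power and sample-size calculation referred to above, here is a minimal sketch, assuming a simplified model in which each sitter contributes a single independent binary score judged against a 50 percent chance baseline. The hit rate, significance level, and desired power plugged in below are illustrative placeholders, not the parameters of our published analysis, which uses its own scoring scheme.

```python
# Minimal sketch (illustrative only, not the published analysis): power and
# sample-size calculation for a one-sample, one-sided exact binomial test.
# Assumptions: one independent binary score per sitter, 50% chance baseline,
# and a placeholder "true" hit rate.
from scipy.stats import binom

P_CHANCE = 0.50   # null hypothesis: scoring at chance
P_TRUE = 0.66     # assumed true hit rate (placeholder for illustration)
ALPHA = 0.01      # one-sided significance level
POWER = 0.80      # desired probability of detecting the effect

def required_n(p0=P_CHANCE, p1=P_TRUE, alpha=ALPHA, power=POWER, max_n=2000):
    """Smallest n at which the exact binomial test of H0: p = p0 rejects
    (one-sided, at level alpha) with probability >= power when p = p1."""
    for n in range(2, max_n + 1):
        # Critical count: smallest k with P(X >= k | n, p0) <= alpha.
        k_crit = next((k for k in range(n + 1)
                       if binom.sf(k - 1, n, p0) <= alpha), None)
        if k_crit is None:
            continue  # no rejection region exists yet at this n
        if binom.sf(k_crit - 1, n, p1) >= power:  # power under p1
            return n, k_crit
    return None

n, k = required_n()
print(f"n = {n} sitters; significance is reached at {k} or more hits.")
```

Under different scoring assumptions (for example, many binary items rated per sitter rather than a single binary outcome), the required n changes accordingly.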

Conclusion: Hyman would rather dismiss the fact that the highly accurate ratings obtained for GD in the single-blind published study were indeed replicated in the double-blind published study than admit the possibility that individual differences in sitter characteristics are an important and genuine factor in mediumship research.

Veritas 3: It is curious that, among the many examples of readings provided in The Afterlife Experiments, one early subset (cluster/pattern) of facts happened to fit Hyman nicely. It is true that mention of the "Big H," a "father-like figure," and an "HN sound" would fit Hyman's father just as it did the sitter's husband mentioned in the book.

Hyman chose not to report the fact that many other pieces of specific information also reported for the "Big H" did not fit Hyman but did fit the sitter precisely. Moreover, Hyman consistently failed to report scores of examples from readings reported verbatim in the book that were highly unusual and unique to individual sitters (e.g., John Edward seeing a deceased grandmother having two large poodles, a black one and a white one, and the white one "tore up the house").

Conclusion: The reason Hyman failed to mention these numerous examples is that they contradict the conclusion Hyman chose to accept--that the information, by chance, could fit multiple sitters--an erroneous conclusion that can be reached only if we do what Hyman did and accept the information selectively.

Veritas 4: Hyman's conclusion that experienced cold readers can readily replicate the kinds of specific information obtained under the conditions of our experiments is mistaken at best and deceptive at worst.

Under experimental conditions where (a) professional cold readers do not know the identity of the sitters (i.e., cheating is ruled out), and (b) cold readers are not allowed to see or speak with the sitters (i.e., cueing and feedback are ruled out), it is (c) impossible for cold readers to use whatever sitter-specific information they might have obtained in advance, and (d) impossible for them to use their feedback tricks to help them get information from the sitters.

At the two-day meeting of seven highly experienced professional mentalist magicians and cold readers that I convened in Los Angeles, they all agreed that they could not apply their conventional mentalist tricks under these strict experimental conditions. However, a vocal subset (Hyman was one of the three) made the unsubstantiated claim that if they had a year or two to practice, they might be able to figure out a way to fake what the mediums were doing.

My response to this vocal subset was simple. It was "show me." Just as I don't take the claims of the mediums on faith, I don't take the claims of the magicians on faith either. I am a researcher. Mentalist magicians who make these claims will have to "sit in the research chair" and show us that they can do what they claim they can do.

Thus far, the few cold readers who have made these claims have refused to be experimentally tested. They have been unwilling to demonstrate in the laboratory that they can do what the mediums do under these experimental conditions; and they have been unwilling to demonstrate at a later date that their performance can improve substantially with practice.

Conclusion: The claim that cold reading can account for the research findings is not supported when the experimental procedures are honestly taken into account.

Failure to Integrate Information and Appreciate the Process of Discovery

In most areas of science, no single experiment is perfect or complete. Different experiments address different conditions and different alternative explanations to different degrees. The challenge is to connect the dots of the available data and integrate the complex set of findings using the fewest number of explanations (i.e., Occam's razor).

Hyman reveals in his review that he learned as a teenager that it was easy for him to fool many people with palm reading. It is also quite easy to fool many people with fake mediumship, as anyone trained in cold reading will tell you. I have studied a number of books on cold reading and have taken some classes on cold reading myself. However, just because it is possible sometimes to be fooled (especially by the masters of magic) doesn't mean that everyone is fooling you.

Hyman reluctantly agrees that it is improbable that the totality of our findings can be explained by fraud. As a result, his preference is to propose that the set of findings collected to date must involve a complex set of subtle cues providing information in some studies, cold reading techniques being used in some studies, rater bias providing inflated scores in some studies, and chance findings in some studies. The idea that mediums might be obtaining anomalous information that can most simply and parsimoniously be explained in terms of the continuance of consciousness is presumed categorically to be false by Hyman until proven otherwise.

I make no such categorical assumptions, one way or the other. To me the question of whether or not mediums are obtaining anomalous information is a purely scientific one, to be revealed through a program of systematic research. Such research must be conducted by multiple laboratories. The reason for publishing findings, as they emerge, is to encourage other investigators to conduct their own experiments, and then integrate the totality of the findings.

However, the truth is, it is impossible to integrate the totality of the findings in any area of science if one selectively (consciously or unconsciously) ignores those specific findings that do not fit one's preferences or biases.

Scientific Integrity and Changing One's Beliefs

I admit, quite adamantly, that I do have one fundamental bias--my bias is to use the scientific method to discover the truth, whatever it is. Discovering the truth cannot be achieved through selective reporting of history, procedures, and data.

So what is the truth at the present time, based upon the available data? When the totality of the history, procedures, and findings to date are examined honestly and comprehensively--not selectively sampled to fit one's particular theoretical bias--something anomalous appears to be occurring in the mediumship research, at least with a select group of evidence-based mediums.

Over and over, from experiment to experiment, findings have been observed that deserve the term extraordinary. In our latest double-blind, multi-center experiments, stable individual differences in sitters have been observed that replicate across laboratories and experiments. The observations are not going away--even with multi-center, double-blind testing.

Hyman once told me, "I have no control over my beliefs." When I asked him what he would conclude if a perfect, large-sample, multi-center, double-blind experiment were conducted, his response was, "I would want to see your major multi-center, double-blind experiment replicated a few times by other centers before drawing any conclusions."

This conversation is revealing psychologically. Until multiple perfect experiments are performed and published, Hyman would rather believe that the totality of the findings must be due to some combination of fraud, cold reading, rater bias, experimenter error, or chance--even if this requires that he selectively ignore important aspects of the history, designs, and findings in order to hold on to his belief that he (and we) are being "fooled."

Why spend the time and money conducting multiple multi-center, double-blind experiments unless there are sufficient theoretical, experimental, and social reasons for doing so?

The critical question is, "Is it possible, consistent with the actual totality of the data collected to date--viewed historically (e.g., the observations of William James) as well as across disciplines (e.g., from anthropology to astrophysics)--that future research may lead us to the conclusion that consciousness is intimately related to energy and information, and that consciousness, as an expression of dynamically patterned energy and information, persists in space like the light from distant stars?"

This is ultimately an empirical question; it will be answered by data, one way or the other. If positive data are obtained--and I emphasize if--accepting the data will require that we be able to change our beliefs as a function of what the data reveal. The Afterlife Experiments was written to encourage people to keep an open mind about what the future research may reveal.

Epilogue: What is a Magazine's Responsibility?

If the Skeptical Inquirer wishes to be viewed as a credible publication, more like the Philadelphia Inquirer than the National Enquirer, it should take responsibility for fact-checking its articles and correcting mistakes caused by simple errors and/or the selective ignoring of important information.

For example, Hyman's review begins by stating that I was a professor at Yale University for twenty-eight years--the fact is, I was at Yale for twelve years. If the Skeptical Inquirer had not chosen to keep Hyman's review secret, and had asked me to fact-check it, I would gladly have done so, thereby enabling both the magazine and the reviewer to correct at least the obvious errors of fact. Clearly, little mistakes, compounded by big mistakes, do not make for a credible publication or review.

I am taking a strong position about accuracy of reporting here not because of the ultimate validity of the survival hypothesis (i.e., whether it is true or not, since that is an experimental question) but because of the nature of the scientific reviewing process itself.

The selective ignoring and omission of important information cannot be condoned in either reviewing or publishing. It must be exposed and understood, regardless of the specific research area that is being reviewed or the specific person doing the reviewing.

Note that my argument is not with Hyman as a person, nor with the Skeptical Inquirer as a publication. My concern is about the process by which Hyman has written his review, and the responsibility of Skeptical Inquirer to decrease the likelihood that this kind of mistaken review will be published in the future. There is a bigger lesson here. It is worth considering, and correcting.

Acknowledgments

I thank a number of my colleagues who have graciously taken the time to provide me with useful feedback about this commentary. They include Peter Hayes, Ph.D., Katherine Creath, Ph.D., Stephen Grenard, Ph.D., Donald Watson, M.D., Emily Kelly, Ph.D., Lonnie Nelson, M.A., and Montague Keen. The comments provided here are those of the author, not necessarily those of my colleagues.


Read Ray Hyman's response.
