Bogus Research Uncovered

http://www.work-learning.com/chigraph.htm

We at Work-Learning Research are always curious about how research is used in the learning-and-performance field. We've seen the following graph (in one form or another) in many presentations and documents, and because we wanted to learn more, we did some research. What we discovered scared us, and it reflects poorly on our field.


The Graph is a Fraud!

After reading the cited article several times and finding neither the graph nor the numbers on it, I became suspicious and got in touch with the first author of the cited study, Dr. Michelene Chi of the University of Pittsburgh (who is, by the way, one of the world's leading authorities on expertise). She said this about the graph:

"I don't recognize this graph at all. So the citation is definitely wrong; since it's not my graph."

What makes this particularly disturbing is that this graph has popped up all over our industry, and many instructional-design decisions have been based on the information it contains.


Bogus Information is Widespread

A quick survey of people in the field (subscribers to the Work-Learning Research Newsletter) illustrates just how widely the information has spread. Of the 76 people who responded, only 37 reported that they did "Not Remember" seeing the graph. Eight reported they "Might Have" seen it, and 31 reported that they were "Fairly Sure" or "Definitely Sure" they'd seen it, or that they had used it themselves.

In other words, 41% of the people responding (31 of 76) said they'd seen the graph. Respondents reported seeing the graph in the United States, Canada, Australia, Spain, and the United Kingdom. People reported passing the information on to others, teaching it in college courses, using it to sell ideas and products, buying different training products, and making instructional-design decisions because of the information in the graph.
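As a quick check on that 41% figure, the arithmetic is simply the 31 confident respondents divided by the 76 total. Here is a minimal sketch in Python (our own illustration; the variable names are ours, not the survey's):

```python
# Survey counts reported above (76 respondents in total).
not_remember = 37   # did "Not Remember" seeing the graph
might_have = 8      # "Might Have" seen it
seen_or_used = 31   # "Fairly Sure"/"Definitely Sure" they'd seen it, or used it

total = not_remember + might_have + seen_or_used
assert total == 76

# 31 / 76 = 0.4078..., which rounds to the 41% quoted above.
print(f"{seen_or_used / total:.1%}")  # prints "40.8%"
```

Counting only the 31 confident responses is the conservative reading; folding in the eight "Might Have" responses would raise the figure to roughly 51% (39 of 76).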

But the graph is representative of a larger problem. The numbers presented on the graph have been circulating in our industry since the late 1960s, and they have no research backing whatsoever. Dr. J. C. Kinnamon (2002) of Midi, Inc., searched the web and found dozens of references to those dubious numbers in college courses, research reports, and vendor and consultant promotional materials.


Where the Numbers Came From

Although we at Work-Learning Research have not concluded our investigation of this hoax (you can provide us with more information below), it appears that those percentages were probably generated by an employee of Mobil Oil Company in 1967, writing in the magazine Film and Audio-Visual Communication. D. G. Treichler didn't cite any research, but our field has unfortunately accepted his or her percentages ever since.

Michael Molenda, a professor at Indiana University, is currently working to track down the origin of the bogus numbers. His efforts have uncovered some evidence that the numbers may have been developed as early as the 1940s by Paul John Phillips, who worked at the University of Texas at Austin and developed training classes for the petroleum industry. During World War II, Phillips taught visual aids at the U.S. Army's Ordnance School at Aberdeen Proving Ground in Maryland, where the numbers have also appeared and where they may have originated.

Ernie Rothkopf, professor emeritus at Columbia University and one of the world's leading applied research psychologists on learning, reported to me that the bogus percentages have been widely discredited, yet they keep rearing their ugly heads in one form or another every few years.

Many people now associate the bogus percentages with Dale's "Cone of Experience," developed in 1946 by Edgar Dale. The Cone provided an intuitive model of the concreteness of various audio-visual media. Dale included no numbers in his model, and no research was used to generate it. In fact, Dale warned his readers not to take the model too literally. Dale's Cone, copied without changes from the third and final edition of his book, is presented below:

Figure: Dale's Cone of Experience (Dale, 1969, p. 107)


Somewhere along the way, someone unnaturally fused Dale's Cone with Treichler's dubious percentages. One common example is represented below.

The source cited in the diagram above, Wiman and Meierhenry (1969), is a book of edited chapters. Though two of the chapters (Harrison, 1969; Stewart, 1969) mention Dale's Cone of Experience, neither includes the percentages. In other words, the diagram cites a book that contains neither the diagram itself nor the percentages it presents.


The "Evidence" Changes to Meet the Need of the Deceiver

The percentages, and the graph in particular, have been passed around our field from reputable person to reputable person. The people who originally created the fabrications are to blame for getting this started, but there are clearly many people willing to bend the information to their own ends. Kinnamon's (2002) investigation found that Treichler's percentages have been modified in many ways, depending on the message the shyster wants to send. Some people have changed the relative percentages. Some have improved Treichler's grammar. Some have added categories to make their point. For example, one version of these numbers says that people remember 95% of the information they teach to others.

People have cited not only Treichler, Chi, and Wiman and Meierhenry for the percentages; they have also incorrectly cited William Glasser and correctly cited a number of other people who have themselves utilized Treichler's numbers.

It seems clear from some of the fraudulent citations that deception was intended. On the graph that prompted our investigation, the title of the article had been modified from the original to remove the word "students." The creator of the graph must have known that the term "students" would make people in the performance-improvement field suspect that the research was done on children. The creator of the Wiman and Meierhenry diagram did four things that make it difficult to track down the original source: (1) the book cited is fairly obscure, (2) one of the authors' names is misspelled, (3) the year of publication is incorrect, and (4) the name Charles Merrill, which actually referred to a publishing house, was presented so ambiguously that it might have been taken for an author or editor.


But Don't The Numbers Speak The Truth?

The numbers are ridiculous, and even if they made sense, they'd still be dangerous.

If we look at the numbers a little more closely, they are highly unconvincing. How did someone compare "reading" and "seeing"? Don't you have to "see" to "read"? What does "collaboration" mean anyway? Were two people talking about the information they were learning? If so, weren't they "hearing" what the other person had to say? What does "doing" mean? How much were they "doing" it? Were they "doing" it correctly, or did they get feedback? If they were getting feedback, how do we know the learning didn't come from the feedback rather than the "doing"? Do we really believe that people learn more by "hearing" a lecture than by "reading" the same material? Don't people who "read" have an advantage in being able to pace themselves and revisit material they don't understand? And how did the research produce numbers that are all multiples of ten? Doesn't this suggest some sort of summary review of the literature? If so, shouldn't we know how that review was conducted? Shouldn't we get a clear and traceable citation for such a review?

Even the idea that you can compare these types of learning methods is ridiculous. As any good research psychologist knows, the measurement situation affects the learning outcome. If we have a person learn foreign-language vocabulary by listening to an audiotape and vocalizing their responses, it doesn't make sense to test them by having them write down their answers. We'd have a poor measure of their ability to verbalize vocabulary. The opposite is also nonsensical. People who learn vocabulary by seeing it on the written page cannot be fairly evaluated by asking them to say the words aloud. It's not fair to compare these different methods by using the same test, because the choice of test will bias the outcome toward the learning situation that is most like the test situation.

But why not compare one type of test to another? For example, if we want to compare vocabulary learning through hearing and seeing, why not use an oral test and a written one? This doesn't help either. It's really impossible to compare two things on different indices. Can you imagine comparing the best boxer with the best golfer by having the boxer punch a heavy bag and having the golfer hit for distance? Would Muhammad Ali punching with 600 pounds of force beat Tiger Woods hitting his drives 320 yards off the tee?


The Importance of Listing Citations

Even if the numbers presented on the graph had been published in a refereed journal (research we could presumably trust), it would still be dangerous not to know where they came from. Research conclusions have a way of morphing over time. Wasn't it true ten years ago that all fat was bad? Newer research has revealed that monounsaturated oils like olive oil might actually be good for us. If a person doesn't cite their sources, we might not realize that their conclusions are outdated or simply based on poor research. Conversely, we may also lose access to good sources of information. Suppose Treichler had really discovered a valid source of information. Because he or she did not use citations, that research would remain forever hidden in obscurity.

The context of research makes a great deal of difference. If we don't know a source, we don't really know whether the research is relevant to our situation. For example, an article by Kulik and Kulik (1988) concluded that immediate feedback was better than delayed feedback. Most people in the field now accept their conclusions. Efforts by Work-Learning Research to examine Kulik and Kulik's sources indicated that most of the articles they reviewed tested the learners within a few minutes after the learning event, a very unrealistic analog for most training situations. Their sources enabled us to examine their evidence and find it faulty.


Who Should We Blame?

The original shysters are not the only ones to blame. The fact that many people who have disseminated the graph used the same incorrect citation makes it clear that they never accessed the original study. Everyone who uses a citation to make a point (or draw a conclusion) ought to check the citation. That, of course, includes all of us who are consumers of this information.


What Does This Tell Us About Our Field?

It tells us that we may not be able to trust the information that floats around our industry. It tells us that even our most reputable people and organizations may require the Wizard-of-Oz treatment: we may need to look behind the curtain to verify their claims.


The Danger To Our Field

At Work-Learning Research, our goal is to provide research-based information that practitioners can trust. We began our research efforts several years ago when we noticed that the field jumps from one fad to another while at the same time holding religiously to ideas that would be better cast aside.

The fact that our field is so easily swayed by the mildest whiffs of evidence suggests that we don't have sufficient mechanisms in place to improve what we do. Because we're unable or unwilling to perform due diligence on evidence-based claims, we can't create the feedback loops that would push the field more forcefully toward continuous improvement.

Isn't it ironic? We're supposed to be the learning experts, but because we too easily take things for granted, we find ourselves skipping down all manner of yellow-brick roads.


How to Improve the Situation

It may seem obvious, but each and every one of us must take responsibility for the integrity of the information we transmit. More importantly, we must be actively skeptical of the information we receive. We ought to check the facts, investigate the evidence, and evaluate the research. Finally, we must continue our personal search for knowledge, for it is only with knowledge that we can validly evaluate the claims we encounter.


Our Citations

Before we ask you if you can provide us with more information about the bogus percentages and the graph, we offer our own citations for the information on this webpage:

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Dale, E. (1946, 1954, 1969). Audio-visual methods in teaching. New York: Dryden.

Harrison, R. (1969). Communication theory. In R. V. Wiman and W. C. Meierhenry (Eds.), Educational media: Theory into practice. Columbus, OH: Merrill.

Kinnamon, J. C. (2002). Personal communication, October 25.

Kulik, J. A., & Kulik, C-L. C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58, 79-97.

Molenda, M. H. (2003). Personal communications, February and March.

Rothkopf, E. Z. (2002). Personal communication, September 26.

Stewart, D. K. (1969). A learning-systems concept as applied to courses in education and training. In R. V. Wiman and W. C. Meierhenry (Eds.), Educational media: Theory into practice. Columbus, OH: Merrill.

Treichler, D. G. (1967). Are you missing the boat in training aids? Film and Audio-Visual Communication, 1, 14-16, 28-30, 48.

Wiman, R. V. & Meierhenry, W. C. (Eds.). (1969). Educational media: Theory into practice. Columbus, OH: Merrill.


Can You Provide More Information?

If you've seen the graph or numbers similar to those represented on it, please let us know where you saw them. The following are representative of the bogus percentages:

Reading (10%)
Seeing (20%)
Hearing (30%)
Seeing and Hearing (50%)
Collaboration (70%)
Doing (80%)