Information Age Education
   Issue Number 170
September, 2015   

This free Information Age Education Newsletter is edited by Dave Moursund and Bob Sylwester, and produced by Ken Loge. The newsletter is one component of the Information Age Education (IAE) publications.

All back issues of the newsletter and subscription information are available online. In addition, five free books based on the newsletters are available: Education for Students’ Futures; Understanding and Mastering Complexity; Consciousness and Morality: Recent Research Developments; Creating an Appropriate 21st Century Education; and Common Core State Standards for Education in America.

This is the 17th IAE Newsletter in a series on Credibility and Validity of Information.

Credibility and Validity of Information Part 17:
Determining Validity and Credibility in the
Search for Truth

Peter Sylwester
Software Engineer
Institute for Disease Modeling

Robert Sylwester
Emeritus Professor of Education
University of Oregon

This is the final article in a series of IAE Newsletters that explored the increasingly complex task of determining the truth of an assertion. Validity and credibility are the commonly used terms in the search. Validity refers to the logical and factually sound nature of an assertion, and credibility refers to the level of trust one has in it. It's thus possible to consider something as credible even though it lacks scientific and/or logical validity, and it's also possible that someone could deny the credibility of a valid discovery. For example, global warming discoveries have considerable scientific validity, but some people deny the credibility of the research (or of the researchers).

We began the series with the analogy of how the Olympics determines medalists. Olympic officials use precise, objectively valid measurements to determine winners in such timed and distance events as running, leaping, and throwing. Conversely, a panel of supposedly credible judges subjectively determines winners in such events as figure skating and gymnastics. Such subjective judgments aren't based on who arrived first or jumped the highest, but rather on how well the winning athlete performed in the event. In either type of event, the medalist wins a legitimate Olympic medal. Winning a gold medal in the 1500-meter race is thus equal to winning a gold medal in a gymnastics event, even though objective validity determines one winner and subjective credibility the other.

Issues in Determining Validity and Credibility

Validity and credibility contribute to each other. For example, Olympic participants assume the credibility of the objective measurement devices used to determine winners. Similarly, subjective Olympic scoring systems can become credible only when judges employ a valid mathematical methodology to compute precise and consistent scores.
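
As a simple illustration of the kind of mathematical methodology such a scoring system might use, the short Python sketch below combines a panel of judges' marks by discarding the single highest and single lowest marks and averaging the rest (a trimmed mean). The function name panel_score and the specific rule are illustrative assumptions only, not the procedure of any actual Olympic event.

  def panel_score(judge_scores):
      """Combine individual judges' marks into one panel score.

      Discards the single highest and single lowest mark, then averages
      the rest (a simple trimmed mean) to limit the influence of any one
      biased judge. Illustrative rule only; real judged events use their
      own, more elaborate scoring formulas.
      """
      if len(judge_scores) < 3:
          raise ValueError("need at least three judges to trim high and low marks")
      trimmed = sorted(judge_scores)[1:-1]  # drop lowest and highest marks
      return sum(trimmed) / len(trimmed)

  # Example: seven judges score a routine on a 10-point scale.
  print(panel_score([9.2, 9.5, 9.4, 8.1, 9.6, 9.3, 9.9]))  # roughly 9.4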

However, in our broader culture, validity and credibility are often employed independently of one another. Credibility without genuine validity is evident in partisan media outlets such as FOX, which solicits credibility from a politically conservative audience, or MSNBC, which does the same for a politically progressive audience. Many Internet websites are similarly biased in their approach. Fact-checking organizations consistently challenge the validity of what these outlets espouse, yet viewership remains strong. Each outlet seeks credibility from within its subjectively biased audience, despite any questionable validity.

Validity without credibility occurs when researchers measure accurately, but with an intent or method that could be deemed subjective. For example, a researcher can ask suspect questions in a confusing manner, and then draw conclusions that are perfectly valid but lack convincing credibility. This is known as Experimenter’s Bias [1]. One example in recent history is the Climatic Research Unit email controversy [2]. Climate change skeptics unearthed emails among climate scientists who discussed self-proclaimed tricks they apparently used to “fix” tree-ring data to fit a foregone conclusion of climate change. (Scientists use the width of tree rings to indicate weather differences during successive years.) The tricks employed could represent statistically sound practices, but some critics impeached them because of their dubious credibility, including the casualness with which the scientists discussed them.

In both cases, credibility without validity and validity without credibility, the missing component fails to provide an adequate check and balance on the other. Either one, without the other, is merely a shortcut to truth, and the result is bias. So, why do people do it?

No one likes to be wrong. When we believe or say something that is demonstrably wrong, we are embarrassed or even humiliated by our misguided belief. The regret of being wrong can be hard to forget, perhaps haunting us for a lifetime. The pangs of rumination can thus be a powerful motivator to be more certain of our beliefs. Unfortunately, credibility backed by validity, and validity accepted as credible, are both difficult to achieve. Moreover, the decisions we make often don't have a black-and-white simplicity. A lot of gray area exists in which credibility and validity must both contribute to a thoughtfully weighed decision, and that requires time and effort. Realize that in the Olympics, subjective decisions must often be made very quickly.

Confirmation Bias [3] has been integral to human life since early tribalism emerged. We tend to view a group that we belong to (commonly called a tribe) as credibly correct when compared to the views of other tribes. This phenomenon is a systematic error in inductive reasoning, but it feels good anyway. We display this bias when we selectively use information. The bias tends to be stronger when issues are emotionally charged or our beliefs are deeply entrenched. We also tend to assume that ambiguous evidence supports our existing beliefs.

Likewise, a phenomenon known as the Frequency Illusion [4] has been observed as a subconscious means of reinforcing opinion by seeming to seek out evidence of validity wherever we look. When we're trying to solve a perplexing issue, we can be particularly attuned to noticing every related encounter. For example, a couple who are debating whether to start a family may suddenly seem to see babies and young families everywhere they look. Those babies and families were always there, but the couple simply hadn't noticed them before they considered having a family of their own.

Several factors such as these can lead to error in the search for truth. For example, our attitudes may become polarized when others identify errors in the evidence we use. We may persist in our beliefs even when others explain their untenable nature. We may rely more on the value of earlier beliefs than on what emerges from later investigation. We may perceive a spurious connection between two unrelated events or situations.

When events occur that suggest that our beliefs about our tribe have been wrong, we may simply join another church or a different political party that agrees with us (or perhaps quit our job or seek a divorce).


A Solution from Epidemiology

It is incumbent upon research science to develop valid systems that will prove credible. The Bradford Hill Criteria [5] provide a useful checklist of the minimal conditions that signal a causal relationship between an incidence and a possible consequence in epidemiology, an area in which scientific validity is essential. Epidemiology thus always questions every supposition, method, and technique. Researchers tirelessly defend every fact and figure because fund administrators and peer scientists require unimpeachable conclusions—as do the people who might otherwise become infected.

The Bradford Hill Criteria

The Bradford Hill criteria, otherwise known as Hill's criteria for causation and paraphrased below, are a group of minimal conditions necessary to provide adequate evidence of a causal relationship between an incidence and a possible consequence. They were established by the English epidemiologist Sir Austin Bradford Hill (1897–1991) in 1965. A brief illustrative sketch of applying the criteria as a checklist follows the list.
  1. Strength: A small association does not mean that a causal effect does not exist. However, the larger the association, the more likely that it is causal.

  2. Consistency: Consistent findings observed by different persons in different places with different samples strengthens the likelihood of an effect.

  3. Specificity: Causation is likely if a very specific population at a specific site suffers from a specific disease with no other likely explanation. The more specific the association between a factor and an effect, the greater the probability of a causal relationship.

  4. Temporality: The effect has to occur after the cause (and if an expected delay occurs between the cause and expected effect, the effect must occur after that delay).

  5. Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.

  6. Plausibility: A plausible mechanism between cause and effect is helpful, although knowledge of such mechanisms is limited by current scientific understanding.

  7. Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations.

  8. Experiment: It is occasionally possible to appeal to experimental evidence.

  9. Analogy: The effect of similar factors may be considered.
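
As a rough illustration of how these criteria can be treated as a checklist, the short Python sketch below records a yes/no judgment for each criterion and reports how many are satisfied. The criterion names mirror the list above; the function summarize and the example judgments are hypothetical, and the simple tally is only a summary aid, since Hill intended the criteria as guides to reasoning rather than a scoring formula.

  # A minimal, hypothetical sketch of applying the Bradford Hill criteria
  # as a checklist. The judgments below are invented for illustration.

  BRADFORD_HILL_CRITERIA = [
      "strength", "consistency", "specificity", "temporality",
      "biological gradient", "plausibility", "coherence",
      "experiment", "analogy",
  ]

  def summarize(judgments):
      """Return (list of criteria judged met, total criteria count)."""
      met = [name for name in BRADFORD_HILL_CRITERIA if judgments.get(name)]
      return met, len(BRADFORD_HILL_CRITERIA)

  # Hypothetical assessment of a suspected exposure/disease association.
  example = {
      "strength": True, "consistency": True, "specificity": False,
      "temporality": True, "biological gradient": True,
      "plausibility": True, "coherence": True,
      "experiment": False, "analogy": True,
  }

  met, total = summarize(example)
  print(f"{len(met)} of {total} criteria judged satisfied: {', '.join(met)}")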

Final Thoughts

Daniel Kahneman's Thinking, Fast and Slow (2011) [6] is considered to be one of this decade's best books on issues related to valid/credible thought. The IAE Newsletter published a synthesis of the book: http://i-a-e.org/newsletters/IAE-Newsletter-2012-89.html.

Articles on other issues in this series, such as assessing the validity of scientific and mathematical research, the value of poetry, the legitimacy of advocacy groups, and the credibility of religious dogma, suggest that appropriately assessing validity and credibility will probably remain a complex (and contentious) societal issue.

Here is an important suggestion: Those who intend to prove or disprove a scientific theory, method, or supposition should demonstrate a level of diligence similar to that invested in developing the material they dispute.

References

[1] Experimenter’s Bias, http://en.wikipedia.org/wiki/Experimenter's_bias

[2] Climatic Research Unit email controversy, http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy

[3] Confirmation Bias, http://en.wikipedia.org/wiki/Confirmation_bias

[4] Frequency Illusion, aka “Baader-Meinhof Phenomenon,” http://en.wikipedia.org/wiki/List_of_cognitive_biases#Frequency_illusion

[5] Bradford Hill Criteria, http://en.wikipedia.org/wiki/Bradford_Hill_criteria

[6] Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Author

Peter Sylwester is a Senior Software Engineer with The Institute for Disease Modeling (IDM), a collective of epidemiologists, applied math scientists, and software developers committed to improving and saving lives in developing countries through the use of quantitative analysis. Currently, IDM is working on disease transmission dynamics for malaria, polio, tuberculosis, and HIV.

Contact information: Peter.Sylwester@gmail.com.

Robert Sylwester is an Emeritus Professor of Education at the University of Oregon, and a regular contributor to the IAE Newsletter. His most recent books are A Child’s Brain: The Need for Nurture (2010, Corwin Press) and The Adolescent Brain: Reaching for Autonomy (2007, Corwin Press). He also helped to write/edit five books for the IAE Newsletter. He wrote a monthly column for the Internet journal Brain Connection during its entire 2000-2009 run.

Contact information: bobsyl@uoregon.edu.


Reader Comments



Readers may send comments via email directly to moursund@uoregon.edu and bobsyl@uoregon.edu.

About Information Age Education, Inc.

Information Age Education is a non-profit organization dedicated to improving education for learners of all ages throughout the world. Current IAE activities and free materials include the IAE-pedia at http://iae-pedia.org, a Website containing free books and articles at http://i-a-e.org/, a Blog at http://i-a-e.org/iae-blog.html, and the free newsletter you are now reading. See all back issues of the Blog at http://iae-pedia.org/IAE_Blog and all back issues of the Newsletter at http://i-a-e.org/iae-newsletter.html.