In his article “Blinded By The Light”, Bill Oppenheim was on target, on several fronts, with his typically insightful, data-driven commentary. His observations begin by revealing the fundamental obstacle and limitation inherent in trying to make a case for the existence and nature of “true” nicks. As Bill implies, most hypotheses and generalizations about so-called “nicks” are typically formulated from meager, statistically inadequate sample sizes and therefore have questionable predictive value.
For sure, pedigrees are wonderful things, possess intrinsic beauty, and have the capacity to trigger our fondest memories. Furthermore, their historical significance provides a fascinating source for intellectual stimulation and aesthetic appreciation. And it makes common sense to study them and to think that quality racehorses will beget quality racehorses. Yet if our preoccupation with pedigrees is to yield any practical utility, its value would be found in our ability to analyze the historical detail and emerge with an improved probability of achieving a stakes performer from a particular mating. Toward that end, however, it is not descriptively useful to be told that a proposed mating is an A+ or a C or has an index of 71.93. Instead, we need probability statements that are anchored in substantive data. We need to be told that, based on X number of prior identical or closely similar matings, we have a Y percent probability of breeding a stakes horse.
The problem for most “nicking” paradigms is that the number of prior comparisons in most samples is nearly always either too small or too dissimilar to provide a meaningful (statistically significant) predictive correlation. Unfortunately, this doesn’t seem to stop us, in the face of uncertainty and great monetary expenditure, from grabbing on to them and seeking comfort in pointing to something that happened some place, some time. But because something happens a couple of times does not mean that it is likely to happen again. A handful of occurrences does not create a statistically viable pattern or a sound basis for statistically significant prediction. Thus, “nicks” derived from small samples are perhaps the most visible poster children for our widespread industry tendency to derive broad meaning from a small number of events.
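The point about small samples can be made concrete with a confidence interval. The sketch below is purely illustrative and uses hypothetical numbers (it is not drawn from any actual nicking data): it computes a 95% Wilson score interval for a binomial proportion, showing that two stakes horses from five matings tells us almost nothing, while the same 40% hit rate over 500 matings would be genuinely informative.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical "nick": 2 stakes horses from 5 matings looks like a 40% hit rate...
lo, hi = wilson_interval(2, 5)
print(f"n=5:   {lo:.2f} .. {hi:.2f}")   # interval spans roughly 0.12 to 0.77

# ...but the same 40% rate over 500 matings actually pins the probability down.
lo, hi = wilson_interval(200, 500)
print(f"n=500: {lo:.2f} .. {hi:.2f}")   # interval narrows to roughly 0.36 to 0.44
```

With five matings, the true stakes-horse probability could plausibly be anywhere from about one in ten to three in four, which is exactly why a handful of occurrences cannot anchor a prediction.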
In more formal terms, attempting to explain or predict an event based on a small number of occurrences defies the principles of logical composition. When we draw conclusions (and base our beliefs) on an insufficient sample size, we are particularly subject to the Fallacy of Composition, which arises when we infer that what is true of a part of something is also true of the whole. This fallacy, in turn, frequently spawns a closely related fallacy known, in colloquial terms, as the Hasty Generalization, whose many false faces and disguises impinge on us daily. Examples abound at every turn: someone says they have a really good foal by a new stallion, so the word goes out that the stallion is getting really good foals; a mare has a bad foal and she “needs to find another home”; a trainer has a horse that doesn’t work out and won’t buy another by the same sire, etc., etc.
Yet, fallacies of composition are not the only logical fallacies that work to reduce our sales scene toward its lowest common denominator. With regard to our irrational obsession with first-year sires, for example, we see a prime instance of the fallacy of Argumentum ad Ignorantiam. Simply put, this fallacy occurs when it is argued that something must be true simply because it hasn’t been proved false. How else can we explain the illogic of looking past proven stallions to line up for stallions who haven’t even produced a runner? And, of course, we continue to find comfort in numbers and conformity as we embrace the fallacy of Argumentum ad Numerum. Plain and simple, this fallacy asserts that the more people who support a belief, the more likely it is to be correct.
Extract by Rob Whiteley (Thoroughbred Daily News)