{{subpages}}
{|align="right" cellpadding="10" style="background-color:#FFFFCC; width:50%; border: 1px solid #aaa; margin:20px; font-size: 85%;"
|
"WHEN any scientific conclusion is supposed to be proved on experimental evidence, critics who still refuse to accept the conclusion are accustomed to take one of two lines of attack.  They may claim that the interpretation of the experiment is faulty, that the results reported are not in fact those which should have been expected had the conclusion drawn been justified, or that they might equally well have arisen had the conclusion drawn been false.  Such criticisms of interpretation are usually treated as falling within the domain of statistics.  They are often made by professed statisticians against the work of others whom they regard as ignorant of or incompetent in statistical technique and, since the interpretation of any considerable body of data is likely to involve computations, it is natural enough that questions involving the logical implications of the results of the arithmetical processes employed, should be relegated to the statistician.  At least I make no complaint of this convention.  The statistician cannot evade the responsibility for understanding the processes he applies or recommends.  My immediate point is that the questions involved can be dissociated from all that is strictly technical in the statistician's craft, and, when so detached, are questions only of the right use of human reasoning powers, with which all intelligent people, who hope to be intelligible, are equally concerned, and on which the statistician, as such, speaks with no special authority.  The statistician cannot excuse himself from the duty of getting his head clear on the principles of scientific inference, but equally no other thinking man can avoid a like obligation.
"WHEN any scientific conclusion is supposed to be proved on experimental evidence, critics who still refuse to accept the conclusion are accustomed to take one of two lines of attack.  They may claim that the interpretation of the experiment is faulty, that the results reported are not in fact those which should have been expected had the conclusion drawn been justified, or that they might equally well have arisen had the conclusion drawn been false.  Such criticisms of interpretation are usually treated as falling within the domain of statistics.  They are often made by professed statisticians against the work of others whom they regard as ignorant of or incompetent in statistical technique and, since the interpretation of any considerable body of data is likely to involve computations, it is natural enough that questions involving the logical implications of the results of the arithmetical processes employed, should be relegated to the statistician.  At least I make no complaint of this convention.  The statistician cannot evade the responsibility for understanding the processes he applies or recommends.  My immediate point is that the questions involved can be dissociated from all that is strictly technical in the statistician's craft, and, when so detached, are questions only of the right use of human reasoning powers, with which all intelligent people, who hope to be intelligible, are equally concerned, and on which the statistician, as such, speaks with no special authority.  The statistician cannot excuse himself from the duty of getting his head clear on the principles of scientific inference, but equally no other thinking man can avoid a like obligation.
Line 8: Line 8:
Now the essential point is that the two sorts of criticism I have mentioned come logically to the same thing, although they are usually delivered by different sorts of people and in very different language.  If the design of an experiment is faulty, any method of interpretation which makes it out to be decisive must be faulty too.  It is true that there are a great many experimental procedures which are well designed in that they may lead to decisive conclusions, but on other occasions may fall to do so; in such cases, if decisive conclusions are in fact drawn when they are unjustified, we may say that the fault is wholly in the interpretation, not in the design.  But the fault of interpretation, even in these cases, lies in overlooking the characteristic features of the design which lead to the result being sometimes inconclusive, or conclusive on some questions but not on all.  To understand correctly the one aspect of the problem is to understand the other.  Statistical procedure and experimental design are only two different aspects of the same whole, and that whole is the logical requirements of the complete process of adding to natural knowledge by experimentation."
Now the essential point is that the two sorts of criticism I have mentioned come logically to the same thing, although they are usually delivered by different sorts of people and in very different language.  If the design of an experiment is faulty, any method of interpretation which makes it out to be decisive must be faulty too.  It is true that there are a great many experimental procedures which are well designed in that they may lead to decisive conclusions, but on other occasions may fall to do so; in such cases, if decisive conclusions are in fact drawn when they are unjustified, we may say that the fault is wholly in the interpretation, not in the design.  But the fault of interpretation, even in these cases, lies in overlooking the characteristic features of the design which lead to the result being sometimes inconclusive, or conclusive on some questions but not on all.  To understand correctly the one aspect of the problem is to understand the other.  Statistical procedure and experimental design are only two different aspects of the same whole, and that whole is the logical requirements of the complete process of adding to natural knowledge by experimentation."
|}
The monograph '''The Design of Experiments''' was written by [[R.A. Fisher]] (1890-1962) in 1935, aimed at "illustrating the principles of successful experiments".<ref>Fisher RA (1935) ''The Design of Experiments''. Oliver and Boyd, Edinburgh</ref> Fisher was one of the leading scientists of the 20th century, and made major contributions to [[Statistics]], [[Evolutionary Biology]] and [[Genetics]]. According to Anders Hald, writing in ''A History of Mathematical Statistics'' (1998), "Fisher was a genius who almost single-handedly created the foundations for modern statistical science."<ref>[http://www.economics.soton.ac.uk/staff/aldrich/fisherguide/rafreader.htm A Guide to R. A. Fisher]</ref>


Fisher made statistics an integral part of the [[Scientific method]]:<ref>[http://www-groups.dcs.st-and.ac.uk/~history/Obits/Fisher.html RA Fisher] Obituary in ''The Times''</ref>


<blockquote>
"To call in the statistician after the experiment is done may be no more than asking him to perform a postmortem examination: he may be able to say what the experiment died of." R.A. Fisher, at Indian Statistical Congress, Sankhya, ca 1938. </blockquote>
"To call in the statistician after the experiment is done may be no more than asking him to perform a postmortem examination: he may be able to say what the experiment died of."  
<br><br>
(R.A. Fisher, at Indian Statistical Congress, Sankhya, ca 1938.)</blockquote>




''The Design of Experiments'' and his earlier ''Statistical Methods for Research Workers'' (1925) established formal methods for rigorously evaluating the outcomes of controlled experiments.
''The Design of Experiments'' and his earlier ''Statistical Methods for Research Workers'' (1925) established formal methods for rigorously evaluating the outcomes of controlled experiments.
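The flavour of these methods can be seen in the book's famous 'lady tasting tea' experiment, in which a subject claims to be able to tell whether the milk or the tea was added to a cup first. As a rough illustrative sketch (in Python; the book itself, of course, works the calculation by hand), the following computes the chance distribution of correct identifications under Fisher's eight-cup design:

<pre>
from math import comb

# Fisher's design: 8 cups, 4 with milk poured first and 4 with tea poured
# first, presented in random order; the subject must pick out the 4
# milk-first cups.
total_cups, milk_first = 8, 4

# Under the null hypothesis of no discriminating ability, every selection
# of 4 cups is equally likely: there are C(8,4) = 70 of them.
n_selections = comb(total_cups, milk_first)

# P(exactly k correct) is hypergeometric: choose k of the 4 milk-first
# cups and 4-k of the 4 tea-first cups.
for k in range(milk_first + 1):
    p = (comb(milk_first, k)
         * comb(total_cups - milk_first, milk_first - k)) / n_selections
    print(f"P({k} correct) = {p:.4f}")
</pre>

Only a perfect score is rare enough under chance alone (1 in 70, or about 0.014) to be taken as significant evidence of discrimination; the book stresses that it is the physical randomisation of the cup order that justifies this calculation.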


Fisher was also a writer of great elegance and wit: the extract on the right, on 'The Grounds on which Evidence is Disputed', is from ''The Design of Experiments''.


==References==
<references/>
