Is Man a Rational Animal?

by Stephen Stich
I. Introduction: Descriptive and Normative Approaches to the Study of Human Reasoning
In studying human reasoning, there are two very different approaches we can adopt. One approach is descriptive or empirical. Those who take this approach try to characterize how people actually go about the business of reasoning and to discover the psychological mechanisms and processes that underlie the patterns of reasoning they observe. For the most part the descriptive approach is pursued by psychologists, though anthropologists have also done some very interesting work aimed at determining whether and to what extent people in different cultures reason differently. The other approach to the study of reasoning is normative. Those who adopt this approach are not concerned with how people actually reason but rather with how they should reason. Their goal is to discover rules or principles that specify what it is to reason correctly or rationally. Since antiquity, the normative approach to reasoning has been pursued by logicians and philosophers, and more recently they have been joined by statisticians, probability theorists and decision theorists.
The goals and methods of these two approaches to the study of reasoning are quite different. However, some of the most interesting and hotly debated claims about human reasoning concern the extent to which one or another group of people reason rationally. And these claims are intrinsically hybrid or interdisciplinary. In order to address them we have to know both what it is to reason rationally and how the group of people in question actually do reason. By far the most famous claim of this sort is Aristotle’s contention that man is a rational animal. Rationality, according to Aristotle, is an essential property of humankind; it is what distinguishes man from beast. A central goal of this essay is to reexamine Aristotle’s thesis in the light of recent empirical studies of human reasoning. Was Aristotle right? Are humans really rational animals?
In order to address the question seriously we will first have to provide a more precise interpretation of Aristotle’s thesis. It is obvious that humans aren’t always rational. When people are drunk or exhausted or in the grip of uncontrollable rage they often reason very poorly indeed. And, of course, Aristotle knew this. When he said that humans are rational animals he surely never intended to deny that people can and sometimes do reason irrationally. But then what did he mean? In Section II we’ll see how we can begin to provide a quite precise and intriguing interpretation of Aristotle’s thesis by borrowing an idea from contemporary cognitive science -- the idea that people have underlying “competences” in various domains, though these competences are not always reflected in people’s actual behavior. One attractive interpretation of Aristotle’s thesis is that normal humans have a rational competence in the domain of reasoning.
To explain the notion of a rational competence, we’ll need to do more than explain the notion of a competence, however; we’ll also have to say what it is for a competence to be rational. In Section III we will look at one elegant and influential attempt to explain what it is for a pattern of reasoning to be rational -- an approach that appeals to the notion of reflective equilibrium.
Having interpreted Aristotle’s thesis as the claim that normal people have a rational reasoning competence, our focus will shift, in Section IV, to the descriptive study of reasoning. In that section we will survey some fascinating and very disturbing empirical findings which seem to suggest that Aristotle was wrong because most normal people do not have the competence to reason rationally about many sorts of questions! Those findings not only challenge Aristotle’s optimistic thesis about human rationality, they also seem to threaten the reflective equilibrium account of rationality. Section V will begin by explaining why the empirical findings pose a problem for reflective equilibrium accounts, and go on to explore some possible responses.
In Section VI, we’ll return to the empirical literature on human reasoning, this time focusing on some very recent studies by evolutionary psychologists who begin from the assumption that components or “organs” in the mind were shaped by natural selection, just as components or organs in (other) parts of the body were. This perspective leads them to expect that our minds will do quite well when reasoning about problems of the sort that would have been important in the environment in which our species evolved, and they have produced some very interesting evidence indicating that this is indeed the case.
Finally, in Section VII, we’ll ask what these recent studies tell us about Aristotle’s claim. Do they show, as some people suggest, that Aristotle was right after all? The answer, I’ll argue, is that neither Aristotle nor his opponents are vindicated by the empirical research. Rather, what these studies show is that the questions we have been asking about human rationality -- questions like “Is man a rational animal?” – are much too simplistic. If we want plausible answers supported by solid scientific evidence, then we are going to have to learn to ask better and more sophisticated questions.
II. Competence and Performance
Everyone agrees that people are sometimes very irrational. So, for example, people who are drunk or distraught or under the influence of drugs sometimes reason and make decisions in ways that would not be sanctioned by any theory about the principles governing good reasoning and decision making that anyone would take seriously. Since this is such an obvious and uncontroversial fact, how could anyone maintain that, nonetheless, humans are rational animals? What could they possibly mean? One very attractive answer can be given by exploiting the distinction between competence and performance.
The competence / performance distinction was first introduced into cognitive science by Chomsky, who used it in his account of the explanatory strategy of theories in linguistics. (Chomsky, 1965, Ch. 1; 1975; 1980) In testing linguistic theories, an important source of data is the “intuitions” or unreflective judgments that speakers of a language make about the grammaticality of sentences, and about various linguistic properties (e.g. Is the sentence ambiguous? Is this phrase the subject of that verb?). To explain these intuitions, and also to explain how speakers go about producing and understanding sentences of their language in ordinary speech, Chomsky proposed what has become one of the most important hypotheses about the mind in the history of cognitive science. What this hypothesis claims is that a speaker of a language has an internally represented “generative grammar” of that language. A “generative grammar” is an integrated set of rules and principles -- we might think of it as analogous to a system of axioms -- that entails an infinite number of claims about the language. For each of the sentences in the speaker’s language, the speaker’s internally represented grammar entails that it is grammatical; for each ambiguous sentence in the speaker’s language, the grammar entails that it is ambiguous, etc. When speakers make the judgments that we call “linguistic intuitions,” the information in the internally represented grammar is typically accessed and relied upon, though neither the process nor the internally represented grammar itself is accessible to conscious introspection. Since the internally represented grammar plays a central role in the production of linguistic intuitions, those intuitions can serve as an important source of data for linguists trying to specify what the rules and principles of the internally represented grammar are.
A speaker’s intuitions are not, however, an infallible source of information about the grammar of the speaker’s language, because the internally represented rules of the grammar cannot produce linguistic intuitions by themselves. The production of intuitions is a complex process in which the internally represented grammar must interact with a variety of other cognitive mechanisms including those responsible for perception, motivation, attention, short term memory and perhaps a host of others. In certain circumstances, the activity of any one of these mechanisms may result in a person offering a judgment about a sentence which does not accord with what the grammar actually entails about that sentence. The attention mechanism offers a clear example of this phenomenon. It is very likely the case that the grammar internally represented in typical English speakers entails that an endless number of sentences of the form:
A said that B thought that C believed that D suspects that E thought … that p.
are grammatical in the speaker’s language. However, if you were asked to judge the grammaticality of a sentence containing a few hundred of these “that-clauses,” or perhaps even a few dozen, there is a good chance that your judgments would not reflect what your grammar entails, since in cases like this attention easily wanders. Short term memory provides a more interesting example of the way in which a grammatical judgment may fail to reflect the information actually contained in the grammar. There is considerable evidence indicating that the short term memory mechanism has difficulty handling center embedded structures. Thus it may well be the case that your internally represented grammar entails that the following sentence is grammatical:
What what what he wanted cost would buy in Germany was amazing.
though most people’s intuitions suggest, indeed shout, that it is not.
Now in the jargon that Chomsky introduced, the rules and principles of a speaker’s internalized generative grammar constitute the speaker’s linguistic competence; the judgments a speaker makes about sentences, along with the sentences the speaker actually produces, are part of the speaker’s linguistic performance. Moreover, as we have just seen, some of the sentences a speaker produces and some of the judgments the speaker makes about sentences will not accurately reflect the speaker’s linguistic competence. In these cases, the speaker is making a performance error.
There are some obvious analogies between the phenomena studied in linguistics and those studied by cognitive scientists interested in reasoning. In both cases people are capable of spontaneously and unconsciously processing an open ended class of “inputs” -- people are able to understand endlessly many sentences, and to draw inferences from endlessly many premises. In light of this analogy, it is plausible to explore the idea that the mechanism underlying our ability to reason is similar to the mechanism underlying our capacity to process language. And if Chomsky is right about language, then the analogous hypothesis about reasoning would claim that people have an internally represented integrated set of rules and principles of reasoning -- a “psycho-logic” as it has been called -- which is usually accessed and relied upon when people draw inferences or make judgments about them. As in the case of language, we would expect that neither the processes involved nor the principles of the internally represented psycho-logic are readily accessible to consciousness. We should also expect that people’s inferences and judgments would not be an infallible guide to what the underlying psycho-logic actually entails about the validity or plausibility of a given inference. For here, as in the case of language, the internally represented rules and principles must interact with lots of other cognitive mechanisms -- including attention, motivation, short term memory and many others. The activity of these mechanisms can give rise to performance errors -- inferences or judgments that do not reflect the psycho-logic which constitutes a person’s reasoning competence.
We are now, finally, in a position to explain an interpretation of Aristotle’s thesis on which it is compatible with the unquestionable fact that, sometimes at least, people reason very irrationally. What the thesis claims is that normal people have a rational reasoning competence. The rules or principles of reasoning that make up their psycho-logic are rational or normatively appropriate; they specify how to reason correctly. According to this interpretation, when people make errors in reasoning or when they reason irrationally the errors are performance errors which may be due to fatigue or inattention or confusion or a host of other factors. But however common they may be, these performance errors do not reflect the rules of reasoning that constitute a normal person’s reasoning competence. To say that man is a rational animal, on this account, is to say that normal people’s reasoning competence is rational even though their reasoning performance sometimes is not.

III. What Is Rationality? A Reflective Equilibrium Account
What is it that justifies a set of rules or principles for reasoning? What makes reasoning rules rational? About forty years ago, in one of the most influential passages of twentieth century analytic philosophy, Nelson Goodman suggested elegant answers to these questions. In that passage, Goodman described a process of bringing judgments about particular inferences and about general principles of reasoning into accord with one another. In the accord thus achieved, Goodman maintained, lies all the justification needed, and all the justification possible, for the inferential principles that emerge. Other writers, most notably John Rawls, have adopted a modified version of Goodman’s process as a procedure for determining when moral principles are correct. To Rawls, too, we owe the term reflective equilibrium, which has been widely used to characterize a system of principles and judgments that have been brought into coherence with one another in the way that Goodman describes.
It is hard to imagine the notion of reflective equilibrium explained more eloquently than Goodman himself explains it. So let me quote what he says at some length.
How do we justify a deduction? Plainly by showing that it conforms to the general rules of deductive inference. An argument that so conforms is justified or valid, even if its conclusion happens to be false. An argument that violates a rule is fallacious even if its conclusion happens to be true…. Analogously, the basic task in justifying an inductive inference is to show that it conforms to the general rules of induction.

Yet, of course, the rules themselves must eventually be justified. The validity of a deduction depends not upon conformity to any purely arbitrary rules we may contrive, but upon conformity to valid rules. When we speak of the rules of inference we mean the valid rules – or better, some valid rules, since there may be alternative sets of equally valid rules. But how is the validity of rules to be determined? Here … we encounter philosophers who insist that these rules follow from self-evident axioms, and others who try to show that the rules are grounded in the very nature of the human mind. I think the answer lies much nearer the surface. Principles of deductive inference are justified by their conformity with accepted deductive practice. Their validity depends upon accordance with the particular deductive inferences that we actually make and sanction. If a rule yields inacceptable inferences, we drop it as invalid. Justification of general rules thus derives from judgments rejecting or accepting particular deductive inferences.

This looks flagrantly circular. I have said that deductive inferences are justified by their conformity to valid general rules, and that general rules are justified by their conformity to valid inferences. But this circle is a virtuous one. The point is that rules and particular inferences alike are justified by being brought into agreement with each other. A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule we are unwilling to amend. The process of justification is the delicate one of making mutual adjustments between rules and accepted inferences; and in the agreement achieved lies the only justification needed for either.

All this applies equally well to induction. An inductive inference, too, is justified by conformity to general rules, and a general rule by conformity to accepted inferences. (1965, pp. 66-67)

On Goodman’s account, at least as I propose to read him, passing the reflective equilibrium test is (as philosophers sometimes say) constitutive of the justification or validity of rules of inference. For a system of inferential rules and the inferences that accord with them to be rational just is for them to be in reflective equilibrium. But what is the status of this claim? Why is passing the reflective equilibrium test constitutive of the justification or rationality of inferential rules? The answer, I think, is that Goodman takes it to be a conceptual truth -- it follows from the meaning of terms like ‘justified’ or ‘rational’ or from the analysis of the concept of rationality. Arguably that concept is a bit (or more than a bit) vague, and the reflective equilibrium analysis makes no attempt to capture this vagueness. Rather, it tries to tidy up the vagueness and “precise-ify” the concept. So perhaps Goodman is best read as maintaining that the reflective equilibrium account captures something like our ordinary concept of rationality, and that it is the best way of making that concept precise.
We now have all the pieces in place to interpret Aristotle’s thesis. “Man is a rational animal,” on the interpretation being proposed, means that normal humans have a reasoning competence -- a mentally represented set of rules or principles for reasoning -- and that those rules are rational -- they would pass the reflective equilibrium test. Let’s now ask how plausible this thesis is. To do that we’ll have to turn our attention to the empirical study of human reasoning.

IV. Some Disquieting Evidence About How Humans Reason
We will start our exploration of the psychological evidence about human reasoning by focusing on studies that some authors have thought have “bleak implications” about the rationality of ordinary people. All of these studies involve normal subjects (often university students) who are neither exhausted nor emotionally stressed. Nonetheless, many of them do very poorly on the reasoning tasks that they are asked to solve.
The Selection Task
In 1966, Peter Wason reported the first experiments using a cluster of reasoning problems that came to be called the Selection Task. A recent textbook on reasoning has described that task as “the most intensively researched single problem in the history of the psychology of reasoning.” (Evans, Newstead & Byrne, 1993, p. 99) A typical example of a Selection Task problem looks like this:

Here are four cards. Each of them has a letter on one side and a number on the other side. Two of these cards are shown with the letter side up, and two with the number side up.

E C 5 4

Indicate which of these cards you have to turn over in order to determine whether the following claim is true:

If a card has a vowel on one side, then it has an odd number on the other side.

What Wason and numerous other investigators have found is that ordinary people typically do very poorly on questions like this. Most subjects respond, correctly, that the E card must be turned over, but many also judge that the 5 card must be turned over, despite the fact that the 5 card could not falsify the claim no matter what is on the other side. Also, a large majority of subjects judge that the 4 card need not be turned over, though without turning it over there is no way of knowing whether it has a vowel on the other side. And, of course, if it does have a vowel on the other side then the claim is not true. It is not the case that subjects do poorly on all selection task problems, however. A wide range of variations on the basic pattern have been tried, and on some versions of the problem a much larger percentage of subjects answer correctly. These results form a bewildering pattern, since there is no obvious feature or cluster of features that separates versions on which subjects do well from those on which they do poorly. As we will see in Section VI, some evolutionary psychologists have argued that these results can be explained if we focus on the sorts of mental mechanisms that would have been crucial for reasoning about social exchange (or “reciprocal altruism”) in the environment of our hominid forebears. The versions of the selection task we’re good at, these theorists maintain, are just the ones that those mechanisms would have been designed to handle.
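The logic of the correct answer can be sketched in a few lines of code. This is an illustration of the underlying logic, not part of Wason’s experiments: the claim “if a card has a vowel on one side, then it has an odd number on the other side” can be falsified only by a card with a vowel on one face and an even number on the other, so a card is worth turning over exactly when it might turn out to be such a card.

```python
# Illustrative sketch (not from Wason's studies): which of the four visible
# faces could, once turned over, reveal a counterexample to the claim
# "if vowel on one side, then odd number on the other"?

def could_falsify(visible_face: str) -> bool:
    """Return True if turning this card over might reveal a counterexample."""
    if visible_face.isalpha():
        # A letter card matters only if it shows a vowel; the hidden
        # number might then be even.
        return visible_face.lower() in "aeiou"
    # A number card matters only if it shows an even number; the hidden
    # letter might then be a vowel.
    return int(visible_face) % 2 == 0

cards = ["E", "C", "5", "4"]
print([card for card in cards if could_falsify(card)])  # → ['E', '4']
```

The 5 card drops out because an odd number is consistent with the claim whatever letter is hidden, while the 4 card stays in for exactly the reason most subjects miss.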

The Conjunction Fallacy
Ronald Reagan was elected President of the United States in November 1980. The following month, Amos Tversky and Daniel Kahneman administered a questionnaire to 93 subjects who had had no formal training in statistics. The instructions on the questionnaire were as follows:
In this questionnaire you are asked to evaluate the probability of various events that may occur during 1981. Each problem includes four possible events. Your task is to rank order these events by probability, using 1 for the most probable event, 2 for the second, 3 for the third and 4 for the least probable event.
Here is one of the questions presented to the subjects:
Please rank order the following events by their probability of occurrence in 1981:
(a) Reagan will cut federal support to local government.

(b) Reagan will provide federal support for unwed mothers.

(c) Reagan will increase the defense budget by less than 5%.

(d) Reagan will provide federal support for unwed mothers and cut federal support to local governments.

The unsettling outcome was that 68% of the subjects rated (d) as more probable than (b), despite the fact that (d) could not happen unless (b) did (Tversky & Kahneman, 1982). In another experiment, which has since become quite famous, Tversky and Kahneman (1982) presented subjects with the following task:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable.
(a) Linda is a teacher in elementary school.

(b) Linda works in a bookstore and takes Yoga classes.

(c) Linda is active in the feminist movement.

(d) Linda is a psychiatric social worker.

(e) Linda is a member of the League of Women Voters.

(f) Linda is a bank teller.

(g) Linda is an insurance sales person.

(h) Linda is a bank teller and is active in the feminist movement.

In a group of naive subjects with no background in probability and statistics, 89% judged that statement (h) was more probable than statement (f). When the same question was presented to statistically sophisticated subjects -- graduate students in the decision science program of the Stanford Business School -- 85% made the same judgment! Results of this sort, in which subjects judge that a compound event or state of affairs is more probable than one of the components of the compound, have been found repeatedly since Kahneman and Tversky’s pioneering studies.
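The conjunction rule these subjects violate can be illustrated with a small simulation. The population and probabilities below are invented purely for illustration: however a population is put together, the people satisfying a conjunction are a subset of the people satisfying either conjunct, so the conjunction can never be the more probable event.

```python
import random

# Invented population for illustration only: each simulated person either
# is or is not a bank teller, and either is or is not active in the
# feminist movement. The particular probabilities are arbitrary.
random.seed(0)
population = [
    {"teller": random.random() < 0.1, "feminist": random.random() < 0.5}
    for _ in range(100_000)
]

n = len(population)
p_teller = sum(person["teller"] for person in population) / n
p_teller_and_feminist = (
    sum(person["teller"] and person["feminist"] for person in population) / n
)

# The feminist bank tellers are a subset of the bank tellers, so this
# inequality holds no matter how the population is generated.
assert p_teller_and_feminist <= p_teller
print(f"P(f) = {p_teller:.3f}, P(h) = {p_teller_and_feminist:.3f}")
```

Whatever one believes about Linda, assigning (h) a higher probability than (f) amounts to denying this subset relation.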

Base-Rate Neglect

On the familiar Bayesian account, the probability of an hypothesis on a given body of evidence depends, in part, on the prior probability of the hypothesis. However, in a series of elegant experiments, Kahneman and Tversky (1973) showed that subjects often seriously undervalue the importance of prior probabilities. One of these experiments presented half of the subjects with the following “cover story.”

A panel of psychologists have interviewed and administered personality tests to 30 engineers and 70 lawyers, all successful in their respective fields. On the basis of this information, thumbnail descriptions of the 30 engineers and 70 lawyers have been written. You will find on your forms five descriptions, chosen at random from the 100 available descriptions. For each description, please indicate your probability that the person described is an engineer, on a scale from 0 to 100.
The other half of the subjects were presented with the same text, except the “base-rates” were reversed. They were told that the personality tests had been administered to 70 engineers and 30 lawyers. Some of the descriptions that were provided were designed to be compatible with the subjects’ stereotypes of engineers, though not with their stereotypes of lawyers. Others were designed to fit the lawyer stereotype, but not the engineer stereotype. And one was intended to be quite neutral, giving subjects no information at all that would be of use in making their decision. Here are two examples, the first intended to sound like an engineer, the second intended to sound neutral:
Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies which include home carpentry, sailing, and mathematical puzzles.
Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.
As expected, subjects in both groups thought that the probability that Jack is an engineer is quite high. Moreover, in what seems to be a clear violation of Bayesian principles, the difference in cover stories between the two groups of subjects had almost no effect at all. The neglect of base-rate information was even more striking in the case of Dick. That description was constructed to be totally uninformative with regard to Dick’s profession. Thus the only useful information that subjects had was the base-rate information provided in the cover story. But that information was entirely ignored. The median probability estimate in both groups of subjects was 50%. Kahneman and Tversky’s subjects were not, however, completely insensitive to base-rate information. Following the five descriptions on their form, subjects found the following “null” description:
Suppose now that you are given no information whatsoever about an individual chosen at random from the sample.

The probability that this man is one of the 30 engineers [or, for the other group of subjects: one of the 70 engineers] in the sample of 100 is ____%.

In this case subjects relied entirely on the base-rate; the median estimate was 30% for the first group of subjects and 70% for the second. In their discussion of these experiments, Nisbett and Ross offer this interpretation.
The implication of this contrast between the “no information” and “totally nondiagnostic information” conditions seems clear. When no specific evidence about the target case is provided, prior probabilities are utilized appropriately; when worthless specific evidence is given, prior probabilities may be largely ignored, and people respond as if there were no basis for assuming differences in relative likelihoods. People’s grasp of the relevance of base-rate information must be very weak if they could be distracted from using it by exposure to useless target case information. (Nisbett & Ross, 1980, pp. 145-6)
Before leaving the topic of base-rate neglect, I want to offer one further example illustrating the way in which the phenomenon might well have serious practical consequences. Here is a problem that Casscells et al. (1978) presented to a group of faculty, staff and fourth-year students at Harvard Medical School.
If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs? ____%
Under the most plausible interpretation of the problem, the correct Bayesian answer is 2%. But only eighteen percent of the Harvard audience gave an answer close to 2%. Forty-five percent of this distinguished group completely ignored the base-rate information and said that the answer was 95%. (If, like most of the Harvard physicians, you don’t see why 2% is the right answer, read on. After you’ve read Section VI, the reason why this is the right answer will be a lot clearer.) It is a bit alarming, to put it mildly, that these same experimental subjects were diagnosing real patients and offering them advice on such questions as what treatments to seek and whether to have exploratory surgery.
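For readers who would like the bare arithmetic right away, it can be sketched as follows, assuming (as is standard for this problem, though the problem statement leaves it implicit) that the test detects every person who actually has the disease:

```python
# Sketch of the Bayesian arithmetic for the Casscells et al. problem,
# assuming a 100% true positive rate.
prevalence = 1 / 1000
false_positive_rate = 0.05

# Imagine 1000 people taking the test.
n = 1000
true_positives = n * prevalence                               # 1 diseased person tests positive
false_positives = n * (1 - prevalence) * false_positive_rate  # ~50 healthy people test positive

# Of the roughly 51 positive results, only 1 comes from a diseased person.
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_disease_given_positive:.1%}")  # → 2.0%
```

The 45% of respondents who answered “95%” attended only to the false positive rate and ignored the fact that, with a prevalence of 1/1000, nearly all positive results come from healthy people.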

Overconfidence

One of the most extensively investigated and most worrisome clusters of phenomena explored by psychologists interested in reasoning and judgment involves the degree of confidence that people have in their responses to factual questions -- questions like:
In each of the following pairs, which city has more inhabitants?
(a) Las Vegas (b) Miami

(a) Sydney (b) Melbourne

(a) Hyderabad (b) Islamabad

(a) Bonn (b) Heidelberg

In each of the following pairs, which historical event happened first?
(a) Signing of the Magna Carta (b) Birth of Mohammed

(a) Death of Napoleon (b) Louisiana Purchase

(a) Lincoln’s assassination (b) Birth of Queen Victoria
After each answer subjects are also asked:
How confident are you that your answer is correct?

50% 60% 70% 80% 90% 100%

In an experiment using relatively hard questions it is typical to find that for the cases in which subjects say they are 100% confident, only about 80% of their answers are correct; for cases in which they say that they are 90% confident, only about 70% of their answers are correct; and for cases in which they say that they are 80% confident, only about 60% of their answers are correct. This tendency toward overconfidence seems to be very robust. Warning subjects that people are often overconfident has no significant effect, nor does offering them money (or bottles of French champagne) as a reward for accuracy. Moreover, the phenomenon has been demonstrated in a wide variety of subject populations including undergraduates, graduate students, physicians and even CIA analysts. (For a survey of the literature see Lichtenstein, Fischhoff & Phillips, 1982.)
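The way such calibration studies are scored can be sketched as a simple computation. The answer data below are fabricated to mirror the pattern just described: group the answers by the subject’s stated confidence, then compare each stated level with the fraction of answers in that group that were actually correct.

```python
from collections import defaultdict

# Fabricated (confidence, correct?) pairs mirroring the reported pattern:
# 100% confidence -> 80% correct, 90% -> 70%, 80% -> 60%.
answers = (
    [(1.0, True)] * 8 + [(1.0, False)] * 2
    + [(0.9, True)] * 7 + [(0.9, False)] * 3
    + [(0.8, True)] * 6 + [(0.8, False)] * 4
)

by_confidence = defaultdict(list)
for stated, correct in answers:
    by_confidence[stated].append(correct)

for stated in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[stated]
    hit_rate = sum(outcomes) / len(outcomes)
    # Overconfidence: the stated level exceeds the actual hit rate.
    print(f"stated {stated:.0%}: actually correct {hit_rate:.0%}")
```

A perfectly calibrated subject would show a hit rate equal to the stated confidence in every group; the experimental finding is a systematic gap in one direction.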
The empirical findings we’ve been reviewing are only a small sample of the extensive empirical literature on shortcomings in human reasoning that has appeared over the last twenty-five years. (For much more detailed reviews of the literature in what is sometimes called the “heuristics and biases” tradition, see Nisbett and Ross, 1980; Baron, 1988; Piattelli-Palmarini, 1994; Dawes, 1988; and Sutherland, 1994. Kahneman, Slovic and Tversky, 1982, is a very useful anthology.) One apparently unavoidable consequence of this huge body of experimental findings is that people’s performance on a wide range of inferential problems leaves much to be desired. The answers given by many experimental subjects depart substantially and systematically from the answers that accord with a rational set of inferential principles. Of course it could be the case that all of these errors are merely performance errors and that they do not accurately reflect the principles of reasoning that make up the subjects’ underlying reasoning competence or “psycho-logic”. But many writers have offered a more disquieting interpretation of these experimental results. These authors claim that in experiments like those described in this Section, people are reasoning in ways that accurately reflect their psycho-logic. The subjects in these experiments do not use the right principles because they do not have access to them; they are not part of the subjects’ internally represented reasoning competence. What they have instead, on this view, is a collection of simpler principles or “heuristics” that may often get the right answer, though it is also the case that often they do not. So according to this bleak hypothesis, the subjects make mistakes because their psycho-logic is normatively defective; their internalized principles of reasoning are not rational principles.
Daniel Kahneman and Amos Tversky, who are widely recognized as the founders and leading researchers in the heuristics and biases tradition, make the point as follows:
In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors. (1973, p. 237)
Slovic, Fischhoff and Lichtenstein, important contributors to the experimental literature, are even more emphatic. “It appears,” they write, “that people lack the correct programs for many important judgmental tasks…. We have not had the opportunity to evolve an intellect capable of dealing conceptually with uncertainty.” (1976, p. 174) Stephen Jay Gould, a well-known evolutionary theorist and the author of many widely acclaimed books about science, concurs. After describing the “feminist bank teller” experiment, he asks: “Why do we consistently make this simple logical error?” His answer is: “Tversky and Kahneman argue, correctly I think, that our minds are not built (for whatever reason) to work by the rules of probability.” (1992, p. 469) If these authors are right, then Aristotle was wrong. Man is not a rational animal!
