Aristotle characterized humans as rational animals(1). He believed that humans’ ability to think deliberately, rather than react instinctively, defined us as a species different from even those animals most like us.
In the millennia since, we have learned that other animals are more, and humans less, rational than Aristotle seemed to think. Other differentiators such as language and tool use have also turned out not to provide as clear a difference as once thought. Still, humans seem to take rational thought to a level that other animals don’t, at least so far as we know. Given this, understanding human rationality is an important step in understanding human nature.
It’s important to distinguish between idealized rationality and human rationality. Idealized rationality is something like what a perfect reasoner would do: an agent who always follows the best logic with no biases or errors. Spock and Data from the Star Trek universe exemplify idealized rationality. But famously, humans don’t reason like that. We bring bias to the table. We make unwarranted assumptions, commit fallacies, and jump to conclusions. We may aspire to be perfect reasoners, but none of us actually are or ever have been. Perfect rationality is not part of human nature.
To understand human nature, we need to approach not an idealized view of rationality, but reasoning as humans actually perform it. A good starting point for this is Daniel Kahneman’s book Thinking, Fast and Slow.
Kahneman proposes that human decision-making is best thought of as the product of two separate systems. System 1 is fast, effortless, and mostly subconscious. It is the system you use to answer questions such as “How much is 3 + 7?”, to decide whether you like someone when you first meet them, or to choose whether to brake or accelerate when the traffic light turns yellow. System 1 is automatic. We don’t typically choose to use it; it just happens. System 2, in contrast, is slower, not automatic, and requires conscious effort. It is the system you use to answer questions such as “How much is 84 ✕ 12?”, to decide whether an idea is worth bringing up in a business meeting, to carefully back out of a tight parking space, or to wonder whether you are now using System 1 or System 2.
To the extent that there is something unique about human rationality, we are likely to find it in System 2. All animals must make decisions. Decisions that can be made quickly based on instinct or experience are typical of System 1, and such decisions make up a large part of the decisions we know animals face. A mouse, seeing a cat, must choose how to react, but it is not clear that the choice involves deliberation of the sort we see in System 2. However, things become less clear if we place the mouse in a maze with cheese at the other end. Is the mouse’s journey through the maze driven by reacting to what it sees in the moment, or is there a deeper recognition of the task and an attempt to reason through it? To be honest, we don’t know, and the mouse can’t tell us. However, it seems reasonable to think that no mouse has ever attempted to construct a general theory of how to solve mazes. The difference between a mouse’s rationality and ours thus lies somewhere within System 2, the slow, effortful, and deliberate mode of thinking.
Our exploration of human nature, thus, requires a nuanced understanding of our specific type of rationality—rationality that is far from perfect, often biased, but that aims toward an ideal. It requires understanding how we share rationality to varying degrees with other animals, but also how our rationality diverges from theirs.
(1) There is some dispute about the degree to which Aristotle held this view, as he does not explicitly use the term “rational animal”. However, although the term comes largely from the Scholastics, Aristotle does take the view that humans have a rational principle that distinguishes them from animals.
Thank you for your post, Dr. Hardy. I am curious about the ways in which the human form of rationality bias can be characterized. One example I can think of is our ability to rationalize actions so that whatever went wrong is someone else’s fault. Another is to justify some illegal action by appealing to “the ends justify the means”. If you know of a list of common human rationalization strategies, I would be interested to know the source.
I’m not aware of a serious attempt to list or categorize types of rationalizations in the same way that one often sees such lists for biases or fallacies. There is some discussion of it within business ethics, but that seems to be fairly restricted.
I will say that lists of biases and fallacies, while useful, do have some problems. There isn’t a lot of agreement on how to group these, nor on the best level of detail for making distinctions. For example, Ad Hominem is a commonly listed fallacy. However, some lists break it into several different types while others leave it as a single type. There is inconsistency among lists about what the subtypes are, or even whether they really are subtypes. Some places list Tu Quoque as a subtype of Ad Hominem, while others consider it a separate fallacy. The same kinds of issues appear in lists of biases. Furthermore, even when it’s clear that someone has engaged in biased or fallacious reasoning, it’s not always possible to authoritatively assign that reasoning to a single category. Often, the reasoning can be interpreted in different ways that would yield different assignments.
Still, lists are useful for getting an appreciation of what to look out for; it’s just best not to take them as definitive or exhaustive.
Some searching on Google yielded a number of articles with lists of rationalizations in them, but I didn’t find a lot of commonality between them. This article has a good discussion and a list of six types of rationalizations. It might be a good place to start your investigation.
I think it’s very important to distinguish between logic and rationality. Logic builds on accepted premises which may or may not be rational. For example, a person who believes that a particular race is inferior might logically decide to colonize people of that race “for their own good.” In fact, that’s exactly what was done for hundreds of years — by very logical people. Spock and Data are logical, but if you watch Star Trek you’ll see how often their logic leads them down inappropriate paths. In some cases they behave heartlessly; in other cases, they make decisions that are logical but not useful.
Rationality, however, builds to a greater degree on ethics, experience, and common sense. Thus, for example, a rational person of the 18th century might, after living in a colonized area, notice that the logical choice to colonize is not rational because it leads to problems ranging from illiteracy to revolution.
@lisa-jo-rudy The trick here comes in spelling out what you would mean by “rationality” that doesn’t presuppose some particular view of ethics. There is no widely agreed-upon theory of ethics to use as a starting point for a definition. The closest I can think of is a general view that ethics should be agnostic with respect to point of view, i.e. that ethical rules are entirely abstract and apply to everyone equally. For example, my duty not to harm others is taken by both Utilitarianism and Deontology to apply equally to everyone: my duty not to harm my child is the same as my duty not to harm an unknown person on the other side of the world. However, some ethical theories, e.g. Care ethics, deny this.
Generally, rationality is roughly the same as logic though vaguer and broader. It involves making plausible inferences from the information we have, including inferences about the adequacy of our information and how we might seek more. It encompasses making our best guesses in cases where we have to make decisions with inadequate information. Logic is generally concerned solely with the move from premises to a conclusion and not with the adequacy of the premises themselves.