“If you want to make good decisions or get good advice about them, don’t pay too much attention to your feelings…” (Bennett, 2015).
This quote comes from a blog by a psychiatrist and, in my opinion, reflects a fairly prevalent viewpoint regarding the usefulness of emotions. I find this attitude quite surprising, especially from someone in the mental health field. As a student therapist, I’m wary of how attitudes like this act as mental filters, shaping our understanding of things. Throughout my studies, I have discovered interesting threads of belief woven through research seeking to define the nature of emotions. For example, a neurological perspective on emotions provides universal insights into the biological components of our brain responsible for the production and experience of feelings. In contrast, a social sciences perspective can help us understand unique variations in emotional expression and experience within individuals and across cultures. My question is: how accurate are these divergent theories about emotions, and the belief systems that underlie them? Are emotions matters of self-deception, mere byproducts of limbic activity and nothing more? If so, they play no role in moral judgment. Or is there more to be said about the role of emotions in our judgments and decisions? What, if anything, is being missed?
“If one’s sole avenue for assessing whether something is relevant and worthy of consideration is empiricism…literalism is the only kind of truth…the motto here [would then be] ‘either it’s a fact or it’s meaningless’” (Gross, 2012, pp. 76-77).
What follows is my personal attempt to make sense of what I’ve been reading lately about the role of emotion in moral judgment. Mind you, this blog post is me “talking out loud” as I sort through my personal interpretation of information I’ve been ingesting lately. In this respect it is a “mental bookmark” highlighting a subject I might like to delve into further, at some point.
Debunking Conventional Wisdom…
In the introduction to her book “Upheavals of Thought,” Nussbaum (2003) makes the following comment:
“A lot is at stake if we view emotions in this way, as intelligent responses to perception of value. If emotions are suffused with intelligence and discernment, they cannot…be easily sidelined in accounts of ethical judgment” (Nussbaum, 2003, p. 1).
The problem with including emotions in discussions of moral judgment is that the subject matter instantly becomes muddied tenfold. You’re left to wonder which emotions play a role in motivations and attributions of value. How do feelings reflect perceptions of need in determining our desire for an object? In what way do our emotions exist as an experiential connecting point between the body’s interaction with the environment and our mind’s belief systems, dictating the next “appropriate action”? Two interesting resources I’ve uncovered address these issues from a neurological perspective. In an article titled “Our Multi-System Moral Psychology,” Cushman et al. (2010) state that evidence exists indicating separate cognitive and affective moral judgment systems in the brain, though this claim “has been met with skepticism” (p. 2). Damasio (2006) addresses a similar skepticism in his book “Descartes’ Error,” stating the following in the introduction:
“I began writing this book to propose that reason may not be as pure as most of us think it is or wish it were, that emotions and feelings may not be intruders in the bastion of reason at all: they may be enmeshed in its networks, for worse and for better” (Damasio, 2006, p. xii).
So if moral reasoning is not a purely cognitive endeavor, what role do emotions play in it? I spend the remainder of this post answering this question.
The Brain’s Dual System of Moral Judgment…
Cushman et al. (2010), in an article titled “Our Multi-System Moral Psychology,” use the analogy of a camera with both automatic and manual settings to describe the brain’s moral judgment system. A camera’s automatic settings might be useful for portraits or landscapes. Likewise, the brain’s automatic settings are useful for the millions of little decisions a person has to make in a day. The manual settings of a camera are essential when adjustments must be made in unique instances. Similarly, the brain has manual settings that allow it to make more reasoned judgments in complex situations. Citing brain imaging research on survivors of brain injury, Cushman et al. (2010) describe two divergent brain-system responses to moral dilemmas, much like the camera’s two modes. These studies compare the responses of healthy individuals to moral dilemmas with those of people who have suffered a brain injury. In one study, individuals are asked to respond to two hypothetical scenarios, described below:
Scenario one – “The Switch Dilemma”
“A runaway trolley threatens to run over and kill five people. Is it morally permissible to flip a switch that will redirect the trolley away from five people and onto one person instead?” (Cushman et al., 2010, p. 3)
In this study, no differences were seen between the responses of healthy individuals and those with a brain injury. Responses reflected a consequentialist moral judgment focused on saving the greatest number of lives. Participants displayed greater activity in brain regions responsible for controlled cognitive activity.
Scenario two – “The Footbridge Dilemma”
“Here, one person is standing next to a larger person on a footbridge spanning the tracks, in between the oncoming trolley and the five. In this case, the only way to save the five is to push the large person off of the footbridge and into the trolley’s path, killing him” (Cushman et al., 2010, p. 3).
In this study, participants with lesions in the ventromedial prefrontal cortex were more likely to apply the “greatest benefit” standard. The idea of having to push someone off a bridge didn’t factor into their assessment of the situation. In contrast, healthy participants responded strongly to the idea of pushing someone off a bridge; their response reflected a moral absolute. Greater activity was seen in brain regions associated with emotion in healthy individuals, while those with brain injury displayed an absence of function in the same area. Cushman et al. (2010) end this research review with the following summative comment:
“These lesion studies lend strong support to the theory that characteristically deontological judgments are – in many people, at least – driven by intuitive emotional responses that depend on the ventromedial prefrontal cortex while characteristically consequentialist judgments are supported by controlled cognitive processes based in the dorsolateral prefrontal cortex” (Cushman et al., 2010, p. 5).
Essentially, on the basis of studies such as these, Cushman et al. (2010) propose a dual-process theory of moral judgment as most reflective of brain function. This perspective runs counter to traditional philosophy, which characterizes consequentialist perspectives as sentimental and the deontological perspective as rational. When reading the quote above, my old mind searches through mental files pertaining to ethics and moral philosophy. Since my recent academic focus has been the social sciences, it’s honestly been a while. In my intro to ethics course as an undergrad, I recall reviewing key moral philosophies throughout history. The instructor organized the subject matter along a spectrum, starting with absolutist stances and ending in nihilism. As I recall, this placed Kant’s deontology at the start of the course and consequentialist perspectives such as Mill’s utilitarianism somewhere in the middle. A review of these moral perspectives is necessary to appreciate the claim that deontological judgments are driven by the brain’s emotional systems while consequentialist judgments engage its abstract, logical components.
“Consequentialism is the view that morality is all about producing the right kinds of overall consequences” (Haines, n.d.). This welfare-maximizing principle involves increasing pleasure and minimizing pain. From this perspective, our main focus is the “overall consequence” (Haines, n.d.) of one’s actions and the sum total of their effect. Did our action create more harm than good? Or was it a beneficial decision for the majority of those involved? A criticism of this moral stance is that it relies on sentiment at the expense of duty and principled standards. The consequence of this cost-benefit analysis is that an “ends justify the means” standard is reflected in our actions.
According to the work of Cushman et al. (2010), brain research provides evidence of a neurological system that acts on the basis of this consequentialist, welfare-maximizing standard. The affective component acts at a subconscious level and creates a motivational push, while our cognition produces value-based thinking. This moral reasoning occurs in a manner similar to a camera’s manual settings, as we weigh alternatives in complex situations such as the switch scenario. The brain’s affective component here is characterized by Cushman et al. as producing “currency-like” emotions:
“A set of meso-limbic brain regions…represent expected monetary value in a more graded fashion…These regions, in a rather transparent way, support currency-like representations…Currency-like emotions function by adding a limited measure of motivational weight to a behavioral alternative, where this weighting is designed to be integrated with other weightings in order to produce a response. Such emotional weightings, rather than issuing resolute commands, say, ‘Add a few points to option A’ or ‘Subtract a few points from Option B’” (Cushman et al., 2010, pp. 12-13).
“In contemporary moral philosophy, deontology is one of those kinds of normative theories regarding which choices are morally required, forbidden, or permitted” (Alexander & Moore, 2012). This absolutist perspective rejects the consequentialist notion that actions can be assessed solely in terms of their consequences. Instead, right and wrong are concrete, absolute normative constructs. From a deontological perspective, we are expected to uphold duties and obligations.
What I find interesting about deontology is that it reflects the “’cause-I-said-so” mindset of my son’s concrete-operational, Piaget-like thinking. When I push further and ask why his absolutist standards exist, I discover they have no underlying, well-reasoned basis: “It is as it is because I said so.” Cushman et al. (2010) use the term “moral dumbfounding” (p. 11) to describe this sort of difficult-to-justify moral standard. Brain research indicates this deontological judgment system lies in unconscious mental processes and is supported by affective components (Cushman et al., 2010). To understand these mental processes in the brain, it might be useful to revisit the footbridge scenario once more…
So what is it about the footbridge scenario that sets off this deontological reasoning system? The scenario forces one to imagine engaging in an action that causes grave harm to somebody, which immediately elicits a well of negative emotions. I, for example, can’t help but react with the thought, “So you want me to push this dude off the bridge and kill him? Are you kidding me!?” Associated with this response are the “fight-or-flight” emotions created by the amygdala. This limbic structure is capable of producing what Cushman et al. (2010) describe as “alarm bell” emotions (p. 12): “The core idea is that alarm-bell emotions are designed to circumvent reasoning, providing absolute demands and constraints on behavior” (Cushman et al., 2010, p. 12).
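To make the contrast between these two kinds of emotional signals concrete, here is a toy sketch of my own. It is purely illustrative, not a cognitive model from Cushman et al.; all the names and numbers in it are hypothetical. Currency-like emotions merely nudge an option’s running score up or down, while an alarm-bell emotion acts as an absolute veto that circumvents the weighing altogether:

```python
# Toy illustration of currency-like vs. alarm-bell emotional signals.
# All option names and weights are hypothetical; this is not a model
# from Cushman et al. (2010), just a way to picture the distinction.

def judge(options):
    """Pick the best permissible option. Alarm-bell emotions veto an
    option outright; currency-like emotions only adjust its score."""
    permissible = {}
    for name, opt in options.items():
        if opt["alarm_bell"]:      # absolute demand: option is off the table
            continue
        # currency-like weighting: "add a few points / subtract a few points"
        score = opt["expected_lives_saved"] + sum(opt["emotional_weights"])
        permissible[name] = score
    if not permissible:
        return None
    return max(permissible, key=permissible.get)

# The footbridge dilemma, crudely encoded:
options = {
    "push the man": {"alarm_bell": True, "expected_lives_saved": 5,
                     "emotional_weights": []},
    "do nothing":   {"alarm_bell": False, "expected_lives_saved": 1,
                     "emotional_weights": [-1]},  # mild distress at inaction
}
print(judge(options))  # "do nothing" wins despite the worse body count
```

Notice that no currency-like weight, however large, can rescue the vetoed option; that is the “absolute demands and constraints” character of the alarm bell, as opposed to the negotiable point-adjustments of currency-like emotions.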
“Historically, consequentialism is more closely allied with the ‘sentimentalist’ tradition…while deontology is more closely associated with Kantian rationalism. According to the dual-process theory, this historical stereotype gets things mostly backwards” (Cushman et al., 2010, p. 6).
As expected, this post is longer than I had originally intended. I will stop here, having reviewed key insights from the article “Our Multi-System Moral Psychology” by Cushman et al. (2010).