Operant Conditioning Inspired Models of Therapy

What is Operant Conditioning?

“B. F. Skinner and the radical behaviorism that he advocates are very controversial” (Nye, 1979, p. 2). Skinner’s operant conditioning emphasizes that behavior is a function of its consequences and rejects “mentalistic or cognitive explanations of behavior” (Bower & Hilgard, 1981, p. 169). Also known as instrumental learning, it has historical roots in Edward Thorndike’s work on animal intelligence (Rosenthal, 2005), which describes the law of effect:

“Rewards and nonrewards or successes and failures [are]…proposed as mechanisms for selection of the more adaptive response” (Bower & Hilgard, 1981, p. 24).

Corsini and Wedding (2011) describe Applied Behavior Analysis as a modern extension of Skinner’s work. Rosenthal (2005) describes behavior therapy (classical conditioning) and behavior modification (instrumental models) as based on Skinner’s theory. Treatment procedures are based on altering relationships between overt behaviors and their consequences, making use of “reinforcement, punishment, extinction, stimulus control…” (Corsini & Wedding, 2011, p. 236). Before providing an overview of Skinner’s work, I’d like to review insights from another course textbook that provides an interesting perspective, titled Clinical Case Formulations (Ingram, 2012). This textbook presents clinical case formulations based on theoretical approaches such as Skinner’s as a way of “organiz[ing], explain[ing] or mak[ing] sense of large amounts of data and…treatment decisions” (Ingram, 2012, p. viii).

A Core Clinical Hypothesis

In this section, I’m going to review a clinical hypothesis based on Skinner’s work. It is useful for case formulation purposes and for determining how Skinner’s work might be utilized in a contemporary therapy setting.

“Hypothesis BL1: Antecedents and Consequences” (Ingram, 2012, p. 227)

“The treatment plan should be based on an analysis of Antecedents (triggers) and Consequences (rewards and punishments)…lead[ing] to specific hypotheses about contingent relations among variables. Interventions are based on this functional analysis of behavior and use strategies of behavior change developed from studies of operant conditioning” (Ingram, 2012, p. 227).

Clinical Case Example

“[John] is referred by court for anger management therapy after committing assault in a road rage incident. You help him identify the triggers for his excessive rage reactions: an external trigger (the driver gave him ‘the finger’) and an internal trigger (the thought: ‘someone is disrespecting me’)” (Ingram, 2012, p. 227).

Potential Problem Areas & Treatment Methods

Problem areas in which Applied Behavior Analysis is useful include autism, addictive behaviors, and overeating (Ingram, 2012). The treatment process begins with a baseline measure of problematic behaviors that are defined in clear operational terms. For example, in the above hypothetical clinical case, the client has trouble controlling his anger. Next, the antecedents (i.e., triggers) that precede his displays of anger need to be described. These triggers can include physical symptoms (e.g., hunger), emotional states, or thought processes (Ingram, 2012). Finally, identifying the consequences and reinforcers of one’s behavior is essential. Behavioral modification strategies including stimulus control, reinforcement, and contingency contracts are then useful.
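
For readers who like to see the structure of a functional analysis laid out concretely, here is a minimal Python sketch of an antecedent-behavior-consequence (ABC) log; the record fields and sample entries are my own illustrative choices based on the road-rage case above, not a clinical instrument:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ABCRecord:
    """One observation in a functional (ABC) analysis."""
    day: date
    antecedent: str   # the trigger: physical state, emotion, or thought
    behavior: str     # the operationally defined problem behavior
    consequence: str  # what followed (the likely reinforcer)

# Illustrative baseline observations for the hypothetical case above
log = [
    ABCRecord(date(2016, 10, 1), "driver gave him 'the finger'",
              "shouted and tailgated", "felt vindicated; anger subsided"),
    ABCRecord(date(2016, 10, 3), "thought: 'someone is disrespecting me'",
              "honked and yelled", "other driver backed off"),
]

# The baseline measure is simply the frequency of the defined behavior
print(f"{len(log)} anger incidents recorded at baseline")
```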

Common Criticisms

In the book What Is B.F. Skinner Really Saying?, Nye (1979) states the following: “B. F. Skinner and the radical behaviorism that he advocates are very controversial…for many people there remain misunderstandings…about what Skinner is really saying” (p. 2). What follows are responses from this book to common criticisms of Skinner’s theory:

MISUNDERSTANDING #1: Radical Determinism

One criticism of Skinner’s work rests in the fact that he “denies that our behaviors are self-determined” (Nye, 1979, p. 78). Personally, when reading about Skinner’s work, I can’t help but feel he’s describing human beings as mindless automatons. Nye (1979) responds by noting that Skinner simply believed we rely too much on inner qualities like motivation to achieve our life goals. Instead, he suggested that since “ultimately the environment has the control” (Nye, 1979, p. 81), it is far more pragmatic to utilize information about the external world.

MISUNDERSTANDING #2: The Proposed Nature of Control

Another common misunderstanding regarding Skinner’s work pertains to the idea that we control environmental factors in order to adjust an individual’s behavior. Honestly, this seems a little f’d up, doesn’t it? Nye (1979) responds by noting: “Skinner’s proposals for controlling behavior advocate the systematic use of positive reinforcement rather than aversive methods of control” (p. 86).

MISUNDERSTANDING #3: Oversimplified View of Human Nature

Critics of Skinner’s work often complain that it ignores many aspects of what makes us human. For example, while Skinner does acknowledge the presence of thoughts and feelings as components of our inner world, he grants them no causal role in our behavior. Instead, thoughts and feelings, like behavior itself, are controlled by environmental factors. In response to this criticism, Nye (1979) asks, “Even if Skinner’s approach were judged to be simple, is that necessarily bad?” (p. 94).

MISUNDERSTANDING #4: Skinner’s View of Self-Knowledge

In light of radical determinism and the denial of any causal role for thoughts and feelings, it would seem to me that Skinner regards self-knowledge and spiritual introspection as meaningless. In response to this criticism, Nye (1979) notes that Skinner’s view of self-knowledge is “materialistic” (p. 99) in nature. Under Skinner’s view, knowing oneself “has to do with stimuli, responses and reinforcement” (Nye, 1979, p. 99).

Unique Aspects Of This Behavioral Therapy Approach

Skinner’s approach focuses on how voluntary behaviors are learned through the manipulation of consequences, the events that follow our behaviors. Manipulating the consequences of our behaviors is key to creating change in our lives; learning becomes a matter of readjusting our chosen responses to a specific situation. In contrast, Pavlov focuses on involuntary reflexive behaviors and emotional responses. Pavlov’s independent variable is the antecedent, while Skinner’s is the consequence. In this sense, Skinner’s approach conceives of the client’s role as active and participatory, while Pavlov’s clients play a passive role in the learning process.

Who is Skinner?

“Burrhus Frederic Skinner…was born on March 20, 1904, in Susquehanna, a small railroad town in northeastern Pennsylvania” (Nye, 1979, p. 8). His father worked as a lawyer and his mother was a homemaker. He had one younger brother who died in childhood. He received a bachelor’s degree in literature from Hamilton College and a PhD in psychology from Harvard in 1931. “Beginning in the 1930s, Skinner published a series of papers reporting results of laboratory studies with animals in which he identified the various components of operant conditioning. He summarized much of this early work in his influential book, The Behavior of Organisms” (Schunk, 1991, p. 88). “Skinner was interested in giving a scientific account for behavior…but found the S–R paradigm insufficient to explain the majority of behavior, especially for those behaviors for which there appeared to be no obvious antecedent environmental causes” (Dixon et al., 2012, p. 4). What follows is an overview of his research and perspective.

Fundamental Premises

Essentially, Skinner upheld a biological perspective of human behavior which holds that behavior is a function of its consequences. These consequences are determined by human biology as well as our environment. In this respect, “behavior is lawful [and] the role of psychology is to discover the ‘cause-and-effect’ relationships” (Nye, 1979, p. 23) between behaviors and their consequences. Understanding this cause and effect can be achieved through experiments that manipulate key environmental conditions associated with the consequences of one’s behavior. “To pursue his interest in operant behavior, Skinner built a small soundproof chamber…referred to as a Skinner Box” (Nye, 1979, p. 27). In these experiments, environmental conditions are adjusted in order to observe how they influence an animal’s use of the food dispenser. Based on these observations, Skinner developed the following concepts of human behavior in his reinforcement theory. As stated earlier, Skinner criticized the stimulus-response model of behavior reflected in classical conditioning because it left out a key component of the puzzle: the consequences of our behavior (Dixon et al., 2012).

Key Concepts

Respondent vs. Operant Behavior

Bower and Hilgard (1981) begin their discussion of Skinner’s work by clearly defining the specific types of behaviors that are the focus of his research. For example, Skinner rejects Watson’s “dictum ‘no stimulus, no response’” (Bower & Hilgard, 1981, p. 169), which states that any behavior must have a stimulus that elicited it. Naturally, this excludes from one’s examination of human behavior anything that doesn’t have a clear antecedent stimulus associated with it. In order to differentiate his work from classical conditioning’s view of behavior, Skinner classified responses into two types: respondent behavior, which is elicited by antecedent stimuli as in the classical model, and operant behavior, which is emitted in response to the perceived consequences of one’s actions.

Types of Behavioral Conditioning

With two different definitions of behavior come two distinct ideas about how we learn new behavior. Within the classical conditioning model, behavior is reinforced by the close association of an unconditioned stimulus with a conditioned stimulus. For example, Pavlov taught dogs to drool when they heard a bell by ringing a bell at feeding time. Skinner calls this type of behavioral conditioning “Type S” (Bower & Hilgard, 1981, p. 171).

In contrast, “Type R” conditioning (Bower & Hilgard, 1981, p. 171) reflects Thorndike’s law of effect, which states that behaviors are a function of their consequences. In other words, the consequences of one’s behavior are manipulated through the provision of the positive and negative reinforcers discussed next.

Discriminative Stimuli

While Skinner criticized classical conditioning for its assertion that antecedents determine behavior, it is wrong to assume he eliminates this variable completely. “Discriminative stimuli change the probability that an operant will be emitted” (Schunk, 1991, p. 84) because the behavior has been reinforced in one situation but not in another. In other words, discriminative stimuli are defined by the behaviors that they influence. Additionally, the influence of these stimuli is determined by an individual’s past experiences with the consequences they tend to be associated with.

Learning & Reinforcement

Consequences from the environment that increase the possibility that behaviors will be repeated are called reinforcers (Rosenthal, 2005). These reinforcers can be positively or negatively valued by the individual. Here are a few examples:

An Example of Positive Reinforcement

Positive reinforcers are positively valued consequences that follow a behavior. For example, if my son does his homework and completes his nightly chores, I allow him to play for one hour on his Xbox before bedtime.

An Example of Negative Reinforcement

Negative reinforcement occurs when a negatively valued consequence is removed following a specific behavior. Skinner uses the term “aversive stimulus” (Bower & Hilgard, 1981, p. 172) to describe negatively valued stimuli such as electrical shocks, extreme cold, or loud noises. In negative reinforcement, this aversive stimulus is removed in order to increase the frequency of a behavior. For example, last quarter when I was putting in 30+ internship hours a week, my oldest son had to learn how to make dinner in the evenings. This helped stave off his own hunger while also preventing his younger brother from whining about there being “nothing to eat.” He was able to eliminate two aversive stimuli: hunger and hearing his brother whine.

Extinction & Punishment

Extinction involves the reduction in frequency of a behavior over time through the cessation of a reinforcer (described above). Punishment works differently: while reinforcers increase the probability of a behavior, punishments decrease the occurrence of unwanted responses (Rosenthal, 2005). As with reinforcers, punishments can be both positive and negative.
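
As a rough illustration of extinction, here is a minimal Python sketch in which a response weakens once its reinforcer stops arriving; the starting probability and decay factor are arbitrary illustrative values, not empirical ones:

```python
# Extinction sketch: once reinforcement stops, the response weakens.
p_response = 0.9             # baseline probability of the reinforced behavior
for trial in range(1, 6):
    p_response *= 0.7        # no reinforcer delivered on this trial
    print(f"Extinction trial {trial}: response probability {p_response:.2f}")
```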

An Example of a Positive Punishment

With positive punishments, negatively valued stimuli are presented after a specific behavior. For example, my youngest son recently brought home a few bad grades. First, I made him redo the assignments, since his teacher told me he didn’t put in a good effort and “was goofing off in class.” Second, I made him go to bed early and gave him extra chores around the house.

An Example of a Negative Punishment

Negative punishments involve the removal of a positively valued stimulus following an undesired behavior. For example, if my sons get in trouble at school for some reason, I utilize my favorite punishment: eliminating for one week any and all devices that require electricity. This means they can’t play with their computers, iPads, iPhones, or Xboxes.

Behavioral Modification

Behavioral modification techniques are based upon the Skinnerian principles of operant conditioning defined above. Essentially, this approach involves replacing negative behaviors with positive ones through the utilization of positive and negative reinforcement. It rests on the “Premack Principle,” which notes that reinforcers should be based on what individuals value most (Rosenthal, 2005). In other words, reinforcers that yield the greatest amount of intrinsic enjoyment have the greatest influence on one’s responses (Schunk, 1991). It is for this reason that Skinner focused so much of his work on the functional relationship between behaviors and environmental consequences.

“the external variables of which behavior is a function provide for what may be called a causal or functional analysis. We undertake to predict and control the behavior of the individual organism” (Skinner, 1953, p. 35).

Intrinsic vs. Extrinsic Motivation

Before discussing Skinner’s schedules of reinforcement, it might help to differentiate between intrinsic and extrinsic motivation:

“The most basic distinction is between intrinsic motivation, which refers to doing something because it is inherently interesting or enjoyable, and extrinsic motivation, which refers to doing something because it leads to a separable outcome.” (Ryan & Deci, 2000, p. 56).

In other words, this definition notes that those things which provide intrinsic motivation are desired for their own sake.  In contrast, extrinsic motivators are desirable because they can lead to something else.

Schedules of Reinforcement

In his research, Skinner learned that maintaining a specific behavior requires a schedule of reinforcers. For example, my boys know that in order to get an allowance they must do their chores and keep up their grades. For Skinner, the utility of a reinforcement schedule was determined by the rate of response of his animal subjects. In the real world, the environment’s natural reinforcers for certain behaviors are not provided in a uniform manner. In his research, Skinner defines two main types of reinforcement schedules.

Continuous Schedules

With a continuous reinforcement schedule, an individual is rewarded every time after performing a specific task. Nye (1979) notes that the real world rarely provides reinforcements in this manner. For example, my youngest is allowed to have his snack and extra free play-time if he does his schoolwork in a timely manner. Rosenthal (2005) notes that continuous reinforcement schedules promote the establishment of a new habit but are ineffective for maintaining the behavior.

Intermittent Schedules

Skinner spent a great deal of time studying intermittent reinforcement schedules. As the term indicates, these involve providing reinforcements only part of the time for a specific response. Skinner describes two main types: “interval schedules are based on passage of time and ratio schedules are based on number of responses” (Nye, 1979, p. 59).

Example of Ratio Schedule

Ratio schedules are determined by the number of positive behavioral responses. For example, my son’s school utilizes behavior cards: green is good, yellow is okay, and red is when he’s being “naughty.” If my son is able to get at least three greens in a row during a school week, I take him out to his favorite restaurant. This is a ratio schedule. “These schedules also can be fixed (FR) and variable (VR)” (Nye, 1979, p. 60). Rosenthal (2005) states that variable ratio schedules produce the highest rate of behavioral response and are very hard to extinguish.
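
To make the ratio-schedule logic concrete, here is a minimal Python sketch comparing a fixed-ratio and a variable-ratio schedule; the function names and the ratio of 5 are my own illustrative choices:

```python
import random

def count_reinforcers(schedule, n_responses=1000):
    """Tally how many reinforcers a schedule delivers over many responses."""
    rewards, since_last = 0, 0
    for _ in range(n_responses):
        since_last += 1
        if schedule(since_last):
            rewards += 1
            since_last = 0
    return rewards

# Fixed ratio 5 (FR5): reinforce exactly every 5th response.
fr5 = lambda count: count == 5

# Variable ratio 5 (VR5): reinforce unpredictably, every 5th response on average.
vr5 = lambda count: random.random() < 1 / 5

print(count_reinforcers(fr5))  # exactly 200 reinforcers
print(count_reinforcers(vr5))  # about 200, but unpredictably spaced
```

The unpredictability of the variable-ratio schedule is precisely what makes the resulting behavior so persistent and hard to extinguish.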

Example of Interval Schedule

Interval schedules are determined by a time interval. For example, if my youngest son can sustain 30 minutes of homework time without whining, I give him a treat. Interval schedules can also be on a fixed or variable schedule; another example might be the weekly allowance I provide my kids for certain behaviors. Rosenthal (2005) notes that while variable ratio schedules are the most successful, regular interval schedules are the least reliable.

Primary and Secondary Reinforcers

Primary reinforcers provide intrinsic motivation and are, by themselves, highly desirable to clients. These primary reinforcers tend to be associated with our innate biological drives. In contrast, secondary reinforcers provide extrinsic motivation. Also known as conditioned reinforcers, they are stimuli that are “originally neutral but gain the power to reinforce through its pairing with one or more primary reinforcers” (Nye, 1979, p. 36). “Conditioned reinforcement occurs when behavior is strengthened by events that have an effect because of a conditioning history” (Pierce & Cheney, 2013, p. 271). In other words, past experiences define our expectations of future events, and secondary reinforcers gain power through their association with primary reinforcers. For example, my son earns behavior cards at school that are good for a drawing in class: he gets the opportunity to either pick an item from the prize jar or get “first picks” on the toy he wants to play with during free time.
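
The behavior-card example can be read as a small token economy, where a neutral marker acquires reinforcing power through its exchange rate with backup reinforcers. Here is a minimal Python sketch of that idea; the card colors and exchange threshold are my own illustrative choices:

```python
# Token-economy sketch: tokens (secondary reinforcers) gain their value
# from what they can be exchanged for (primary or established reinforcers).
week_of_cards = ["green", "green", "yellow", "green", "red"]
tokens = sum(1 for card in week_of_cards if card == "green")

if tokens >= 3:  # the exchange pairs the tokens with a backup reinforcer
    print("Exchange tokens: prize-jar item or first pick at free time")
else:
    print("Not enough tokens this week")
```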

Generalization

“Once a certain response occurs regularly to a given stimulus, the response also may occur to other stimuli. This is called generalization” (Schunk, 1991, p. 97). For example, my oldest son is proud of his academic achievements. He completes his homework, participates in class, and is mindful of when assignments are due, because these behaviors have been useful in achieving good grades in the past. Skinner notes that this process of generalization allows us to apply experiences in similar situations to new ones. “If generalization didn’t occur, we would have to learn each time how to respond in every new situation” (Nye, 1979, p. 50). In describing this process, Skinner noted the existence of a generalization gradient, where responses to specific stimuli decrease in frequency as the stimuli become more dissimilar from what we’ve encountered in the past. For example, my cat thinks he’s being fed whenever he hears the sound of the can opener. A highly dissimilar sound, like the doorbell, does not yield the same response.
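
One way to picture the generalization gradient is as response strength falling off with stimulus dissimilarity. The Gaussian shape in this minimal Python sketch is a common textbook idealization, my own choice rather than Skinner’s formula:

```python
import math

def response_strength(dissimilarity, width=1.0):
    """Idealized generalization gradient: strength decays as the test
    stimulus grows more dissimilar from the trained stimulus."""
    return math.exp(-(dissimilarity ** 2) / (2 * width ** 2))

# 0.0 = the can opener itself; larger values = increasingly unlike it
for d in (0.0, 0.5, 1.0, 2.0):
    print(f"dissimilarity {d}: response strength {response_strength(d):.2f}")
```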

Discrimination

The processes of discrimination and generalization complement each other. With discrimination, we learn to discriminate between “different aspects of similar situations and our responses become more finely tuned” (Nye, 1979, p. 51). Discriminative stimuli refer to those aspects of a situation that allow us to determine the relevance of certain learned behaviors to a specific situation.

Shaping

Many of the concepts discussed thus far refer to the learning of simple behaviors in a series of one-shot experimental situations. However, with behavioral modification, Skinner’s concepts are often utilized to help individuals develop patterns of behavior in a gradual process of evolution or development (Rachlin, 1991). This process is called shaping (Rachlin, 1991; Schunk, 1991) and involves the gradual improvement of behaviors through a continual reinforcement process. Bower and Hilgard (1981) describe this shaping process as involving the following steps (see the sketch after the list):

Identify the client’s current behaviors as well as any antecedents and consequences. 
Identify the desired behavioral changes & define the goal.  
Identify potential reinforcements based on the client’s environment & preferences.  
Break down the desired behavioral change into a realistic step-by-step process.
Utilize a reinforcement schedule to help the client engage in successive approximations and refinements toward the defined end goal.
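
As a sketch of how these steps might play out, here is a minimal Python loop that reinforces successive approximations toward a homework-time goal; the durations, step size, and reinforcers are my own illustrative choices, not a prescribed protocol:

```python
def shape_homework_time(start=5, goal=30, step=5):
    """Shaping sketch: reinforce successively longer homework sessions
    until the target duration is reached."""
    criterion = start
    while criterion < goal:
        criterion += step    # raise the bar by one small approximation
        print(f"{criterion}-minute session completed -> reinforce "
              "(praise, a snack, or extra free play)")
    return criterion

shape_homework_time()
```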

Chaining

Skinner utilized the term chaining to describe how we learn more complex behaviors, like playing the piano. He describes the process of learning complex behaviors as consisting of several individual responses, like links in a chain (Nye, 1979). Each link in the chain generates consequences that influence the next response (Nye, 1979).
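
In code form, a behavioral chain might be pictured as a sequence where completing each link produces the cue for the next response; the piano example below uses my own illustrative link names:

```python
# Chaining sketch: each completed response cues the next link, and the
# terminal reinforcer (the finished piece) maintains the whole chain.
chain = ["sit at the piano", "place hands on the keys",
         "play the first phrase", "play the second phrase"]

for step, response in enumerate(chain, start=1):
    print(f"Link {step}: {response} -> cue for the next response")
print("Chain complete -> terminal reinforcer: the finished piece")
```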

References

Bower, G. H., & Hilgard, E. R. (1981). Theories of learning (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Corsini, R. J., & Wedding, D. (2011). Current psychotherapies. Belmont, CA: Brooks/Cole.
Dixon, D. R., Vogel, T., & Tarbox, J. (2012). A brief history of functional analysis and applied behavior analysis. In Functional assessment for challenging behaviors (pp. 3-24).
Ingram, B. L. (2012). Clinical case formulations: Matching the integrative treatment plan to the client (2nd ed.). Hoboken, NJ: Wiley.
Kazdin, A. E. (2012). Behavior modification in applied settings. Long Grove, IL: Waveland Press.
Nye, R. D. (1979). What is B.F. Skinner really saying? Englewood Cliffs, NJ: Prentice-Hall.
Pierce, W. D., & Cheney, C. D. (2013). Behavior analysis and learning. New York: Psychology Press. Retrieved from https://scholar.google.com/scholar?cluster=8530085740643864664&hl=en&as_sdt=0,28&as_ylo=2012
Rachlin, H. (1991). Introduction to modern behaviorism (3rd ed.). New York: Freeman.
Rosenthal, H. (2005). Vital information and review questions for the NCE and state counseling exams. New York: Routledge.
Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25(1), 54-67.
Schunk, D. H. (1991). Learning theories: An educational perspective. New York: Merrill.
Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Wolpe, J., & Plaud, J. J. (1997). Pavlov’s contributions to behavior therapy: The obvious and the not so obvious. American Psychologist, 52(9), 966-972.
