Assessment

NCE – Assessment Section

FIRST, you need to know some things about assessing and/or estimating attributes of the client (i.e., appraisal).

Testing – now about 7% of a teacher’s job and 20-37% of a high school counselor’s work.

A test is simply a systematic way of assessing a sample of behavior. Test format refers to the manner in which test items are presented. When selecting an appropriate format, consider the following:

  1. Objective or Subjective Test?
    1. Objective – the scoring procedure is specific.
    2. Subjective – an essay, for example, will be scored based on subjective impression.
  2. Free response items or Recognition Items.
    1. Free response items – can respond however you choose.
    2. Recognition Items – forced choice items.
    3. ABCD Structure – multipoint item
    4. Likert Scales are considered multi-point recognition items
    5. Agree/Disagree item – dichotomous recognition item.
  3. Normative or Ipsative Measure
    1. Normative – each item is independent of all other items. You can legitimately compare various people who have taken the test (e.g., IQ tests or the MMPI).
    2. Ipsative – the person being tested compares items with each other (e.g., occupational preference surveys). You cannot legitimately compare two or more people who have taken an ipsative measure; it reveals strengths and weaknesses within a specific person.
  4. Speed versus Power Tests –
    1. Speed test – timed and assesses speed and accuracy (e.g., a keyboarding test).
    2. Power test – not timed; an achievement test is a power test. Items vary in difficulty, and ideally nobody receives a perfect score.
  5. Maximum / typical Performance Measure –
    1. Maximum – assesses best possible performance (Achievement Test)
    2. Typical – A typical or characteristic performance (Interest Inventory)
  6. Spiral versus Cyclical –
    1. Spiral – items get more and more difficult.
    2. Cyclical – several sections each of which is spiral in nature.
  7. Vertical versus horizontal
    1. Vertical – different forms of the test for various age groups / grade levels.
    2. Horizontal – measures various factors at once.
  8. Test battery – describes the situation where we administer a group of tests to the same person; the results can be combined into a profile. More accurate than merely assessing the individual with a single measure.
  9. Parallel Forms / Equivalent Forms –
    1. Test has various versions that all measure the same thing.
    2. Parallel Forms – each person takes different version of test.

NEXT, you should be concerned with the quality of the test. How good is it? There are two things to consider: the most critical issue is validity, and the second is reliability.

  1. Validity – does the test measure what it purports to measure?
    1. Content validity – the extent to which the test samples the behavior it is supposed to sample.
    2. Construct validity – the extent to which a test measures an abstract trait, construct, or psychological notion.
    3. Criterion-related validity – the test is correlated with an outside criterion (i.e., a standard).
      1. Concurrent validity – a job test might be compared with a score on actual job performance.
      2. Predictive validity – predicts future behavior (e.g., GRE scores).
    4. Face validity – does the test look like it is testing what it is supposed to?
  2. Reliability – refers to whether a test will consistently yield the same results; does the score remain stable over repeated measures?
    1. Experts often assert that the quality of a test is determined by validity and reliability. A reliable test is not always valid; however, a valid test will always be reliable.
    2. Test-retest reliability – test the same group with the same measure twice and correlate the scores to see if they are consistent.
    3. Equivalent forms reliability – two equivalent forms of the same test are administered to the same population and correlated.
    4. Split-half method – the examiner splits the whole test into two halves and correlates the scores on the two halves.
    5. Interrater reliability – used with subjective tests; two independent raters grade the same test and the scores are compared for similarity.
    6. Reliability coefficient – tells you how reliable the test is.
      1. 1.00 is perfect reliability; this happens only with physical measures.
      2. A coefficient of .90 or above is considered very good for a psychological test: .90 of the variance is accurate and .10 is due to error.

Intelligence Testing

  1. Francis Galton – intelligence is a unitary factor that is normally distributed like height or weight (bell-shaped curve). In 1869 he studied 197 men who had achieved fame and found it was 300x more likely that a famous person would have a famous relative; Galton felt this was a product of genetics. He was a half-cousin of Charles Darwin.
  2. Charles Spearman – British psychologist who in 1904 postulated a two-factor theory of intelligence (g and s factors).
  3. Louis Thurstone – intelligence is a series of factors called primary mental abilities; used factor analysis to identify them.
  4. J. P. Guilford – 120 elements add up to intelligence; best remembered for the dimensions of convergent and divergent thinking.
  5. Raymond B. Cattell – two forms of intelligence: fluid and crystallized.
    1. Fluid – dependent on the nervous system; the ability to solve complex, novel problems.
    2. Crystallized – the application of fluid intelligence through education; the ability to use facts.
  6. James McKeen Cattell – coined the term “mental test” in 1890; the first person to use psychological tests to predict academic performance.
  7. FIRST INTELLIGENCE TEST – created by French psychologist Alfred Binet and French physician Théodore Simon in 1905, with revisions in 1908 and 1911. The first test was named the Binet-Simon Scale.
    1. In 1904 the French government wanted to discriminate normal Parisian children from those who were mentally deficient.
    2. Teachers could not be trusted to make this distinction.
    3. Dull children could be separated from the others and placed in a simplified curriculum….
    4. Used the concept of age-related tasks.
    5. Binet never believed his tests measured intelligence.
  8. Intelligence Quotient – IQ is computed using Wilhelm Stern’s formula (a worked sketch follows this list).
    1. Mental Age / Chronological Age x 100 = IQ.
    2. This is known as a ratio IQ.
    3. Today the deviation IQ is preferred: the obtained score is compared against a normative sample.
  9. Lewis Terman – adapted the test for American use in 1916 as the Stanford-Binet. It was updated in 1937, 1960, and 1986; the MA/CA ratio is no longer used, and the result is no longer called an IQ but an SAS (“Standard Age Score”). Since 2003 the Stanford-Binet Intelligence Scale, 5th edition, has been used; it can be administered from ages 2 to 85 and beyond. The current version, created by Gale H. Roid, uses 10 subtests: 5 verbal and 5 nonverbal. The mean is 100 and the SD is 15. One small controversy remains: the old Form L-M is still the best test for measuring the ability of gifted individuals.
  10. Wechsler Scales – mean score is 100, SD is 15. David Wechsler first published the Wechsler-Bellevue in 1939; it grew in popularity for adults.
    1. WAIS-III – the most popular adult intelligence test in the world; 14 subtests (7 verbal and 7 performance), yielding a Verbal IQ, a Performance IQ, and a Full Scale IQ.
    2. WISC-IV – for children ages 6 years 0 months to 16 years 11 months; takes 50-70 minutes. Six verbal subtests and six performance subtests.
  11. WPPSI-III – Wechsler Preschool and Primary Scale of Intelligence, for ages 2 years 6 months to 7 years 3 months. Takes about 1.5 hours; because the WPPSI is long, it can be administered over two sessions. The rationale is that children at this age have difficulty concentrating for long periods of time.
  12. Infant and Preschool IQ Tests – useful for detecting mental retardation, but their predictive validity for later IQ is extremely poor.
    1. Denver Developmental Screening Test 2
    2. Bayley Scales of Infant Development (BSID-II) – the most widely used; ages 1-42 months.
    3. FTII Fagan test of infant intelligence.
    4. Tests given before age 7 do not correlate well with tests later in life.
  13. Group IQ Tests – not as accurate as individual tests. Group testing began in 1917 with the Army Alpha and Army Beta, used to test recruits during WWI. In WWII the Army General Classification Test (AGCT) was used, followed by the Armed Forces Qualification Test. Group tests are used frequently in schools.
    1. PROS – don’t need special training to give. Give to many people.
    2. CONS – Not as accurate
  14. Asian Americans score highest, then European Americans, then Hispanic Americans, with African Americans at the bottom.
    1. Some feel any IQ test should be a culturally fair test. (eliminate BIAS)
    2. Culture fair tests do not predict academic performance as well
    3. Make them culture free – take the problems on the test and make them problems that do not depend on knowledge of any particular culture.
  15. A heated debate in social science has been over racial differences in IQ. Arthur Jensen had the social science community arguing back and forth after publishing a 1969 article stating that blacks scored 11-15 points lower than whites and that this could be due to genetics. Robert Williams created the BITCH test (“Black Intelligence Test of Cultural Homogeneity”): any black inner-city child knows that a “deuce and a quarter” is a Buick Electra 225 – how many high-IQ kids could answer that question?
  16. SOMPA – System of Multicultural Pluralistic Assessment. The aim is to eliminate culture from tests and create culture-free tests; some question whether culture can ever be eliminated from an exam. Proponents of these tests remind us that, while they tell us nothing about our makeup, they are good predictors of success in life.
  17. The Flynn Effect – IQ scores worldwide are going up. We are unsure whether this is because of better nutrition, earlier maturation, or increased practice with video games.
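
As a quick illustration of Stern’s ratio IQ formula from item 8 above, here is a minimal sketch in Python (the ages are hypothetical):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: (mental age / chronological age) x 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at a 12-year-old level (hypothetical values):
print(ratio_iq(12, 10))  # 120.0 -- performing above age level
print(ratio_iq(8, 10))   # 80.0  -- performing below age level
```

Modern tests report a deviation IQ instead, comparing the obtained score to a normative sample rather than dividing ages.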

Personality Testing

 MMPI – Minnesota Multiphasic Personality Inventory (MMPI-2)

  1. First published in 1943 by Hathaway and McKinley; assesses the extent of emotional disturbance and helps with diagnosis using 567 true/false questions.
  2. 10 clinical scales –
    • Hypochondriasis – Concern about health.
    • Depression
    • Hysteria – use of physical/mental symptoms to avoid problems
    • Psychopathic Deviate
    • Masculinity/Femininity
    • Paranoia – suspicious
    • Psychasthenia – excessive worry or guilt
    • Schizophrenia
    • Hypomania – overly active
    • Social Introversion – Shy

Myers-Briggs Type Indicator – based on Carl Jung’s theory of types; four bipolar scales which result in a four-letter type.

  1. Exam hint: the Myers-Briggs is a theory-based inventory since it is based on a theory; the MMPI is a criterion-based inventory since it compares the person taking it to a criterion group.
  2. Self-report inventories like the MBTI are more accurate than projective tests. A projective test presents neutral stimuli that the client is asked to interpret (e.g., an ink blot).

Other Misc Personality Things….

  1. Rorschach Inkblot Test – an association projective test: what does the blot bring to mind? The most popular inkblot measure, for ages 3 and up; created by Hermann Rorschach and uses 10 inkblot cards.
  2. Construction Projective Test – the TAT (Thematic Apperception Test). The person being tested is asked to make up, or construct, a story about the picture on a card; the picture is ambiguous. Created by Henry Murray and Christiana Morgan in 1935. Originally based on need-press theory; today it can also be interpreted psychoanalytically.
  3. Expressive Projective Test – draw-a-person or house/tree/person tests. The Bender Gestalt is a test of organicity and screens for brain damage.
  4. Arrangement Projective Test – place pictures in a sequence and discuss why in this order. Sentence completion test.  Difficult to hide things here….

Interest & Aptitude testing

  • Interest Inventory – measures occupational and educational interests. Students younger than 10th grade show instability in interests, so results may not be that valid; it is also very easy to give untruthful responses on these. The Strong Interest Inventory (SII), based on Holland’s six types, is the most famous.
    1. Ask people who are happy and successful for three years what they like.
    2. When a person’s profile matches this, then a particular profession might be appropriate.
    3. Self-directed Search (SDS) administered by self and scored self.
    4. Fairly reliable and nonthreatening.
  • Aptitude Test – measure an inherited capability rather than what you have learned.
    1. ACT/SAT/GRE – examples
    2. A good aptitude test must have strong predictive validity.
    3. GATB – a paper-and-pencil battery that assesses students in grades 9-12 and adults.
  • Achievement test – measures what you have learned; primarily used in educational settings (e.g., the National Counselor Examination). Some books describe the GRE as measuring both aptitude and achievement; some tests cross this fine line.
  • STANDARD ERROR OF MEASUREMENT (SEM) – indicates how accurate or inaccurate a test is. The standard error is a measure of the variation expected in a single person’s score if he/she were to take the test again (see the sketch following this list).
    1. EXAMPLE: AN IQ TEST WITH A STANDARD ERROR OF +/- 3.
    2. YOU SCORE 100.
    3. 68% OF THE TIME YOUR TRUE SCORE FALLS BETWEEN 97-103.
    4. A SMALLER PERCENTAGE OF THE TIME YOUR SCORE WILL BE HIGHER OR LOWER.
    5. IT IS INACCURATE TO SAY THAT BOB IS SMARTER THAN NANCY IF ONE SCORES 100 AND THE OTHER 102.
    6. AKA THE CONFIDENCE LIMITS OF THE TEST.
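
A minimal sketch of the confidence-band idea above, assuming the hypothetical SEM of 3 IQ points:

```python
def confidence_band(observed_score: float, sem: float, z: float = 1.0):
    """Band in which the true score falls about 68% of the time (z = 1.0),
    or about 95% of the time (z = 1.96)."""
    return observed_score - z * sem, observed_score + z * sem

# Observed IQ of 100 with a hypothetical SEM of 3:
print(confidence_band(100, 3))        # (97.0, 103.0) -- 68% band
print(confidence_band(100, 3, 1.96))  # (94.12, 105.88) -- ~95% band
```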


NCE – Assessment Stuff (Transforming Raw Scores)

What to do with raw scores???

  1. Raw Scores – a meaningless number by itself. (105)
  2. Derived Scores – make meaning of a raw score by converting it into a derived score; the raw score is converted or compared against some criterion. Three types of derived scores (106):
    1. comparison with scores obtained by other individuals (norm-referenced),
    2. comparison with an absolute score established by an authority (criterion-referenced),
    3. comparison with other scores obtained by the same individual (self-referenced).
  3. Organizing raw score data visually allows counselors to garner information beyond simply scanning a list of raw scores. There are several ways to visually organize raw data (106)
    1. frequency distribution tabulates the number of observations (or number of individuals) per distinct response for a particular variable.
      1. Row and column format
      2. Values in the first column, frequencies in the second column, percentages in the third.
    2. histogram is a graph of bars that presents the data from a frequency distribution in a more visual format
    3. frequency polygon is a line graph of a frequency distribution
    4. a bar graph visually depicts nominal data.

Measures of Central Tendency – measures of central tendency refer to typical score indicators, or the average score, for a distribution of scores.

  1. The mean, or arithmetic average, has algebraic properties that make it the most frequently used measure of central tendency. (108)
  2. The median is the middle score below which one half, or 50%, of the scores fall and above which the other half fall. (108)
  3. The mode is the score that appears most frequently in a set of scores. (In the textbook’s example distribution, the mode is 87.) (108)
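
A minimal sketch of these three measures using Python’s standard library (the scores are hypothetical):

```python
from statistics import mean, median, mode

scores = [87, 90, 85, 87, 92, 78, 87]  # hypothetical assessment scores

print(mean(scores))    # 86.57... -- arithmetic average
print(median(scores))  # 87 -- middle score once the list is ranked
print(mode(scores))    # 87 -- most frequently occurring score
```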

Measures of Variability – measures of variability indicate the extent of individual differences around a measure of central tendency.

  1. RANGE – the distance between the lowest and the highest scores (109)
  2. The interquartile range may be more useful, as it removes potential outliers and focuses on the range around the median (109).
  3. The standard deviation is the most frequently reported measure of variability and represents a standardized number of units from a measure of central tendency. (109).
    1. it is the basis for standard scores
    2. It yields a method of presenting the reliability of an individual test score
    3. It is used in research studies to determine statistical significance.
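
A minimal sketch of the three variability measures described above, using the same hypothetical scores (the `quantiles` helper requires Python 3.8+):

```python
from statistics import stdev, quantiles

scores = [78, 85, 87, 87, 87, 90, 92]  # hypothetical scores, already ranked

data_range = max(scores) - min(scores)  # range: 92 - 78 = 14
q1, _, q3 = quantiles(scores, n=4)      # quartile cut points
iqr = q3 - q1                           # interquartile range around the median
sd = stdev(scores)                      # sample standard deviation

print(data_range, iqr, round(sd, 2))
```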

Characteristics of data distributions –

  1. Normal curve – (109)
    1. data evenly distributed around a measure of central tendency
    2. 34% of scores fall within one standard deviation above the mean and 34% within one standard deviation below
    3. an additional 14% falls between one and two standard deviations above and below
  2. When data are not equally distributed around central tendency…
    1. Skewness: large numbers of individual scores at one end of the distribution…
    2. Kurtosis – refers to the peakedness or height of a distribution
      1. Less variation (tall, narrow curve): leptokurtic
      2. Greater variation (flat, broad curve): platykurtic

Norms and Ranks – Standardized tests are by nature norm-referenced. Norms are established by administering the instrument to a standardization group and then referencing an individual’s score to the distribution of scores obtained in the standardization sample; an individual score can then be compared to the norms (111).

  1. Developmental Norms – there are two types of developmental norms, or comparisons of an individual’s score:
    1. Grade comparisons – the individual is compared to his or her grade level. Grade equivalents are often used on educational achievement tests to aid in comparisons.
    2. Age comparisons – the second type of developmental norm; the individual is compared with others in his or her age group. (112)
  2. Rank – a person’s rank or standing within a group is the simplest norm-referenced statistic, with its interpretation based on the size and composition of the group. It is used extensively for grades.
  3. Percentile rank – used more often because it is not dependent on the size of the comparison group. Percentile scores are expressed in terms of the percentage of people in the comparison group who fall below a given score when the scores are placed in rank order.
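
A minimal sketch of the percentile-rank idea: the percentage of people in a (hypothetical) comparison group who score below a given score.

```python
def percentile_rank(score, norm_group):
    """Percentage of the comparison group scoring below the given score."""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

norm_group = [70, 75, 80, 85, 85, 90, 95, 100, 105, 110]  # hypothetical norms
print(percentile_rank(90, norm_group))  # 50.0 -- half the group scored lower
```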

A standard score is defined as a score expressed as a distance, in standard deviation units, between a raw score and the mean. There are several common types of standard scores, including z scores, T scores, CEEB scores, deviation IQs, and stanines. (113)

  1. Z SCORE – basic standard score is the z score, a score that allows us to estimate where a raw score would fall on a normal curve. (113)
    1. A z score results from subtracting the mean from the raw score and dividing by the standard deviation of the distribution. (114)
    2. Says how many SD’s above or below the mean a score is…
  2. T SCORE – used on a number of the most widely used educational and psychological tests.
    1. By definition, the T score has an arbitrary mean of 50 and a standard deviation of 10 and is rounded to the nearest whole number. (114)
    2. Often displayed in a table giving the T score, percentile rank, and interpretation.
  3. CEEB SCORES (“College Entrance Examination Board”) – reported as standard scores with a mean of 500 and a standard deviation of 100; all scores are reported in increments of 10. The result is a scale that is recognizable for these instruments, although the scores may be thought of simply as T scores with an additional zero added. (115)
  4. Deviation IQ scores – deviation IQ standard scores have been developed to replace ratio IQs. Current results still report the mean as 100, but they report a standard score based on standard deviation units. Tests such as the Wechsler scales and the Stanford-Binet established a mean of 100 and a standard deviation of 15 or 16. (116)
  5. A stanine (based on the term standard nine) is a type of standard score that divides a data distribution into nine parts.
    1. Each stanine, with the exception of stanines 1 and 9, covers half a standard deviation unit on the normal curve.
    2. Stanines have a mean of 5 and an SD of 2.
    3. Rarely used; hard to translate meaning.
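
A minimal sketch converting one raw score into each of the standard scores described above; the raw score, mean, and SD are hypothetical, and the stanine line is a rough approximation rather than a table lookup.

```python
def z_score(raw, mean, sd):
    return (raw - mean) / sd            # SD units above/below the mean

raw, mean, sd = 60, 50, 8               # hypothetical distribution
z = z_score(raw, mean, sd)              # 1.25

t = 50 + 10 * z                         # T score (mean 50, SD 10) -> 62.5
ceeb = 500 + 100 * z                    # CEEB score (mean 500, SD 100) -> 625
dev_iq = 100 + 15 * z                   # deviation IQ (mean 100, SD 15) -> 118.75
stanine = max(1, min(9, round(2 * z + 5)))  # stanine (mean 5, SD 2), capped at 1-9

print(z, t, ceeb, dev_iq, stanine)
```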

Standard Error of Measurement – yields the same type of information as the reliability coefficient but is specifically applicable to the interpretation of individual scores. It represents the theoretical distribution of scores that would be obtained if an individual were repeatedly tested with a large number of exactly equivalent forms of the same test.
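
A sketch of how the SEM is commonly computed, assuming the usual formula that ties it to the test’s standard deviation and reliability coefficient (SEM = SD × √(1 − r)); the numbers are hypothetical.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r), where r is the reliability coefficient."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical IQ-style test: SD = 15, reliability coefficient = .96
print(standard_error_of_measurement(15, 0.96))  # ~3.0 -- matches the +/- 3 example earlier
```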


NCE – Assessment Stuff (Validity)

  1. Validity – Whereas reliability is concerned with whether the instrument is a consistent measure, validity deals with the extent to which meaningful and appropriate inferences can be made from the instrument (96).
    1. Valid for what? The validity of a test varies depending on the purpose and the target population. It’s not a characteristic of the assessment itself but of the meaning of the findings for a sample (96).
    2. Validity also asks whether test scores measure what they are purporting to measure. Two types of invalidity (96):
      1. Construct underrepresentation – refers to failing to include components or dimensions of a construct
      2. Construct irrelevant variance – too much noise or irrelevant things…
  • Types of validity
    1. Face validity – does the assessment appear to measure what it is supposed to (97)
    2. Content validity – do the items on assessment represent the domain it is trying to measure? (97)
    3. Criterion related validity – pertains to validity evidence that is obtained by comparing test scores with performance on a measure. (grades or diagnosis)…(98)
      1. Types of
        1. Concurrent validity – when test scores and the criterion performance scores are collected at the same time. Correlation coefficients are calculated between the scores…
        2. Predictive validity – the client’s performance or criterion measure is obtained some time after the test score… Does it predict what it is supposed to be predicting? (98)
      2. How useful??
        1. Base rates: proportion of people in population who represent the particular characteristic or behavior that is being predicted??? (98)
        2. Incremental validity: extent to which a particular assessment instrument adds to the accuracy of predictions obtained from other tests…. (99)
  • False negative/False positive – Incorrect predictions
  1. The accuracy of classification of individuals into different diagnostic categories or related groups based on a particular cutoff score can be expressed in terms of sensitivity and specificity (see the sketch after this list).
    1. Sensitivity refers to the accuracy of a cutoff score in detecting those people who belong in a particular category. By definition, testing procedures that are sensitive produce few false negatives.
    2. Specificity indicates the accuracy of a cutoff score in excluding those without that condition.
  2. Construct Validity – Another type of validity evidence asks the question: Are the test results related to variables they ought to be related to and not related to variables that they ought not to be? ((Evidence of the theoretical basis of a test’s measures…)) – page 100
    1. CONVERGENT VALIDITY – On the one hand, test scores should be expected to show a substantial correlation with other tests and assessments that measure similar characteristics (convergent validity).
    2. DISCRIMINANT VALIDITY – On the other hand, test scores should not be substantially correlated with other tests from which they are supposed to differ; that is, they should show discriminant validity.
    3. FACTOR ANALYSIS – can determine whether the test items fall together into factors the way the theory suggests they should.
  3. Treatment Validity – Do the results obtained from the test make a difference in the treatment? (101)
  4. Postscript: Validity Scales –
    1. It is also important to determine the accuracy of a client’s responses. That is, patterns in responses that are irrelevant to the actual intention of the assessment (response sets) may emerge. (103)
    2. Validity scales – tools used to determine three types of response distortions:
      1. a client pretending to have some problem or disorder (faking bad),
      2. a client responding in a socially desirable manner to appear more favorable or less symptomatic (faking good), and
      3. a client responding randomly, either intentionally or unintentionally (Van Brunt, 2009b).
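
A minimal sketch of the sensitivity and specificity calculations from item 1 above (the counts are hypothetical):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of people who have the condition that the cutoff correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of people without the condition that the cutoff correctly excludes."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening results against a cutoff score:
print(sensitivity(true_pos=45, false_neg=5))   # 0.90 -- few false negatives
print(specificity(true_neg=80, false_pos=20))  # 0.80 -- some false positives remain
```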


NCE – Assessment Stuff (Reliability)

  1. Reliability – refers to how consistently a test measures and the extent to which it eliminates chance and other extraneous factors in its results. Synonyms for reliability include dependability, reproducibility, stability, and consistency. (89)
    1. Conceptual Understanding –
      1. Measurement Error – the positive or negative bias within an observed score. That is, a score that a person receives on a test is made up of two elements: the person’s true score and an error (89): observed score (X) = true score (T) + error score (e).
        1. Individual error – “Includes test anxiety, motivation, interest in responding in a socially desirable manner, heterogeneity of the group tested, and test familiarity.” (90)
        2. Also Test error & Testing condition error…
      2. Correlation and Reliability –
          1. The correlation statistic assesses the degree to which two sets of measures are related. Each correlation coefficient contains two bits of information: (90)
          1. the sign of the correlation tells whether the two variables tend to rank individuals in the same order (+) or reverse order (-).
          2. And the magnitude of the correlation indicates the strength of the relationship
        2. EXAMPLES OF TYPES OF:
          1. The Pearson product-moment coefficient (r) is the most common and can range in value from +1.00 (a perfect relationship), through .00 (no relationship), to -1.00 (a perfect inverse relationship). (page 90)
          2. Coefficient of Determination – the shared variance between two variables; calculated by squaring the correlation coefficient (see the sketch after this list).
        3. Reliability coefficients usually run within the range of .80 to .95….
      3. Types of Reliability –
        1. Test-Retest Reliability – “measures consistency over time…Indicates the relationship between scores obtained by individuals within the same group on two administrations of the test.” (92)
        2. Alternate-Form Reliability – “compare the consistency of scores of individuals within the same group on two alternate but equivalent forms of the same test…” (93)
        3. Split-Half Reliability – “dividing the test into two comparable halves and comparing the results of the two scores…”
          1. Halving the test length decreases the reliability estimate…
          2. The Spearman-Brown formula can be used to estimate the reliability of the full-length test.
        4. Interitem Consistency – a measure of internal consistency that assesses the extent to which the items on a test are related to each other and to the total score. This measure of test score reliability provides an estimate of the average intercorrelations between all the items on a test. (94)
          1. Kuder-Richardson (KR) Formula
          2. Cronbach’s Alpha
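
A minimal sketch of the statistics above: the Pearson r between two hypothetical sets of scores, the coefficient of determination (r squared), and the Spearman-Brown correction applied to a hypothetical split-half correlation. (`statistics.correlation` requires Python 3.10+.)

```python
from statistics import correlation  # Pearson r; available in Python 3.10+

test_1 = [10, 12, 15, 18, 20, 25]   # hypothetical scores, first administration
test_2 = [11, 13, 14, 19, 21, 24]   # hypothetical scores, second administration

r = correlation(test_1, test_2)      # Pearson product-moment correlation
r_squared = r ** 2                   # coefficient of determination (shared variance)

half_r = 0.70                        # hypothetical correlation between test halves
spearman_brown = (2 * half_r) / (1 + half_r)  # estimated full-length reliability

print(round(r, 2), round(r_squared, 2), round(spearman_brown, 2))  # ~0.98, ~0.97, 0.82
```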

Interrater Reliability – refers to the degree of agreement between two or more independent judges. Interrater agreement is calculated by dividing the number of agreements that an event occurred by the number of possible agreements (95).
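
A minimal sketch of the agreement calculation just described, with hypothetical ratings from two judges:

```python
rater_a = ["yes", "no", "yes", "yes", "no", "yes"]  # hypothetical ratings, judge A
rater_b = ["yes", "no", "no",  "yes", "no", "yes"]  # hypothetical ratings, judge B

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
interrater_agreement = agreements / len(rater_a)    # agreements / possible agreements

print(interrater_agreement)  # 0.833... -- the judges agree on 5 of 6 ratings
```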


NCE – Assessment Stuff (Cognitive Assessments)

Hays (2013) discusses cognitive assessment in two separate chapters.  Chapter nine provides a review of intelligence tests, and chapter ten discusses aptitude and academic testing in education.  Testing methods and differences in these concepts are reviewed below:

Intelligence Testing

Our textbook introduces several theories of intelligence that are helpful in providing a clear description of this concept.  In general, intelligence can be thought of as an ability to “act purposefully, think rationally…and adapt to one’s environment” (Hays, 2013, p. 168).  While master’s-level therapists don’t perform assessments of intelligence themselves, they use this information to assist in “educational and vocational decision making” (Hays, 2013, p. 180).  Knowledge of intelligence testing instruments is important for this reason.  Well-known individual intelligence tests include the Stanford-Binet test and the Wechsler scales (Hays, 2013).  Group intelligence testing, giftedness, and creativity tests are also reviewed in this chapter.

Ability Testing

Chapter ten provides a review of aptitude and academic achievement assessments often utilized in the educational system (Hays, 2013).  Counselors need to be familiar with these assessments since they often assist clients with their educational plans.  Whereas aptitude tests reflect a student’s ability to learn, achievement tests measure what the student has learned (Hays, 2013).  Examples of aptitude tests include the SAT, ACT, and GRE; these tests are often utilized for college admission purposes.  An example of an achievement test is the TerraNova.  This test is used locally throughout the Bellevue School System and provides a review of a student’s academic progress; results are provided to parents periodically over the course of the year and are compared to state scores.  High-stakes testing is utilized to evaluate educational curriculum and instruction (Hays, 2013).  Used under the “No Child Left Behind Act,” this information monitors student performance and holds schools to an educational standard (Hays, 2013).  This chapter concludes with a review of study-habits tests that are helpful in allowing students to improve their performance (Hays, 2013).

References

Hays, D. (2013). Assessment in Counseling (5th ed.).  Alexandria, VA: American Counseling Association.


NCE – Assessment Stuff (Midterm Review Notes???)

MIDTERM REVIEW NOTES

What are the five basic steps in the problem solving model: Pages 6-8

“Because performing an assessment is similar to engaging in problem solving, the five steps in a problem solving model can be used to describe a psychological assessment model.” (Page 6).

  1. Problem Orientation: stimulate counselors and clients to consider various examples….

    1. Instruments promoting self-awareness & self-exploration
    2. Group surveys used to identify problems/concerns
  2. Problem Identification: Clarify the nature of a problem or issue

    1. Screening inventories or problem checklists assess the type and extent of client concerns
    2. Personal diaries and logs identify situations in which problems occur.  Personality inventories can help counselors understand personality dynamics in relationships and work.
  3. Generation of Alternatives: Suggest alternative solutions for a client’s problems, and help the client view the problem differently….

    1. Assessment interview used to determine what techniques have worked in the past to solve a problem…
    2. Checklists or inventories can also yield data that can be used to generate alternatives….
  4. Decision Making: Determine appropriate treatment for the client…

    1. Expectancy tables can show success rate of people with different types of scores or characteristics…
    2. Balance sheets or decision making grids enable clients to compare the desirability and feasibility of various alternatives
  5. Verification: evaluate the effectiveness of a particular solution:

    1. Readministration of tests
    2. Satisfaction surveys

The definition of an assessment procedure

  1. PURPOSE OF ASSESSMENT – (page 6) – “serve diagnostic purposes, help evaluate client progress and are useful in promoting awareness”

    1. Classification (program placement, screening and certification)
    2. Diagnosis and Treatment Planning
    3. Client self-knowledge
    4. Program evaluation
    5. Research to guide theory and technique development
  2. DEFINITION OF ASSESSMENT:

    1. “Assessment is an umbrella term for the evaluation methods counselors use to better understand characteristics of people, places, and things….”(Morrison, page 4)
    2. Assessment is any systematic method of obtaining information from tests and other resources used to draw inferences about characteristics of people, objects, or programs” (Morrison, p4)
  3. EXAMPLE OF ASSESSMENT PROCEDURES: “standardized tests, rating scales, observations, interviews, classification techniques, records. (Morrison, p.4)

What makes a test standardized?

  1. Non-standardized Assessment Programs

    1. “Use self-ratings to help clients organize their thinking about themselves and various opportunities and include computer-based programs and career education workbooks.”
    2. “include rating scales, projective techniques, behavioral observations and biographical measures…(26) less reliable and valid…
  2. Standardized tests:

    1. Standardized tests must meet certain standards during the testing process. These standards include:
      1. Uniform procedures for test administration
      2. Objective scoring
      3. Use of representative groups for test interpretation.
    2. Most standardized tests have clear evidence of reliability/validity
    3. Standardized tests can include the following:
      1. Intelligence tests
      2. Ability tests
      3. Personality inventories
      4. Interest inventories
      5. Value inventories…
  3. “A test is said to be standardized when it has clearly specified procedures for administration and scoring, including normative data…” (page 117)

    1. test developer defines a target population for test use
    2. This target pop has an observable characteristic that varies and needs to be measured
    3. Then the developer administers the test to a sample of that population
    4. This is administered in accordance with specific instructions
    5. Then developer provides descriptive statistics against which to compare results including measures of central tendency, standard deviation, and variability…

What is the difference between qualitative and quantitative assessments? What kind of information does each type yield? 

  1. Qualitative Assessments – Qualitative procedures provide a verbal description of a person’s behavior or situation and place the results in a category. More open-ended and adaptable in counseling…Collects data that does not lend itself to quantitative methods but rather to interpretive criteria (EXAMPLES)

    1. Nominal scales – do not possess magnitude, equal intervals, or an absolute zero. Nominal scales provide descriptive criteria and are used to classify and name.
    2. Ordinal scales – refer to the rank ordering of nominal categories; Likert scale responses are an example.  You can’t average or create a mean for these, since there is no zero point or equal intervals.
  2. Quantitative Assessments – Quantitative procedures yield a specific score on a continuous scale. They provide greater reliability and validity. Collects data that can be analyzed using quantitative methods, i.e., numbers and statistical analysis.

    1. Interval measures – possess magnitude and equal intervals. You can add and subtract but not divide or multiply, since there is no absolute zero.
    2. Ratio scales have magnitude, equal intervals, and an absolute zero. Can utilize all statistical techniques in this…

Measures of Central Tendency: the average score for a distribution of scores

  1. MEAN = the average, it is equal to the sum of the scores divided by the number of individuals in the group

  2. MEDIAN = the middle score below which one half, or 50%, of the scores fall and above which the other half fall

  3. MODE = the score that appears most frequently in a set of scores.

Know the normal curve and standard:

  1. MEASURES OF VARIABILITY: indicate the extent of individual differences around a measure of central tendency.

    1. RANGE – indicates distance between highest and lowest
    2. INTERQUARTILE RANGE: range around the median.
  2. STANDARD DEVIATION: most frequent measure of variability and represents a standardized number of units from a measure of central tendency.

    1. The larger the value, the greater the dispersion of scores and variability.
    2. Popular because basis for standard scores and helps represent scores accurately.
    3. Calculated by dividing the sum of squared deviations by the sample size minus one and taking the square root of the value (see the sketch after this list)…
  3. NORMAL CURVE: In a perfect world, a well-distributed set of scores around the measure of central tendency would yield a bell curve.  (P110)

    1. The value of the standard deviation divides the raw score range into approximately six parts, with 3 above and 3 below the mean..
    2. Scores outside these 3 deviations above/below are rare
      1. 34% of the sample falls between the mean and 1 SD above
      2. 34% also falls between the mean and 1 SD below
      3. An additional 14% falls between one and two SDs above/below
  4. NO SKEWNESS – tilting to one side

  5. NOT TOO MUCH KURTOSIS – narrow or broad (see SD’s above)
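
A minimal sketch of the standard deviation calculation described above (sum of squared deviations divided by n − 1, then the square root), using hypothetical scores:

```python
import math

scores = [80, 85, 90, 95, 100]  # hypothetical scores
mean = sum(scores) / len(scores)

sum_of_squares = sum((s - mean) ** 2 for s in scores)
sd = math.sqrt(sum_of_squares / (len(scores) - 1))  # sample standard deviation

print(mean, round(sd, 2))  # 90.0 and ~7.91
```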

What are twelve broad factors of test user competencies? (page 44)

  1. Avoid errors in scoring and recording
  2. Refrain from labeling people with personally derogatory terms like dishonest on the basis of a test score that lacks perfect validity
  3. Keep scoring keys and test materials secure
  4. Seeing that every examinee follows directions
  5. Using settings for testing that allow optimal performance
  6. Refraining from coaching or training individuals/groups on test items.
  7. Being willing to give interpretation and guidance to test takers in counseling situations
  8. Not making photocopies of copyrighted materials
  9. Refraining from using homemade answer sheets that do not align properly with scoring keys.
  10. Establishing rapport with examinees to obtain accurate scores
  11. Refraining from answering questions from test takers in greater detail than the test manual permits
  12. Not assuming that a norm for one job applies to a different job.

Explain what is meant by “grade equivalent”

  1. Utilized in educational assessments to compare a child’s score against criterion-referenced scores that indicate how the child measures up against an expected level of performance according to either age-related or grade-related criteria…
  2. A grade equivalent (GE) score is described as both a growth score and a status score. As is common with scores that can be used in both major categories, GEs do not do a very good job in either category. However, what a GE does is indicate where a student’s test score falls along a continuum. The GE is expressed as a decimal number (e.g., 4.8): the digit(s) to the left of the decimal represent the grade, and the digit(s) to the right of the decimal represent the month (we assume 10 months per school year). The GE of a given raw score on any test indicates the grade level at which the typical student earns this raw score. For example, if a seventh-grade student earned a GE of 8.4, her raw score is like the raw score the typical student would earn on the same test at the end of the fourth month of eighth grade (see the sketch below).
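
A minimal sketch of reading a GE score as described above; the GE value is hypothetical and the 10-months-per-year assumption follows the note.

```python
def parse_grade_equivalent(ge):
    """Split a GE like 8.4 into (grade, month), assuming 10 school months per year."""
    grade = int(ge)
    month = round((ge - grade) * 10)
    return grade, month

grade, month = parse_grade_equivalent(8.4)
print(grade, month)  # 8 4 -- the typical score at the 4th month of 8th grade
```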

Types of Reliability

  1. Definition of Reliability – how consistently a test measures and the extent to which it eliminates the chance of error (Dependability/Reproducibility).

    1. TRUE SCORE + ERROR = OBSERVED SCORE
    2. ERRORS CAN INCLUDE: (a) r/t individual; (b) r/t test itself; (c) r/t test conditions
  2. Test- Retest Reliability – Measures the consistency over time. The correlation coefficient in this case indicates the relationship between scores obtained by individuals within the same group in two administrations of the test (92)

  3. Alternate/Parallel Form – comparing consistency scores of individuals within the same group on two alternate but equivalent forms of the same test (92)

  4. Internal Consistency Measures of Reliability;

    1. Split-half Reliability – a popular form of establishing reliability because it can be obtained from a single administration by dividing the test into comparable halves and comparing the resulting scores for each individual (94)
      1. test cut in half
      2. compare results from each half
      3. Spearman-Brown Formula
    2. Interitem Consistency = is a measure of internal consistency that assesses the extent to which the items on a test are related to each other and to the final score.
  5. Interrater Reliability –refers to the degree of agreement of two or more independent judges of a test.

What does validity mean and what are the different types of validity?

  1. DEFINITION OF VALIDITY: Whereas reliability is concerned with whether the instrument is a consistent measure, validity deals with the extent to which meaningful and appropriate inferences can be made from the instrument.  (Page 96) Does the test measure what it intends to measure?

    1. Face validity – not really evidence of validity; determined by whether the assessment ‘looks like’ it is measuring what it is supposed to measure.
    2. Content validity – representativeness of items from a population of items…do the sample items represent/reflect all major components of the domain they are trying to measure?
    3. Criterion-related validity – degree of prediction of a client’s performance on a criterion assessed at the same time (concurrent) or some time in the future (predictive); validity evidence obtained by comparing test scores with performance on a criterion measure (job satisfaction, grades, diagnosis, etc., as the comparison)
      1. Concurrent validity – test score and criterion measure obtained at the same time
      2. Predictive validity – test score predicts the results of a future criterion measure given later…
    4. Construct Validity – Are the test results related to the variables they ought to be related to and not related to the variables that they ought not be? The degree to which the test relates to a theoretical construct.
      1. Discriminant Validity – test scores do not correlate with tests that measure something different.
      2. Convergent Validity – test scores correlate with tests and assessments that measure the same characteristics.
    5. Treatment Validity – Do results from the test make a difference in terms of treatment?

Interpret the meaning of a correlation.

  1. Correlation and Reliability:

    1. The correlation statistic assesses the degree to which two measures are related. Each correlation coefficient contains two bits of information:
      1. The sign tells whether the two variables tend to rank individuals in the same (direct relationship) or reverse order (inverse relationship)
      2. The value indicates the strength of the relationship….
    2. PEARSON PRODUCT-MOMENT (r) – the most common; can range from +1.00 (a perfect positive relationship), through .00 (no relationship), to -1.00 (a perfect inverse relationship)
  2. Standard reliability coefficients = usually run within a range of .80 to .95 but what is considered to be acceptable varies depending on the test circumstances and type of reliability…

Four models of helping and coping.

  1. COMPENSATORY MODEL – people are not responsible for problems but are responsible for solutions = NO BLAME
  2. MEDICAL MODEL – People not responsible for problems or solutions = VICTIM OF DISEASE
  3. ENLIGHTENMENT MODEL – People are not responsible for solutions but are responsible for problems = UNDERSTANDING
  4. MORAL MODEL= people responsible for problems and solutions = ATTRIBUTION OF RESPONSIBILITY

What are some instruments frequently used by community mental health counselors?

  1. WHAT IS A COMMUNITY MENTAL HEALTH COUNSELOR? ((say you learned about this a bit through your experience in the practicum))

    1. Community counseling is a type of counseling that is used to help communities that are suffering from psychological or social discord, for one reason or another.
    2. Professionals in this field will often try to treat individuals in the community for whatever psychological problem that ails them.
    3. They will also attempt to prevent future problems as well….Community counselors attempt to solve widespread community problems that are social or psychological in nature. In order to do this, they will often work with individuals as well as a community as a whole.
    4. There may be a number of different problems that can plague individuals in dysfunctional communities. Many of these problems are often related. Community counselors will often speak with several individuals, offering guidance, therapy, and counseling. While trying to help these individuals overcome their challenges in life, though, a community counselor will also attempt to get to the root of the problem.
  2. WHAT ASSESSMENTS ARE UTILIZED?

    1. First, you would need to figure out what is wrong with the people who come in and visit so you can determine what services they need.

      1. This would require an intake interview
      2. Mental status exam
      3. Inventories for assessing mental disorders:
        1. Psychiatric diagnostic screening questionnaire
        2. Patient health questionnaire (129)
      4. Consult DSM manual
    2. If they have a mental health or substance abuse problem they can utilize assessments to determine this. Examples can include the following:

      1. Substance abuse assessments
      2. Depression inventories
      3. ADHD test
      4. Anxiety and fear measures
      5. Self-injury assessments
    3. Surveys are also needed occasionally to determine how the community as a whole is doing from time to time; these surveys can be administered and data compiled accordingly.

What ten topics should most intake interviews cover?

  1. General appearance and behavior
  2. Presenting problem
  3. History of current problem and related problems
  4. Present level of functioning in work, relationships, and leisure activities
  5. Use of alcohol or other drugs, including medications
  6. Family history of mental illness
  7. History of physical, sexual, or emotional abuse
  8. Risk factors including the urge to harm self/others
  9. Previous counseling
  10. Attitude of client towards the counseling process.

What should be included in a problem checklist in counseling?

  1. INTAKE FORMS – The intake form should be kept relatively short so that it does not become an imposition in counseling. As counseling progresses, the form can be supplemented with additional questionnaires designed for particular issues, such as career planning, study skills, or relationships.

  2. SCREENING INVENTORIES: “counselors often utilize brief, self-report screening instruments to obtain a preliminary overview of a client’s concerns.” (123)

    1. INVENTORY OF COMMON PROBLEMS = Assess for the Nature and intensity of concerns (PAGE 126)
    2. SYMPTOM CHECKLIST-90-REVISED (SCL-90-R) – describes a client’s symptoms and their severity (compulsive, sensitivity, depressed, anxiety, hostile, phobic, paranoid, psychotic)
    3. Suicide risk assessment.

How do clients differ when they enter counseling? Differences in the degree of openness and readiness for change:

  1. Precontemplation – the individual is not aware of problems and has no intention to change behavior in the foreseeable future
  2. Contemplation – individuals are aware of their problems but have not made a serious commitment to do anything
  3. Preparation stage – individuals have begun to make small changes in their problematic behaviors with the intention of making more changes within one month
  4. Action stage – individuals have successfully changed their behavior for periods of time
  5. Maintenance stage – the goal is to maintain changes

What is the difference between Aptitude & Achievement? Give an example of each type.

  1. Assessment of aptitude is generally thought of as an ability to acquire a specific type of skill or knowledge; aptitude tests are typically used for prediction purposes. Academic and scholastic aptitude is related to educational program evaluation and admission (SAT / ACT / GRE).
  2. Assessment of achievement attempts to measure what learning has taken place under a relatively standardized set of conditions or as a result of a controlled set of experiences. Designed to measure what has already been learned. Whereas aptitude r/t learning ability, achievement r/t what is known.  (TerraNova / Iowa Basic Skills)

What steps have been taken to make sure only competent users administer tests? What are the qualifications for purchasing tests? Competence in Testing: page 49

  1. Test publishers set guidelines for the level of competency that determines who is able to use a test….
    1. A LEVEL – NO QUALIFICATIONS
    2. B LEVEL – MASTER’S LEVEL
    3. C LEVEL – PhD OR EDUCATION-RELATED FIELD
    4. Q LEVEL – OTHER SPECIFIED
  2. Professional associations also create their own ethical codes.
  3. States have their own guidelines.
  4. Fair Access Coalition on Testing (FACT)

What are the guidelines for test interpretation –

  1. Tests are not used by others to make decisions for or against a client.
  2. Are to maintain confidentiality.
  3. Test users are to ensure that information is not misused by others.
    1. Is the person receiving info qualified to understand and interpret the information
    2. Should make sure interpreted in a way the person understands
  4. Clients have right to know and understand results.

What is the role of career assessment and what are the types of career assessment? What are the types and role of educational assessments? See page 202…

  1. Role of career assessment: to help clients explore both the process and content of career development. The uses of career assessments include:

    1. Prediction – future career performance
    2. Discrimination – evaluate ability and interests
    3. Monitor – assess progress
    4. Evaluate – measure goals and how well met
  2. Types of career assessments:

    1. Career readiness assessments (maturity and adaptability)
    2. Assessment of an individual’s values, interests and aptitudes
    3. Educational and career planning


NCE – Assessment Section (MMPI)

QUESTION: “Prepare a summary describing the content, use, validity, limitations, etc of each tool…compare and contrast the two tools.” 

(MMPI-2) “Minnesota Multiphasic Personality Inventory-2”

Purpose & Use

The purpose of the MMPI-2 is to assess for patterns of personality indicative of a mental disorder in order to aid in treatment planning (Butcher et al., 1990; Hays, 2013).  The MMPI-2 is one of the most popular tests utilized by clinicians today and is the subject of extensive research (Hays, 2013).  It is intended for use in adults over the age of 18, may be administered individually or in a group, and takes approximately 90 minutes to complete (Butcher et al., 1990).

MMPI-2 Content

The MMPI-2 represents the first revision of this test since 1942.  Completed in 1989, the MMPI-2 revision goals included: (1) updating the normative sample, (2) preserving the generalizability of past research, and (3) removing antiquated items (Butcher et al., 1990).  Ninety items were deleted from the original test, and 85% of all items were modified in some way (Butcher et al., 1990).  The test contains clinical scales that indicate the presence of mental health issues and validity scales that measure a client’s attitudes while taking the test (Hays, 2013, p. 257).  The ten clinical scales in the MMPI-2 include: (1) hypochondriasis, (2) hysteria, (3) depression, (4) psychopathic deviate, (5) masculinity/femininity, (6) paranoia, (7) psychasthenia, (8) schizophrenia, (9) hypomania, and (10) social introversion (Hays, 2013, p. 257).  In contrast, the validity scales measure distortions in a client’s responses and the degree to which they are either “faking bad…[or] faking good” (Hays, 2013, p. 101).

Normative Sample

The normative sample consists of 1,462 women and 1,138 men ranging in age from 18 to 84.  The sample is representative of the U.S. population in terms of age, relationship status, race/ethnicity, and geography (Butcher et al., 1990).  Nonetheless, Hays (2013) states that minority groups have obtained scores that vary significantly from the white population in research.  This is likely due to key cultural differences that must be accounted for in the interpretation of results (Hays, 2013).

Validity & Reliability

The MMPI-2 was designed utilizing a “logical content method” (Hays, 2013, p. 256).  This method involves the identification of items that appear to relate to the characteristics being assessed (i.e., hypochondriasis, depression, etc.) (Hays, 2013).  The limitation of this method is the assumption of validity on “face value” (Hays, 2013, p. 256).  Additionally, efforts were not taken to re-evaluate the external validity of the MMPI-2 measures (Butcher et al., 1990).  However, the test consists of a series of scales that are said to contain high levels of internal consistency and face validity (Butcher et al., 1990).  Research is required to determine the level of external validity of the revised MMPI-2 (Butcher et al., 1990).

MMPI-2 Limitations

Changes in MMPI-2.

The cutoff T-score utilized to indicate the presence of psychological problems was dropped from 70 to 65 in the new MMPI-2 (Hays, 2013).  This change is the result of substantial variations between the normative samples of the MMPI and MMPI-2.  According to the standards of the original MMPI, individuals from the MMPI-2 normative sample were appearing “too normal” (Butcher et al., 1990).  Therefore, while the MMPI-2 is very closely related to the MMPI, this degree of variation indicates they should not be treated as equivalent measures (Butcher et al., 1990).  Consequently, further research is needed to determine how much of the previous research is generalizable to the MMPI-2.

Limitations in the Normative Sample.

While the MMPI-2 normative sample is a significant improvement over the old version, it isn’t without flaws.  In the MMPI-2 normative sample, 50% of males and 42% of females have a bachelor’s degree or higher (Butcher et al., 1990).  In contrast, the 1980 U.S. Census shows that 20% of males and just 13% of females were similarly educated (Butcher et al., 1990).  Since the 1980 U.S. Census was a guideline upon which the normative sample was designed, this is a significant oversight (Butcher et al., 1990).  It presents a key limitation in the generalizability of results to certain segments of the population.

Diagnostic Limitations.

Hays (2013) states that the MMPI-2 cannot be used for diagnostic purposes to “classify individuals into psychiatric categories with a high degree of accuracy” (p. 257).  Instead, it provides a description of personality dimensions and typical behavioral patterns in an individual (Hays, 2013).

(DAPP-BQ) Dimensional Assessment of Personality Pathology

Purpose & Use

The DAPP-BQ is a self-report questionnaire utilized to assess basic personality disorder traits in the clinical population.  The DAPP-BQ has a normative sample of 2,726 individuals that allows for the measurement of adaptive personality traits.  Additionally, the DAPP-BQ contains a clinically diagnosed sample of 656 individuals recruited from both outpatient and inpatient settings (Livesley & Jackson, 2009).  This also allows for the assessment of maladaptive personality measures (Livesley & Jackson, 2009).  Finally, since personality pathology is a frequent vulnerability factor for psychiatric disorders, it is useful in providing additional descriptive detail about an individual’s underlying psychopathology (Livesley & Jackson, 2009).

DAPP-BQ Content

The DAPP-BQ consists of 290 items, developed based on a review of the literature and input from a panel of experts (Livesley & Jackson, 2009).  Test items were designed to represent descriptive features of personality disorders and traits (Livesley & Jackson, 2009).  The 18 dimensions measured in the DAPP-BQ were developed to cut across an array of dimensions based on the DSM-III manual (Livesley & Jackson, 2009).

DAPP-BQ Validity

Overall, the DAPP-BQ is a fairly valid measure of personality disorders, with approximately 20 years of research evidence to support it (Livesley & Jackson, 2009).  The validity scales utilized in the MMPI-2 to assess a client’s attitudes while taking a test are not available in this instrument.  Since this test measures only major elements of personality disorders from the DSM-III manual, the “content validity [of the DAPP-BQ] should be taken with a grain of salt” (Livesley & Jackson, 2009, p. 9).  Nonetheless, this test does have good criterion validity, as determined by comparing it with other measures utilized to assess similar constructs (Livesley & Jackson, 2009).

DAPP-BQ Limitations          

Little detail is given regarding the representativeness of the normative sample for the DAPP-BQ (Livesley & Jackson, 2009).  This naturally limits the generalizability of results.  Additionally, the DAPP-BQ doesn’t include general criteria for the various personality disorders and should not be utilized for diagnostic purposes (Livesley & Jackson, 2009).  Instead, it is useful for further conceptualizing and understanding elements of individual personality pathology once a diagnosis has already been made (Livesley & Jackson, 2009).

Comparison of MMPI-2 & DAPP-BQ

The MMPI-2 is utilized to assess for personality patterns indicative of a personality disorder and measures ten broad areas, including depression, paranoia, etc. (Butcher et al., 1990).  In contrast, the DAPP-BQ is designed to measure adaptive and maladaptive personality traits and is based on information from the DSM-III manual (Livesley & Jackson, 2009).  Additionally, key differences can be found in the normative samples of these two tests: the MMPI-2 normative sample was designed to reflect demographic data from the 1980 census (Butcher et al., 1990), while the DAPP-BQ provides test users little information about its normative sample (Livesley & Jackson, 2009).  Additionally, the DAPP-BQ utilizes both a normative sample and a clinically diagnosed group as reference points for assessing adaptive and maladaptive personality traits (Livesley & Jackson, 2009).  Finally, when comparing the validity of these two tests, the MMPI-2 appears to have a greater history of research in support of its validity (Butcher et al., 1990; Livesley & Jackson, 2009).

References

Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1990). Minnesota Multiphasic Personality Inventory-2.
Hays, D. G. (2013). Assessment in counseling: A guide to the use of psychological assessment procedures (5th ed.). Belmont, CA: Brooks/Cole, Cengage Learning.
Livesley, W. J., & Jackson, D. N. (2009). Dimensional Assessment of Personality Pathology–Basic Questionnaire.


NCE – Assessment Section (SASSI)

Purpose of Assignment

The purpose of this assignment is to evaluate the results of a SASSI report for a 38-year-old male named Jim (The SASSI Institute, 2008).  Jim has been referred to a therapist for a substance abuse evaluation after an arrest for domestic violence (The SASSI Institute, 2008).  Since Jim isn’t here of his own volition, the therapist uses the Substance Abuse Subtle Screening Inventory (SASSI-3).  Substance abuse involves a failure to fulfill major role obligations, while alcohol dependence refers to symptoms of withdrawal and increased tolerance (Hays, 2013, p. 143).  Miller et al. (2001) describe substance abuse as an especially “cunning” (p. 3) disorder that can dominate someone’s life yet remain unnoticed.  The SASSI-3 is useful in Jim’s case because it can detect substance abuse even in individuals who are likely to deny having a problem (Hays, 2013; Miller et al., 2001).  In the next section, I briefly review the results of Jim’s SASSI report.

Information Provided

Collateral Information

Collateral information indicates that this isn’t Jim’s first arrest for domestic violence (The SASSI Institute, 2008).  Additionally, not only does Jim have an extensive history of substance use, his mother is an alcoholic as well (The SASSI Institute, 2008).

FVA & FVOD SCORES  

The SASSI-3 contains 26 direct questions that clients respond to using a Likert-type scale (Miller et al., 2001).  Responses indicate how often individuals have experienced a “specified event” in their lifetime (Blackwell & Toriello, 2005).  Collectively, these items indicate not only the presence of substance misuse but also a client’s willingness to acknowledge the problem.  The Face Valid Alcohol (FVA) score is comprised of 12 items about an individual’s past alcohol use (Miller et al., 2001).  The Face Valid Other Drug (FVOD) score is comprised of 14 items about an individual’s past drug use (Miller et al., 2001).  Jim’s FVA and FVOD scores are quite low, indicating that he does not acknowledge a substance use history (The SASSI Institute, 2008).
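
As a simple illustration of how face-valid raw scores of this kind are formed (the item labels and ratings below are invented for demonstration and are not the actual SASSI-3 items or scoring key), each scale score is essentially the sum of the client’s frequency ratings on the items assigned to that scale:

```python
# Invented sketch: summing Likert-type frequency ratings into face-valid raw
# scores. These are NOT the actual SASSI-3 items or scoring key.
fva_items = [f"A{i}" for i in range(1, 13)]    # 12 alcohol-related items
fvod_items = [f"D{i}" for i in range(1, 15)]   # 14 other-drug items

# Client ratings on a 0-3 frequency scale (e.g., never ... repeatedly).
responses = {item: 0 for item in fva_items + fvod_items}
responses["A1"] = 1        # one alcohol item endorsed at a low level

fva_raw = sum(responses[item] for item in fva_items)
fvod_raw = sum(responses[item] for item in fvod_items)
print(fva_raw, fvod_raw)   # low raw totals, consistent with a profile like Jim's
```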

Measures of Defensiveness

The Defensiveness (DEF) score is a measure of an individual’s willingness to acknowledge a substance abuse problem.  Individuals with high DEF scores are often said to be “faking good” (Miller et al., 2001), and high scores often reflect an attempt to conceal evidence of a problem.  T-scores greater than 40 indicate the presence of defensiveness (The SASSI Institute, 2015).  Interestingly, Jim’s score is 70, well above this cut-off point.  This score indicates he is unwilling to acknowledge a problem, and therapists should proceed with caution when providing feedback on his test scores (The SASSI Institute, 2015; Miller et al., 2001).  Feedback entails walking a fine line between confrontation and enabling, so that the individual can begin to take in the information provided (Miller et al., 2001).  Avoid judging and labeling such clients in order to build an alliance.  Since these individuals tend to be highly resistant to change, assisting them in acknowledging the value of change is a primary therapeutic goal (Hays, 2013; Miller et al., 2001).
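
Scale scores like DEF are reported as T-scores, which standardize a raw score against the normative sample so that the mean is 50 and the standard deviation is 10. The sketch below shows that generic conversion and the cut-off check described above; the normative mean and SD used here are hypothetical placeholders, not the published SASSI-3 norms:

```python
# Generic T-score conversion (mean 50, SD 10) plus a cut-off check.
# NORM_MEAN and NORM_SD are hypothetical, not the published SASSI-3 norms.
NORM_MEAN = 5.0
NORM_SD = 2.0

def to_t_score(raw: float) -> float:
    z = (raw - NORM_MEAN) / NORM_SD      # standardize against the norm group
    return 50 + 10 * z

def is_defensive(t_score: float, cutoff: float = 40) -> bool:
    # The cut-off of 40 is the one stated in the text above.
    return t_score > cutoff

jim_t = 70                               # Jim's reported DEF T-score
print(is_defensive(jim_t))               # True: well above the cut-off
```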

Family History Measures

SYM Score.  Two family history measures appear in Jim’s SASSI results and are useful for contextualizing the scores discussed thus far.  The Symptom (SYM) score assesses the correlates and causes of substance abuse (Blackwell & Toriello, 2005).  Miller et al. (2001) state it is useful in determining whether individuals are part of a social system that focuses strongly on alcohol (p. 2).  Jim’s score is well below the T-score of 7 and indicates he acknowledges no family history of substance abuse (Miller et al., 2001).

FAM Score.  The Family Versus Control Subjects (FAM) scale identifies response patterns similar to those of individuals with family members who use substances (Miller et al., 2001).  Individuals with a high FAM score display a high degree of focus on others for their own well-being (Miller et al., 2001).  This other-focused tendency frequently results in a high need for control, an inability to trust, and an inability to maintain healthy boundaries.  While Jim’s SYM score indicates he does not acknowledge a family history of substance abuse, his high FAM score contradicts this.  High FAM scores are indicative of a sense of worth and happiness that depends on others (Miller et al., 2001).  These traits appear to correlate with his domestic abuse history (The SASSI Institute, 2008).  The contradiction between his FAM and SYM scores indicates a low level of insight into this history (Miller et al., 2001).

Obvious Attributes Score (OAT)

OAT scores indicate an ability to acknowledge traits commonly associated with substance abusers, such as impulsiveness, self-pity, and low frustration tolerance (Miller et al., 2001).  In this respect, the OAT score is a measure of an individual’s level of insight.  Jim’s OAT score is 2, which places him in the 15th percentile (The SASSI Institute, 2008).  Low OAT scores such as this indicate a low level of insight and a high degree of denial (Miller et al., 2001).
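
A percentile rank like this simply locates a raw score within the normative group: the percentage of that group scoring below the value (with ties usually counted as half). The sketch below illustrates the calculation with an invented norm group chosen so that a raw score of 2 lands near the 15th percentile; it is not the actual OAT norm data:

```python
# Illustrative percentile-rank calculation; the norm-group scores are invented.
def percentile_rank(score, norm_scores):
    """Percent of the norm group below the score, counting ties as half."""
    below = sum(1 for s in norm_scores if s < score)
    equal = sum(1 for s in norm_scores if s == score)
    return 100 * (below + 0.5 * equal) / len(norm_scores)

norm_group = [1] * 2 + [2] * 2 + [3] * 10 + [4] * 4 + [5] * 2   # hypothetical raw scores
print(round(percentile_rank(2, norm_group)))                    # 15 for this toy norm group
```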

 Information Needed

Overall, Jim’s scores contain a series of contradictions.  While his FVA and FVOD scores indicate a low probability of substance use, his DEF score indicates an unwillingness to admit to a problem.  Additionally, while his SYM score indicates he does not acknowledge a family history of substance use, his FAM score suggests otherwise.  Finally, Jim’s OAT score indicates a low level of insight.  When viewing this information alongside collateral reports, a therapist should be suspicious of the accuracy of the information he provides.  Whether these conflicting results reflect low insight or intentional denial remains to be seen.  Further assessment is required to determine the presence of a substance abuse disorder.

Substance Abuse Assessments

If a substance abuse disorder does indeed exist, the class handout suggests toxicology screens and a no-use contract (The SASSI Institute, 2008).  Further assessment of his substance abuse history can start with a motivational interview.  This tool is useful with individuals who are highly resistant to change and can help build an alliance with Jim moving forward (Hays, 2013).  A review of the information in the SASSI report can be a jumping-off point for further discussion.  In doing so, it will be important to assess the degree of discrepancy between Jim’s perceptions and reality.  The ultimate goal in this case would be to help Jim develop greater awareness and insight.

Abuse History & Intimate Partner Violence

The class handout suggests referring Jim to a practitioner who treats perpetrators of domestic violence (The SASSI Institute, 2008).  Assessing Jim’s recent history of interpersonal violence is problematic at best.  I would surmise that it might reflect deeply ingrained habits he developed in his own childhood (The SASSI Institute, 2008).  A family genogram would be helpful for outlining Jim’s own abuse history and allowing him to explore the effects it has had on him (Hays, 2013).  A therapy group can also help Jim begin acknowledging these issues (Hays, 2013).

References

Blackwell, T. L., & Toriello, P. J. (2005). Substance abuse subtle screening inventory-3. Rehabilitation Counseling Bulletin, 48(4), 248-250.
Hays, D. G. (2013). Assessment in counseling: A guide to the use of psychological assessment procedures (5th ed.). Belmont, CA: Brooks/Cole, Cengage Learning.
Miller, F. G., Renn, W. R., & Lazowski (2001). SASSI scales: Clinical feedback. Springville, IN: The SASSI Institute.
The SASSI Institute (2008).  Defensiveness and non-voluntary clients: The importance of additional assessment data.  Retrieved from:  https://www.sassi.com/wp-content/uploads/2013/12/Defensiveness-and-Non-voluntary-Clients.pdf
The SASSI Institute (2015). Adult SASSI-3 guidelines. Retrieved from: sassi.com


NCE Study – Assessment Section (basic skills)

(((FYI))) – This information is taken from notes for a class. I’m re-reading it for the NCE Exam

ASSIGNMENT QUESTION:  What are the five basic steps in the problem-solving model?

“Because performing an assessment is similar to engaging in problem solving, the five steps in a problem solving model can be used to describe a psychological assessment model” (p. 6).

  1. Problem Orientation: stimulate counselors and clients to consider various examples….
    1. Instruments promoting self-awareness & self-exploration
    2. Group surveys used to identify problems/concerns
  2. Problem Identification: Clarify the nature of a problem or issue
    1. Screening inventories or problem checklists assess the type and extent of client concerns
    2. Personal diaries and logs identify situations in which problems occur.
  3. Generation of Alternatives: Suggest alternative solutions for a client’s problems, and help them view the problem differently….
    1. Assessment interview used to determine what techniques have worked in the past to solve a problem…
    2. Checklists or inventories can also yield data that can be used to generate alternatives….
  4. Decision Making: Determine appropriate treatment for the client…
    1. Expectancy tables can show the success rate of people with different types of scores or characteristics (see the sketch after this list)…
    2. Balance sheets or decision making grids enable clients to compare the desirability and feasibility of various alternatives
  5. Verification: evaluate the effectiveness of a particular solution:
      1. Readministration of tests
      2. Satisfaction survey
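
As a loose illustration of the expectancy-table idea from step 4 (all scores and outcomes below are fabricated for demonstration), such a table simply cross-tabulates score bands against the proportion of people in each band who later succeeded:

```python
# Fabricated example of an expectancy table: success rates by score band.
records = [   # (test score, succeeded?) -- invented follow-up data
    (35, False), (42, False), (48, True), (55, True), (58, False),
    (63, True), (67, True), (71, True), (78, True), (84, True),
]
bands = [(30, 49), (50, 69), (70, 89)]

for low, high in bands:
    outcomes = [ok for score, ok in records if low <= score <= high]
    rate = 100 * sum(outcomes) / len(outcomes)
    print(f"{low}-{high}: {rate:.0f}% succeeded (n={len(outcomes)})")
```

A client whose score falls in a given band can then be shown the historical success rate for people who scored similarly.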

ASSIGNMENT QUESTION:  What is the definition of an assessment procedure?

THE PURPOSE OF AN ASSESSMENT – Assessments “serve diagnostic purposes, help evaluate client progress and are useful in promoting awareness” (p. 6). A LIST OF USES…

  1. Classification (program placement, screening, and certification)
  2. Diagnosis and Treatment Planning
  3. Client self-knowledge
  4. Program evaluation
  5. Research to guide theory and technique development

DEFINITION OF ASSESSMENT: “Assessment is an umbrella term for the evaluation methods counselors use to better understand characteristics of people, places, and things…” (Morrison, p. 4)

ASSIGNMENT QUESTION:  List Some Core Assessment Skills

Assessment is defined as a “systematic method of obtaining information…to draw inferences about characteristics of people” (Hays, 2013, p. 4).  The assessment process consists of five steps: (1) test selection, (2) test administration, (3) interpretation, (4) communication of findings, and (5) outcome assessment (Hays, 2013).  Our textbook provides a review of four core skills that are vital when conducting mental health assessments.  These are discussed below:

Problem Solving

In chapter one, Hays (2013) introduces the concept of assessment by describing it as a problem-solving process (p. 6).  The initial steps in this process involve identifying the problem.  With a clear understanding of the issues at hand, the next step involves the consideration of alternatives.  Choosing among these alternatives and evaluating the results conclude the process.  While discussed only briefly, this concept appears critical to the process of mental health assessment.  Therefore, I’ve included it as a core assessment skill.

Basic Counseling Skills

Chapter two provides an overview of assessment by reviewing the counseling skills critical to the process (Hays, 2013).  The assessment process involves “engaging and collaborating with clients throughout the counseling relationship” (Hays, 2013, p. 40).  To begin, the therapist must first assess a client’s level of readiness to change (Hays, 2013).  This can guide the use of motivational skills and psychoeducation to initiate the assessment process.  Working closely with clients when selecting and administering tests encourages active participation.  Contextualizing results and communicating findings allows the client to make use of this information.

Statistical Measures

Chapters five and six review basic statistical measures as they pertain to mental health assessment (Hays, 2013).  Raw scores are meaningless by themselves.  In order to understand test findings, therapists need to use basic statistical measures to interpret these raw scores (Hays, 2013).  An understanding of basic measurement concepts such as reliability, variability, and scales of measurement is critical to mental health assessment (Hays, 2013).  Knowledge of basic statistical concepts, such as measures of central tendency and variability, is also important (Hays, 2013).
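
For instance, the descriptive measures the text refers to can be computed directly from a set of raw scores, and a raw score only becomes interpretable once it is expressed relative to the group (for example, as a z-score). The raw scores below are arbitrary and used only for illustration:

```python
# Arbitrary raw scores used to illustrate central tendency, variability,
# and a standard (z) score -- the context a raw score needs to be meaningful.
from statistics import mean, median, mode, stdev

raw_scores = [12, 15, 15, 18, 20, 22, 25]

m = mean(raw_scores)             # measures of central tendency
md = median(raw_scores)
mo = mode(raw_scores)
sd = stdev(raw_scores)           # measure of variability (sample SD)

client_score = 22
z = (client_score - m) / sd      # where one client falls relative to the group
print(m, md, mo, round(sd, 2), round(z, 2))
```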

Initial Assessment Process

“The initial phases of counseling require several types of assessment to evaluate overall function and plan interventions” (Hays, 2013, p138).  To do this, counselors need to understand the intake interview process, mental status exams, suicide risk assessments and DSM-5 diagnosis (Hays, 2013).  Understanding these components of an initial assessment is the final core skill reviewed in our textbook (Hays, 2013).

ASSIGNMENT QUESTION:  Utilize the case example in your textbook.  Then discuss information needed and assessments you would utilize to obtain it.

“Jeffery is a 16-year-old White male in 11th grade.  He lives with his father James (age 49), his mother Linda (age 48), an older brother Keith (age 18), and a younger brother Max (age 14).  His parents are both teachers in the same high school as Jeffery.  Jeffery’s parents made the appointment with you, a professional school counselor at the same school as Jeffery and his parents.  Jeffery is currently in danger of failing 11th grade, with his grades mostly D’s and F’s, except for a B in computer class.  His parents are frustrated because they do not know how to motivate him.  In addition, a teacher found a notebook with written song lyrics with references to guns and dying.  Asked for an explanation, Jeffery just shrugged and said he was bored in class.  Jeffrey previously saw a counselor during elementary school after he seemed to be having trouble fitting socially in class” (Hays, 2013, p37).

Information Needed –

The first essential piece of information is Jeffery’s level of readiness for change.  Since he already displays low motivation and is only here at his parents’ request, this is an initial concern (Hays, 2013).  Another consideration would be reviewing his past academic history to determine whether he has experienced drops in performance like this before.  Additionally, since he had trouble fitting in during elementary school, this might be something to look into.  Finally, I would also assess for depression as a potential cause of his low motivation.

Recommended Assessments

An unstructured motivational interview may be a good place to start to assess Jeffery’s level of readiness for change.   Biographical measures, defined in our textbook as client reports and/or historical records, can be another useful source of information (Hays, 2013).  These historical records can include Jeffery’s past academic history and any previous assessments (Hays, 2013).  Behavioral observations from teachers, his parents, and other staff may also provide a useful perspective.   Finally, if depression appears to be an underlying issue, it may be important to refer him to a private therapist for further evaluation.

“Charlotte, a 19-year-old female, presents to a college counseling center to seek help for increased anxiety she has had since attending college.  She states she is having difficulty understanding the course materials and is failing her courses.  Charlotte reports that she seldom finishes class assignments or tests, as she ‘runs out of time.’  She notes that she has no previous history of academic problems” (Hays, 2013, p. 53).

 Information Needed

This scenario provides a realistic picture of how clients can present their problems to a therapist.  In order to assist Charlotte, it would be important to better understand her anxiety and any associated life stressors.  Other information needed includes a review of her academic history and study habits, as well as any previous aptitude tests.

Recommended Assessments

A primary goal in Charlotte’s case is an assessment of her anxiety and clarification of its underlying issues.  It will also be important to understand how this relates to her academic performance.  Based on the case description, Charlotte does not appear to have a great deal of insight.  To address this, chapter one discusses several problem-solving steps that can be helpful (Hays, 2013).  Problem orientation is described as a process of self-exploration in which assessments can help the client recognize and accept underlying issues (Hays, 2013).  Problem identification can use screening inventories to further clarify issues (Hays, 2013).  These steps provide a good starting point for establishing rapport and a mutual understanding of the underlying problems.

Alongside these efforts, a review of biographical data can also be helpful.  This can start with an informal interview assessing for any recent life stressors underlying her anxiety.  Formal assessments, such as the Beck Anxiety Inventory or the Multidimensional Anxiety Questionnaire, should also be discussed (Hays, 2013).

To address the issue of her recent academic performance, a review of her academic background could be helpful, alongside the array of study habit inventories mentioned in our text (Hays, 2013).

“Nicolas is a 41-year-old African American male presenting to counseling at the request of his neighbor.  The neighbor reports to you that Nicholas has mentioned he has been depressed since he lost his job six months ago and ended a long-term relationship four months ago.  Nicholas reports that he has increased his drinking to help him escape and states he ‘doesn’t want to be here anymore…’” (Hays, 2013, p. 135).

Information Needed

As a mandatory reporter, the therapist must treat a thorough suicide risk assessment as critical.  It will be important to thoroughly review Nicolas’s psychiatric history and assess for a past history of attempts (Hays, 2013).  Since Nicolas acknowledges his suicidal ideation, a therapist should assess for any plan and intent (Hays, 2013).  Other information needed includes an assessment of substance use, his history of depression, and related environmental stressors.

Throughout this process, it would be important to contextualize findings in terms of Nicolas’s culture and backstory.  A person’s backstory includes the context within which recently observed symptoms are occurring.  In other words, how can observations and findings be contextualized in terms of recent life events and cultural background?

Recommended Assessments

Key assessments in Nicolas’s case include a thorough risk assessment and a mental status exam.  Additionally, the therapist should review his psychiatric history and assess for any recent life stressors.  Other tests to consider include an alcohol abuse screening and a depression inventory.

ASSESSMENT REPORT OUTLINE 

Name:

Date of Birth:

Age:

Date of Report:

Assessment conducted by:

Client Description:

Reason for referral: what question needs to be answered; relevant background information such as marital status, health information, developmental information

Background Information:

Evaluation Procedures:

Behavioral Observations:  describe behaviors during the test, response to test situation, physical description if relevant, anything unusual that happened during the testing, location/testing environment

Findings:  start by reporting each score for each test given and explain what the score means; include examples of questions to illustrate; integrate scores with background and observations; connect all of the information: as a whole, what does it mean?

Recommendations:  Brief summary of findings and answer the referral question.  Provide specific recommendations for services, treatment, etc.

References

Hays, D. G. (2013). Assessment in counseling: A guide to the use of psychological assessment procedures (5th ed.). Belmont, CA: Brooks/Cole, Cengage Learning.


What is WHODAS???

How to Administer it…

What is in the WHODAS?

How To score it?

WHODAS Scoring Tutorial from Dr. Anthony J Hill on Vimeo.
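
Before the official materials below, here is a heavily simplified sketch of the general idea behind the “simple” (sum-of-items) scoring approach for a 12-item form, using made-up ratings. The item rating range and the rescaling to a 0–100 metric shown here are illustrative assumptions, not the official WHODAS 2.0 algorithm, item weights, or norms; use the linked instructions or the tutorial above for actual scoring.

```python
# Simplified, generic sketch of sum scoring for a 12-item form with made-up
# ratings. NOT the official WHODAS 2.0 algorithm, item weights, or norms.
ITEM_MIN, ITEM_MAX = 1, 5          # assumed rating range: 1 (none) to 5 (extreme)
N_ITEMS = 12

ratings = [1, 2, 1, 1, 3, 2, 1, 1, 2, 1, 1, 2]   # hypothetical responses
assert len(ratings) == N_ITEMS

raw_total = sum(ratings)
# Illustrative rescaling of the raw total onto a 0-100 metric.
rescaled = 100 * (raw_total - N_ITEMS * ITEM_MIN) / (N_ITEMS * (ITEM_MAX - ITEM_MIN))
print(raw_total, round(rescaled, 1))
```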

Finally A Copy of The WHODAS 2 Assessment

Here is a copy of the self-administered 36-item WHODAS-2

Here are instructions for the self-administered 36-item WHODAS-2

Here is a copy of the self-administered 12-Item WHODAS-2

Here is how you score the self-administered 12-item WHODAS-2

Here is a copy of the 12-item interviewer-administered WHODAS-2

 
