NCE – Assessment Stuff (Validity)

  1. Validity – Whereas reliability is concerned with whether the instrument is a consistent measure, validity deals with the extent to which meaningful and appropriate inferences can be made from the instrument (96).
    1. Valid for what? The validity of a test varies depending on its purpose and the target population. It is not a characteristic of the assessment itself but of the meaning of the findings for a given sample… (96)
    2. Validity also asks whether test scores measure what they purport to measure. Two types of invalidity… (96)
      1. Construct underrepresentation – failing to include important components or dimensions of the construct being measured
      2. Construct-irrelevant variance – the scores reflect too much noise or material irrelevant to the construct…
  • Types of validity
    1. Face validity – does the assessment appear to measure what it is supposed to (97)
    2. Content validity – do the items on assessment represent the domain it is trying to measure? (97)
    3. Criterion-related validity – validity evidence obtained by comparing test scores with performance on an outside criterion measure (e.g., grades or a diagnosis)… (98)
      1. Types of criterion-related validity
        1. Concurrent validity – test scores and criterion performance scores are collected at the same time; correlation coefficients are calculated between the two sets of scores…
        2. Predictive validity – the client’s performance on the criterion measure is obtained some time after the test score… Does the test predict what it is supposed to predict? (98)
      2. How useful??
        1. Base rates: the proportion of people in the population who exhibit the particular characteristic or behavior being predicted (98)
        2. Incremental validity: extent to which a particular assessment instrument adds to the accuracy of predictions obtained from other tests…. (99)
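Concurrent validity evidence boils down to a correlation coefficient between test scores and criterion scores gathered at the same time. A minimal sketch, with made-up score values purely for illustration:

```python
# Minimal sketch: concurrent validity as a Pearson correlation between
# test scores and criterion scores (e.g., grades) collected at the same time.
# All score values below are hypothetical.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_scores = [85, 90, 70, 60, 95]   # hypothetical new-test scores
criterion   = [80, 88, 72, 65, 91]   # hypothetical criterion scores (grades)
print(round(pearson_r(test_scores, criterion), 2))  # close to 1.0 = strong evidence
```

Predictive validity uses the same calculation; the only difference is that the criterion scores are collected some time after the test scores.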
  • False negative/False positive – Incorrect predictions
  1. The accuracy of classifying individuals into different diagnostic categories or related groups based on a particular cutoff score can be expressed in terms of sensitivity and specificity.
    1. Sensitivity refers to the accuracy of a cutoff score in detecting those people who belong in a particular category. By definition, testing procedures that are sensitive produce few false negatives.
    2. Specificity indicates the accuracy of a cutoff score in excluding those without that condition; highly specific procedures produce few false positives.
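The two definitions above are simple proportions from a 2×2 classification table. A minimal sketch, using hypothetical counts:

```python
# Minimal sketch: sensitivity and specificity of a cutoff score,
# computed from the four cells of a 2x2 classification table.
def sensitivity(true_pos, false_neg):
    """Proportion of people WITH the condition whom the cutoff correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of people WITHOUT the condition whom the cutoff correctly excludes."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 40 true positives, 10 false negatives,
# 45 true negatives, 5 false positives.
print(sensitivity(40, 10))  # 0.8 -- misses 10 of 50 true cases (false negatives)
print(specificity(45, 5))   # 0.9 -- wrongly flags 5 of 50 non-cases (false positives)
```

Raising the cutoff typically trades sensitivity for specificity, which is why the choice of cutoff depends on the cost of each type of error.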
  2. Construct Validity – Another type of validity evidence asks: Are the test results related to variables they ought to be related to, and unrelated to variables they ought not to be? ((Evidence of the theoretical basis of a test’s measures…)) (100)
    1. CONVERGENT VALIDITY – On the one hand, test scores should be expected to show a substantial correlation with other tests and assessments that measure similar characteristics (convergent validity).
    2. DISCRIMINANT VALIDITY – On the other hand, test scores should not be substantially correlated with other tests from which they are supposed to differ; that is, they should show discriminant validity.
    3. FACTOR ANALYSIS – can determine whether the test items fall together into factors the way the theory suggests they should.
  3. Treatment Validity – Do the results obtained from the test make a difference in the treatment? (101)
  4. Postscript: Validity Scales –
    1. It is also important to determine the accuracy of a client’s responses: patterns of responses irrelevant to the actual intent of the assessment (response sets) may emerge. (103)
    2. Validity scales – tools used to determine three types of response distortions:
      1. a client pretending to have some problem or disorder (faking bad),
      2. a client responding in a socially desirable manner to appear more favorable or less symptomatic (faking good),
      3. a client responding randomly, either intentionally or unintentionally (Van Brunt, 2009b).
