NCE – Assessment Stuff (Reliability)

  1. Reliability – refers to how consistently a test measures and the extent to which it eliminates chance and other extraneous factors in its results. Synonyms for reliability include dependability, reproducibility, stability, and consistency. (89)
    1. Conceptual Understanding –
      1. Measurement Error – the positive or negative bias within an observed score. That is, the score a person receives on a test is made up of two elements: the person’s true score and an error component (89): observed score (X) = true score (T) + error score (e). (See the test-retest sketch after this outline for a small simulation of this model.)
        1. Individual error – “Includes test anxiety, motivation, interest in responding in a socially desirable manner, heterogeneity of the group tested, and test familiarity.” (90)
        2. Also Test error & Testing condition error…
      2. Correlation and Reliability –
        1. The correlation statistic assesses the degree to which two sets of measures are related. Each correlation coefficient contains two pieces of information (90):
          1. The sign of the correlation tells whether the two variables tend to rank individuals in the same order (+) or in reverse order (-).
          2. The magnitude of the correlation indicates the strength of the relationship.
        2. EXAMPLES OF CORRELATION STATISTICS:
          1. The Pearson product-moment correlation coefficient (r) is the most common and can range in value from +1.00, indicating a perfect positive relationship, through .00, indicating no relationship, to -1.00, indicating a perfect inverse relationship (90).
          2. Coefficient of Determination – the shared variance between two variables…calculated by squaring the correlation coefficient.
        3. Reliability coefficients usually run within the range of .80 to .95….
      3. Types of Reliability –
        1. Test-Retest Reliability – “measures consistency over time…Indicates the relationship between scores obtained by individuals within the same group on two administrations of the test.” (92)
        2. Alternate-Form Reliability – “compares the consistency of scores of individuals within the same group on two alternate but equivalent forms of the same test…” (93)
        3. Split-Half Reliability – “dividing the test into two comparable halves and comparing the results of the two scores…”
          1. Splitting the test in half shortens it, which tends to decrease the reliability estimate…
          2. The Spearman-Brown Formula can be used to estimate how reliable the full-length test would be (see the Spearman-Brown sketch after this outline).
        4. Interitem Consistency – a measure of internal consistency that assesses the extent to which the items on a test are related to each other and to the total score. This measure of test score reliability provides an estimate of the average intercorrelations between all the items on a test. (94)
          1. Kuder-Richardson (KR) Formula – used with dichotomously scored items (e.g., right/wrong)
          2. Cronbach’s Alpha – used with items that have more than two possible responses (see the alpha sketch after this outline)
        5. Interrater Reliability – refers to the degree of agreement between two or more independent judges. Interrater agreement is calculated by dividing the number of agreements that an event occurred by the number of possible agreements (95). (See the agreement sketch after this outline.)
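
Test-retest sketch – a minimal Python illustration (not from the source text; the numbers are invented and numpy is assumed available) of the true score model and of reliability expressed as a correlation: two administrations of the same test are simulated as true score plus random error, the Pearson r between them serves as the test-retest reliability estimate, and squaring r gives the coefficient of determination (shared variance).

```python
import numpy as np

# Hypothetical illustration of observed score = true score + error (X = T + e).
rng = np.random.default_rng(0)
n_people = 200
true_scores = rng.normal(loc=50, scale=10, size=n_people)    # T

# Two administrations of the same test: observed = true + random error (e).
admin_1 = true_scores + rng.normal(scale=4, size=n_people)   # X at time 1
admin_2 = true_scores + rng.normal(scale=4, size=n_people)   # X at time 2

# Pearson product-moment correlation (r) between the two administrations
# is the test-retest reliability estimate.
r = np.corrcoef(admin_1, admin_2)[0, 1]

# Coefficient of determination: shared variance = r squared.
r_squared = r ** 2

print(f"test-retest reliability (r): {r:.2f}")        # roughly .80-.90 with these numbers
print(f"coefficient of determination: {r_squared:.2f}")
```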
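
Spearman-Brown sketch – the step-up formula referenced above: given the correlation between two halves of a test, it projects the reliability of the full-length test. The half-test correlation below is a made-up number.

```python
def spearman_brown(r_half: float, length_factor: float = 2.0) -> float:
    """Project reliability when a test's length is multiplied by length_factor.

    For split-half reliability the factor is 2: the half-test correlation
    is stepped up to estimate the full-length test's reliability.
    """
    return (length_factor * r_half) / (1 + (length_factor - 1) * r_half)

# Hypothetical correlation between the two halves of a test.
r_half = 0.70
print(f"estimated full-test reliability: {spearman_brown(r_half):.2f}")  # about .82
```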
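
Cronbach’s alpha sketch – interitem consistency computed straight from the usual definition, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The small item-response matrix is invented purely for illustration.

```python
import numpy as np

# Rows = examinees, columns = test items (hypothetical 1-5 ratings).
responses = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # high, since these items track each other
```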
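
Agreement sketch – a simple percent-agreement calculation between two judges. Here an “agreement” is any interval on which both judges give the same rating, which is one common way of operationalizing the agreements-divided-by-possible-agreements ratio quoted above; the ratings themselves are made up.

```python
# Hypothetical ratings by two independent judges of whether a target
# behavior occurred (1) or not (0) during each of ten observation intervals.
judge_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
judge_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Agreements: intervals where both judges gave the same rating.
agreements = sum(a == b for a, b in zip(judge_a, judge_b))
possible_agreements = len(judge_a)

interrater_agreement = agreements / possible_agreements
print(f"interrater agreement: {interrater_agreement:.2f}")  # 8 / 10 = 0.80
```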
