Problem Identification: Clarify the nature of a problem or issue
Screening inventories or problem checklists assess the type and extent of client concerns
Personal diaries and logs identify situations in which problems occur. Personality inventories can help counselors understand personality dynamics in relationships and work.
Generation of Alternatives: Suggest alternative solutions for a client's problems, and help the client view the problem differently…
An assessment interview can be used to determine what techniques have worked in the past to solve a problem…
Checklists or inventories can also yield data that can be used to generate alternatives….
Decision Making: Determine appropriate treatment for the client…
Expectancy tables can show success rate of people with different types of scores or characteristics…
Balance sheets or decision making grids enable clients to compare the desirability and feasibility of various alternatives
Verification: evaluate the effectiveness of a particular solution:
Readministration of tests
The definition of an assessment procedure
PURPOSE OF ASSESSMENT – (page 6) – “serve diagnostic purposes, help evaluate client progress and are useful in promoting awareness”
Classification (program placement, screening and certification)
Diagnosis and Treatment Planning
Research to guide theory and technique development
DEFINITION OF ASSESSMENT:
“Assessment is an umbrella term for the evaluation methods counselors use to better understand characteristics of people, places, and things….”(Morrison, page 4)
“Assessment is any systematic method of obtaining information from tests and other resources used to draw inferences about characteristics of people, objects, or programs.” (Morrison, p. 4)
EXAMPLES OF ASSESSMENT PROCEDURES: “standardized tests, rating scales, observations, interviews, classification techniques, records.” (Morrison, p. 4)
What makes a test standardized?
Non-standardized Assessment Programs –
“Use self-ratings to help clients organize their thinking about themselves and various opportunities and include computer-based programs and career education workbooks.”
“include rating scales, projective techniques, behavioral observations and biographical measures…” (p. 26); these are generally less reliable and valid…
Standardized tests must meet certain standards during the testing process. These standards include:
Uniform procedures for test administration
Use of representative norm groups for test interpretation.
Most standardized tests have clear evidence of reliability/validity
Standardized tests can include the following:
“A test is said to be standardized when it has clearly specified procedures for administration and scoring, including normative data…” (page 117)
test developer defines a target population for test use
This target pop has an observable characteristic that varies and needs to be measured
The developer then administers the test to a sample drawn from that population
The test is administered in accordance with specific instructions
The developer then provides descriptive statistics against which to compare results, including measures of central tendency and variability (e.g., the standard deviation)…
What is the difference between qualitative and quantitative assessments? What kind of information does each type yield?
Qualitative Assessments – Qualitative procedures provide a verbal description of a person’s behavior or situation and place the results in a category. More open-ended and adaptable in counseling…Collects data that does not lend itself to quantitative methods but rather to interpretive criteria (EXAMPLES)
Nominal scales – does not possess magnitude, equal intervals or an absolute zero. Nominal scales provide descriptive criterion… Utilized to classify and name…
Ordinal scales: Refer to the rank ordering of nominal categories. Likert scale responses are an example. You can't average or compute a mean for these, since ordinal scales have no absolute zero and no equal intervals…
Quantitative Assessments – Quantitative procedures yield a specific score on a continuous scale. They provide greater reliability and validity. They collect data that can be analyzed using quantitative methods, i.e., numbers and statistical analysis
Interval measures – possess magnitude and equal intervals. You can add and subtract but not divide or multiply, since there is no absolute zero.
Ratio scales have magnitude, equal intervals, and an absolute zero. Can utilize all statistical techniques in this…
Measures of Central Tendency: the average score for a distribution of scores
MEAN = the average, it is equal to the sum of the scores divided by the number of individuals in the group
MEDIAN = the middle score: one half (50%) of scores fall above it and one half below
MODE = the score that appears most frequently in a set of scores.
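The three measures above can be computed directly with Python's standard statistics module; the score list below is purely hypothetical:

```python
import statistics

# Hypothetical distribution of test scores (illustration only)
scores = [70, 75, 80, 80, 85, 90, 95]

mean = statistics.mean(scores)      # sum of scores / number of scores
median = statistics.median(scores)  # middle score: half fall above, half below
mode = statistics.mode(scores)      # the score that appears most frequently

print(mean, median, mode)
```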
Know the normal curve and standard deviation:
MEASURES OF VARIABILITY: indicate the extent of individual differences around a measure of central tendency.
RANGE – indicates distance between highest and lowest
INTERQUARTILE RANGE: range around the median.
STANDARD DEVIATION: most frequent measure of variability and represents a standardized number of units from a measure of central tendency.
The larger the value, the greater the dispersion of scores and variability.
Popular because basis for standard scores and helps represent scores accurately.
Calculated by dividing the sum of squared deviations from the mean by the sample size minus one and taking the square root of the result…
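That calculation can be sketched in a few lines of Python (the scores are hypothetical; the result is checked against the standard library's statistics.stdev, which uses the same n − 1 formula):

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores
mean = sum(scores) / len(scores)

# Sum of squared deviations of each score from the mean
sum_sq = sum((x - mean) ** 2 for x in scores)

# Divide by the sample size minus one, then take the square root
sd = math.sqrt(sum_sq / (len(scores) - 1))

# The standard library's sample standard deviation uses the same formula
assert math.isclose(sd, statistics.stdev(scores))
```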
NORMAL CURVE: A well-distributed set of scores clusters symmetrically around the measures of central tendency, yielding a bell curve. (p. 110)
The value of the standard deviation divides the raw-score range into approximately six parts, with 3 above and 3 below the mean.
Scores more than 3 SDs above or below the mean are rare.
34% of the sample falls between the mean and 1 SD above it.
34% also falls between the mean and 1 SD below it.
An additional 14% falls between 1 and 2 SDs above the mean, and another 14% between 1 and 2 SDs below.
NO SKEWNESS – tilting to one side
NOT TOO MUCH KURTOSIS – neither too narrow (peaked) nor too broad (flat) (see SDs above)
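The 34%/14% figures and the rarity of scores beyond 3 SDs can be verified from the normal curve itself; here is a minimal Python sketch using the standard normal cumulative distribution built from math.erf:

```python
import math

def phi(z):
    """Cumulative proportion of a standard normal distribution below z SDs."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

within_1sd = phi(1) - phi(-1)        # mean +/- 1 SD: about 68% (34% each side)
between_1_and_2 = phi(2) - phi(1)    # 1 to 2 SDs above the mean: about 14%
beyond_3sd = 1 - (phi(3) - phi(-3))  # outside +/- 3 SDs: rare (under 0.3%)

print(round(within_1sd, 3), round(between_1_and_2, 3), round(beyond_3sd, 4))
# 0.683 0.136 0.0027
```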
What are twelve broad factors of test user competencies? (page 44)
Avoid errors in scoring and recording
Refrain from labeling people with personally derogatory terms like dishonest on the basis of a test score that lacks perfect validity
Keep scoring keys and test materials secure
Seeing that every examinee follows directions
Using settings for testing that allow optimal performance
Refraining from coaching or training individual’s/groups on test items.
Being willing to give interpretation and guidance to test takers in counseling situations
Not making photocopies of copyrighted materials
Refraining from using homemade answer sheets that do not align properly with scoring keys.
Establishing rapport with examinees to obtain accurate scores
Refraining from answering questions from test takers in greater detail than the test manual permits
Not assuming that a norm for one job applies to a different job.
Explain what is meant by “grade equivalent”
Utilized in educational assessments to compare a child's scores against criterion-referenced scores that indicate how a child measures up against an expected level of performance according to either age-related or grade-related criteria…
A grade equivalent (GE) score is described as both a growth score and a status score. As is common with scores that serve both major categories, GE scores do not do a very good job in either. What a GE score does do is indicate where a student's test score falls along a continuum. The GE is expressed as a decimal number (e.g., 4.8). The digit(s) to the left of the decimal represent the grade; the digit(s) to the right represent the month, assuming 10 months per school year. The GE of a given raw score on any test indicates the grade level at which the typical student earns that raw score. For example, if a seventh-grade student earned a GE of 8.4, her raw score is like the raw score the typical student would earn on the same test at the end of the fourth month of the eighth grade.
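As a small illustration of the grade/month convention described above, here is a hypothetical Python helper (the function name and examples are invented for this sketch, not from the source text):

```python
def grade_equivalent(ge):
    """Split a GE score such as 8.4 into (grade, month).

    Assumes the 10-month school year described above; the function
    name is hypothetical, used only for illustration.
    """
    grade = int(ge)
    month = round((ge - grade) * 10)
    return grade, month

print(grade_equivalent(8.4))  # (8, 4): fourth month of the eighth grade
```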
Types of Reliability
Definition of Reliability – how consistently a test measures and the extent to which it eliminates the chance of error. (Dependability/Reproducibility)
TRUE SCORE + ERROR = OBSERVED SCORE
ERRORS CAN INCLUDE those: (a) related to the individual; (b) related to the test itself; (c) related to the test conditions
Test- Retest Reliability – Measures the consistency over time. The correlation coefficient in this case indicates the relationship between scores obtained by individuals within the same group in two administrations of the test (92)
Alternate/Parallel Form – comparing consistency scores of individuals within the same group on two alternate but equivalent forms of the same test (92)
Internal Consistency Measures of Reliability:
Split-half Reliability – a popular form of establishing reliability because it can be obtained from a single administration by dividing the test into comparable halves and comparing the resulting scores for each individual (94)
test cut in half
compare results from each half
Interitem Consistency = is a measure of internal consistency that assesses the extent to which the items on a test are related to each other and to the final score.
Interrater Reliability – refers to the degree of agreement between two or more independent raters scoring the same test.
What does validity mean and what are the different types of validity?
DEFINITION OF VALIDITY: Whereas reliability is concerned with whether the instrument is a consistent measure, validity deals with the extent to which meaningful and appropriate inferences can be made from the instrument. (Page 96) Does the test measure what it intends to measure?
Face validity – not really evidence of validity, is determined if the assessment ‘looks like’ it is measuring what it is supposed to measure.
Content validity – representativeness of items from a population of items…do sample items represent/reflect all major components of the domain they are trying to measure.
Criterion-Related – degree of prediction of a client's performance on a criterion assessed at the same time (concurrent) or sometime in the future (predictive)…validity evidence obtained by comparing test scores with performance on a criterion measure (job satisfaction, grades, diagnosis, etc., as the comparison)
Concurrent validity – test score and criterion measure are obtained at the same time
Predictive validity – test score predicts results on a criterion measure given later…
Construct Validity – Are the test results related to the variables they ought to be related to, and unrelated to the variables they ought not be? The degree to which the test relates to its theoretical construct
Discriminant Validity – test scores do not correlate with tests that measure something different.
Convergent Validity – correlation with tests and assessments that measure the same characteristics
Treatment Validity – Do results from the test make a difference in terms of treatment?
Interpret the meaning of a correlation.
Correlation and Reliability:
Correlation statistic assesses the degree to which two measures are related. Each correlation coefficient contains two bits of information:
The sign tells whether the two variables tend to rank individuals in the same (direct relationship) or reverse order (inverse relationship)
The value indicates the strength of the relationship….
PEARSON PRODUCT-MOMENT (r) is the most common and can range from +1.00, indicating a perfect positive relationship; through .00, no relationship; to -1.00, a perfect inverse relationship
Standard reliability coefficients = usually run within a range of .80 to .95 but what is considered to be acceptable varies depending on the test circumstances and type of reliability…
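A Pearson product-moment r can be computed from scratch to see both bits of information (sign and magnitude); the score lists below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same group on two test administrations
first = [10, 12, 15, 18, 20]
second = [11, 13, 14, 19, 21]
r = pearson_r(first, second)
print(round(r, 2))  # positive sign (direct relationship), strong magnitude
```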
Four models of helping and coping.
COMPENSATORY MODEL – people are not responsible for problems but are responsible for solutions = NO BLAME
MEDICAL MODEL – People not responsible for problems or solutions = VICTIM OF DISEASE
ENLIGHTENMENT MODEL – People are not responsible for solutions but are responsible for problems = UNDERSTANDING
MORAL MODEL= people responsible for problems and solutions = ATTRIBUTION OF RESPONSIBILITY
What are some instruments frequently used by community mental health counselors?
WHAT IS A COMMUNITY MENTAL HEALTH COUNSELOR? ((say you learned about this a bit through your experience in the practicum))
Community counseling is a type of counseling that is used to help communities that are suffering from psychological or social discord, for one reason or another.
Professionals in this field will often try to treat individuals in the community for whatever psychological problem that ails them.
They will also attempt to prevent future problems as well….Community counselors attempt to solve widespread community problems that are social or psychological in nature. In order to do this, they will often work with individuals as well as a community as a whole.
There may be a number of different problems that can plague individuals in dysfunctional communities. Many of these problems are often related. Community counselors will often speak with several individuals, offering guidance, therapy, and counseling. While trying to help these individuals overcome their challenges in life, though, a community counselor will also attempt to get to the root of the problem.
WHAT ASSESSMENTS ARE UTILIZED?
First, counselors need to figure out what is wrong with the people who come to visit so they can determine what services they need.
This would require an intake interview
Mental status exam
Inventories for assessing mental disorders:
Psychiatric diagnostic screening questionnaire
Patient health questionnaire (129)
Consult DSM manual
If they have a mental health or substance abuse problem they can utilize assessments to determine this. Examples can include the following:
Substance abuse assessments
Anxiety and fear measures
Surveys are also needed occasionally to determine how the community as a whole is doing. Counselors can administer these surveys and compile data accordingly.
What ten topics should most intake interviews cover?
General appearance and behavior
History of current problem and related problems
Present level of functioning in work, relationships, and leisure activities
Use of alcohol or other drugs, including medications
Family history of mental illness
History of physical, sexual, or emotional abuse
Risk factors including the urge to harm self/others
Attitude of client towards the counseling process.
What should be included in a problem checklist in counseling?
INTAKE FORMS – The intake form should be kept relatively short so that it does not become an imposition in counseling. As counseling progresses, the form can be supplemented with additional questionnaires designed for particular issues, such as career planning, study skills, or relationships.
SCREENING INVENTORIES: “counselors often utilize brief, self-report screening instruments to obtain a preliminary overview of a client’s concerns.” (123)
INVENTORY OF COMMON PROBLEMS = Assesses the nature and intensity of concerns (PAGE 126)
SYMPTOM CHECKLIST-90-REVISED – Describes the client's symptoms and their severity… (compulsive, sensitivity, depressed, anxiety, hostile, phobic, paranoid, psychotic)
Suicide risk assessment.
How do clients differ when they enter counseling? Differences in the degree of openness and readiness for change
Precontemplation – individual not aware of problems and has no intention to change behavior in the foreseeable future
Contemplation – individuals are aware of their problems but have not made a serious commitment to do anything
Preparation stage – individuals have begun to make small changes in their problematic behaviors with the intention of taking further action within the next month
Action Stage – successfully changed their behavior for periods of time
Maintenance Stage – goal is to maintain changes
What is the difference between Aptitude & Achievement? Give an example of each type.
Assessment of aptitude is generally thought of as assessing the ability to acquire a specific type of skill or knowledge; aptitude tests are typically used for prediction purposes. Academic and scholastic aptitude is related to educational program evaluations and admission (SAT / ACT / GRE)
Assessment of achievement attempts to measure what learning has taken place under a relatively standardized set of conditions or as a result of a controlled set of experiences. Designed to measure what has already been learned. Whereas aptitude relates to learning ability, achievement relates to what is known. (TerraNova / Iowa Tests of Basic Skills)
What steps have been taken to make sure only competent users administer tests? What are the qualifications for purchasing tests? Competence in Testing: page 49
Test publishers set guidelines for the level of competency that determines who is able to utilize a test…
A LEVEL – no qualifications required
B LEVEL – master's-level training
C LEVEL – PhD in psychology, education, or a related field
Q LEVEL – other specified qualifications
Professional associations also create their own ethical codes.
States have their own guidelines
Fair Access Coalition on Testing (FACT)
What are the guidelines for test interpretation –
Tests are not used by others to make decisions for or against a client.
Are to maintain confidentiality.
Test users are to ensure that information is not misused by others.
Is the person receiving the information qualified to understand and interpret it?
Results should be interpreted in a way the person understands
Clients have right to know and understand results.
What is the role of career assessment and what are the types of career assessment? What are the types and role of educational assessments? See page 202…
Role of career assessment: to help clients explore both the process and content of career development. Uses of career assessments:
Prediction – future career performance
Discrimination – evaluate ability and interests
Monitor – assess progress
Evaluate – measure goals and how well met
Types of career assessments:
Career readiness assessments (maturity and adaptability)
Assessment of an individual’s values, interests and aptitudes