University of the West of England
MODULE SPECIFICATION
Code: USPJEA-20-3 Title: PSYCHOMETRICS AND PSYCHOLOGICAL TEST CONSTRUCTION Version: 3
Level: 3 UWE credit rating: 20 ECTS credit rating: 10
Module type: STANDARD
Owning Faculty: Applied Sciences Field: Psychology
Valid from: September 2003 Discontinued from:
Pre-requisites: USPJCN-20-1 Brain Mind and Behaviour
USPJDC-20-2 Research Design and Methods 2
Co-requisites: NONE
Excluded combinations: NONE
Learning outcomes:
By the end of the module students will have:
Examined classical and contemporary perspectives on test construction
Explored the role of relevant multivariate techniques (in particular, factor-analytic methods) in the design of tests, and other statistical approaches used in scale construction and test evaluation.
Learned how to critically evaluate the technical adequacy of psychometric measures and their suitability for particular applications
Gained familiarity with examples of significant test types (such as measures of ability, attainment, personality, attitudes and values) and of methods used for alternative applications (eg selection, guidance, educational, developmental and clinical assessment)
Critically appraised models of human characteristics informing the design of specific tests
Developed basic skills in test administration, scoring, norming and feedback as a means of understanding the ‘real-world’ issues in test choice and use, and of gaining awareness of how these issues necessarily influence test design and construction.
Considered contemporary debates on ethical usage of tests
Considered contemporary debates on cultural and equal opportunities issues relating to test design and use.
Syllabus Content:
Test standards and qualifications
Principles of standardised test administration
The experience of being tested - factors influencing test performance.
Norm-referenced, criterion-referenced, domain-referenced and self-referenced assessments.
Standard scoring systems. Standard Error of Measurement and Standard Error of Difference.
Requirements of norms. Sampling, representativeness. Standard Error of the Mean.
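As a brief illustrative aside (not part of the formal syllabus), the two standard errors named above can be sketched in a few lines of Python; the function names and all numeric values are hypothetical:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the expected spread of observed
    scores around a person's true score, given reliability r."""
    return sd * math.sqrt(1 - reliability)

def standard_error_of_mean(sd, n):
    """SE of the mean = SD / sqrt(n): the precision of a norm
    group's mean as an estimate of the population mean."""
    return sd / math.sqrt(n)

# Hypothetical values: an IQ-style scale (SD 15), reliability .91,
# normed on a sample of 900.
print(standard_error_of_measurement(15, 0.91))  # ≈ 4.5
print(standard_error_of_mean(15, 900))          # 0.5
```

The same two formulas underpin confidence bands around individual scores (SEM) and around norm-table means (SE of the mean).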
Concepts underpinning effective test evaluation.
Internal consistency, stability, inter-rater reliability. Restriction of range and attenuation effects.
Spearman-Brown formula; split-half, Cronbach’s α and Kuder-Richardson formulae.
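As an illustrative sketch (not part of the formal specification), the Spearman-Brown prophecy formula and Cronbach’s α can be computed directly from their textbook definitions; the data below are hypothetical:

```python
def spearman_brown(r, k):
    """Predicted reliability when test length is multiplied by k
    (k = 2 gives the classic split-half correction)."""
    return k * r / (1 + (k - 1) * r)

def cronbach_alpha(scores):
    """Cronbach's alpha from rows of per-person item scores."""
    n_items = len(scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance([row[i] for row in scores])
                    for i in range(n_items))
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

# Hypothetical data: a split-half r of .5 corrected to full length,
# and alpha for two perfectly covarying dichotomous items.
print(spearman_brown(0.5, 2))                            # ≈ 0.667
print(cronbach_alpha([[1, 1], [0, 0], [1, 1], [0, 0]]))  # 1.0
```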
Generalisability theory. Item Response Theory.
Concurrent, Predictive, Content and Construct Validity; concepts of Convergent and Divergent Validity.
Issues in Evaluating Validity; Selection Ratio and Utility Analysis.
Introduction to Meta-Analysis and Validity Generalisation
Appraisal of evidence concerning the effectiveness of alternative assessment methods for different applications. Occupational, Educational, Developmental, Clinical and other areas of test application.
Scale Construction and Design
Stages in designing questionnaires. Alternative design principles and methods. Advantages and disadvantages of alternative methods. Scale and item analyses. Criterion-keying. Introduction to Factor Analysis. Criticisms of factor-analytic approaches to measurement. Ipsative Scales.
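As an illustrative aside (not part of the formal specification), the corrected item-total correlation — a standard item-analysis statistic — can be sketched as follows; the helper names and data are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(scores, item):
    """Correlate one item with the total of the remaining items
    ('corrected' so the item is not correlated with itself)."""
    item_scores = [row[item] for row in scores]
    rest_totals = [sum(row) - row[item] for row in scores]
    return pearson(item_scores, rest_totals)

# Hypothetical dichotomous item data (rows = respondents).
rows = [[1, 1, 1], [0, 0, 0], [1, 1, 0], [0, 0, 1]]
print(round(corrected_item_total(rows, 0), 3))  # 0.707
```

Items with low corrected item-total correlations are candidates for removal at the item-analysis stage of scale construction.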
Ethical and Equal Opportunities Issues in Test Use
Test Interpretation and Feedback.
Interpretation of ability and personality constructs. Influence of theoretical models of human characteristics on test design. Normative, differential and ipsative interpretation.
Corroboration and cross-validation of results. Directive, non-directive, problem-solving and ‘expert’ approaches to feedback. Test choice and feedback as hypothesis testing. Client characteristics and feedback issues. Communication skills and test use. Written feedback. Implications of use of computer-generated test reports.
Teaching and learning methods:
The learning strategies adopted are chosen to ensure that students’ understanding of theoretical concepts is grounded in direct experience of test design and significant aspects of practice. Learning strategies are varied and informed by Kolb’s model of learning, which emphasises that effective learning must incorporate opportunities for direct experience, reflection, theory building and experimentation. Strategies include:
practice in core skills, supported by guideline reading and student-managed preparation for practical work
critical evaluation exercises using example questionnaires, short-versions of manuals etc.
critical reading of papers and discussion in student-led seminars
student-managed project work on test design, including use of computer technology as appropriate
review and discussion of other pertinent material such as EOC and BPS guidelines, test publishers’ manuals, brochures and guides, qualification framework documents, test standards etc.
contributions from appropriate specialist staff and outside speakers
Reading Strategy
All students will be encouraged to make full use of the print and electronic resources available to them through membership of the University. These include a range of electronic journals and a wide variety of resources available through web sites and information gateways. The University Library’s web pages provide access to subject relevant resources and services, and to the library catalogue. Many resources can be accessed remotely. Students will be presented with opportunities within the curriculum to develop their information retrieval and evaluation skills in order to identify such resources effectively.
This guidance will be available either in the module handbook, via the module information on UWEonline or through any other vehicle deemed appropriate by the module/programme leaders.
Jackson C (1996) Understanding Psychological Testing. Leicester: BPS. (pb)
Kline P (1991) Intelligence: The Psychometric View. London: Routledge. (pb)
Kline P (1993) Personality: The Psychometric View. London: Routledge. (pb)
Kline P (1993) The Handbook of Psychological Test Construction. London: Routledge.
Cronbach L J (1990) Essentials of Psychological Testing (5th edition). New York: Harper and Row.
Bartram D (Ed) (1997) Review of Ability and Aptitude Tests (Level A).
Cook M (1993) Personnel Selection and Productivity (2nd edition). Chichester: Wiley.
Kline P (1994) An Easy Guide to Factor Analysis. London: Routledge. (pb)
Hambleton R et al (1991) Fundamentals of Item Response Theory. Sage.
Loewenthal K (1996) An Introduction to Psychological Tests and Scales. Routledge.
Spector E (1992) Summated Rating Scale Construction: An Introduction. Sage Publications.
Carroll J B (1982) The measurement of intelligence. In Sternberg R J (Ed) Handbook of Human Intelligence (pp 29-120). New York: Cambridge University Press.
Carroll J B & Horn J L (1981) On the scientific basis of ability testing. American Psychologist, 36, 1012-1020.
Horn J (1986) Intellectual ability concepts. In Sternberg R J (Ed) Advances in the Psychology of Human Intelligence, Volume 3. Hillsdale, NJ: Erlbaum.
Sternberg R J (1988) Mental self-government: A theory of intellectual styles and their development. Human Development, 31, 197-224.
Sternberg R J (1979) The nature of mental abilities. American Psychologist, 34, 214-230.
Wolman B B (Ed) (1985) Handbook of Intelligence: Theories, Measurements and Applications. New York: Wiley.
Matthews G (1997) The Big Five as a framework for personality assessment. In Anderson N & Herriot P (Eds) International Handbook of Personality Assessment. Wiley.
Bartram D (1996) The relationship between ipsatised and normative measures of personality. Journal of Occupational Psychology, 69, 25-39.
Saville P & Wilson E (1991) The reliability and validity of normative and ipsative approaches in the measurement of personality. Journal of Occupational Psychology, 64, 219-238.
Murphy K R (1997) Meta-analysis and validity generalisation. In Anderson N R & Herriot P (Eds) International Handbook of Selection and Assessment. Chichester: Wiley.
Barrick M R & Mount M K (1991) The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.
Blinkhorn S & Johnson C (1990) The insignificance of personality testing. Nature, 348, 671-672.
Tett R & Jackson D (1991) Personality measures as predictors of job performance. Personnel Psychology, 44, 703-742.
Reilly R R & Chao C T (1982) Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1-62.
Schmidt F et al (1979) Impact of valid selection procedures on work-force productivity. Journal of Applied Psychology, 64, 609-626.
Feltham R et al (1994) Developing fair tests. The Psychologist, January 1994, 30-31.
Kellett D et al (1994) Fair testing: The case of British Rail. The Psychologist, January 1994, 26-29.
Assessment
Weighting between components A and B (standard modules only): A: 75% B: 25%
ATTEMPT 1
First Assessment Opportunity
Component A
Description of each element | Element weighting
TE | Timed Essay | 1
Component B
Description of each element | Element weighting
PROJ | Scale Design Project | 2
CW1 | Test Evaluation Exercise | 1
Second Assessment Opportunity (further attendance at taught classes) NO
Component A
Description of each element | Element weighting
TE | Timed Essay | 1
Component B
Description of each element | Element weighting
PROJ | Scale Design Project | 2
CW1 | Test Evaluation Exercise | 1
SECOND (OR SUBSEQUENT) ATTEMPT Attendance at taught classes. YES
Specification confirmed by …………………………………………………Date ……………………………
(Associate Dean/Programme Director)