KNOWLEDGE OF TEST CONSTRUCTION PROCEDURES AMONG LECTURERS IN IGNATIUS AJURU UNIVERSITY OF EDUCATION, PORT HARCOURT, NIGERIA
Thesis Abstract
<p>
<b>ABSTRACT</b> </p><p>The study was conducted to assess the knowledge of lecturers on test construction
procedures. The study adopted an analytical descriptive survey design. One research
question and four hypotheses guided the study. It involved a sample of 200 lecturers
drawn from 440 teaching members of staff of the university. A self-structured
instrument was used for data collection. The research question was answered using
mean scores, while the independent t-test and ANOVA were used to test the
hypotheses at the 0.05 level of significance. Results revealed that the lecturers had
high knowledge of test construction procedures. It was also found that lecturers’ knowledge
of test construction procedures did not differ significantly based on gender, years of
experience, professional training and educational qualification.
Keywords: Test Construction Procedures, Item Analysis, Test Blueprint,
Knowledge
<br></p>
Thesis Overview
<p><b>1.0 INTRODUCTION </b></p><p>The business of teaching and learning cannot be complete without periodic examination of
the learners to determine whether set objectives are being achieved. In the university, each lecturer
is expected to quantify how much the students have achieved from a course of instruction.
This is done through tests administered by lecturers who may not have adequate
knowledge of test construction procedures; hence one often encounters question papers
that lack the basic psychometric properties (i.e. validity, reliability, and usability). The most
common tests used by lecturers are teacher-made achievement tests, as against standardized
tests, whose psychometric properties have already been established. For achievement tests, the most
important validities to establish are face and content validity. Face validity is concerned
with the surface quality of the test: the level of English used, whether the items are ambiguous, and,
for multiple-choice tests, whether the items are properly keyed, whether the keys fall into a
pattern, and whether there are overlapping items. It is
equally important to establish the content validity of an achievement test, as it is crucial that the
test covers the content area to which the learners have been exposed; the reliability and usability of the
test are also established, since achievement is a latent trait. All of these are incorporated in the test
construction procedures that each lecturer should know and follow in order to set
good tests.
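For a dichotomously scored achievement test (items marked 1 for correct, 0 for wrong), reliability is commonly estimated with the Kuder-Richardson 20 (KR-20) formula. The sketch below is a generic textbook illustration in Python, with made-up scores; it is not the instrument or analysis used in this study.

```python
# Sketch: KR-20 reliability for a dichotomously scored test.
# A standard textbook formula; the score matrix below is illustrative only.

def kr20(score_matrix):
    """KR-20 coefficient. score_matrix: rows = examinees, columns = items (1/0)."""
    n = len(score_matrix)           # number of examinees
    k = len(score_matrix[0])        # number of items
    totals = [sum(row) for row in score_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance of totals
    # Sum of p*q over items, where p = proportion correct on the item.
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in score_matrix) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# Five examinees, four items (illustrative Guttman-like pattern).
matrix = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
r = kr20(matrix)  # 0.8 for this example
```

Values of roughly 0.7 and above are conventionally read as acceptable reliability for teacher-made achievement tests, though the threshold depends on the stakes of the decision the scores inform.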
As the power to assess students rests with the lecturers, one would expect adequate
measures to help lecturers acquire skills in test construction, but this is not the case.
Confirming this, Izard (2005) observed that most teacher-made tests assess mainly the lower-level
processes of Bloom’s taxonomy of educational objectives for the cognitive
domain. It therefore becomes pertinent to guide lecturers on test construction procedures, which
involve three major steps. (a) Test planning: the type of test to be constructed is planned,
encompassing the test format, the number of items to construct,
the objectives to be assessed, and the drawing of the test blueprint. (b) Item writing:
items are written out bearing in mind ways of improving essay or objective test items, after
which the test is given to other content specialists to establish face and/or content validity.
(c) Trial testing and item analysis: the test is administered to a group equivalent to the
intended examinees, after which item analysis is carried out by calculating the difficulty and
discrimination indices of the items. Items are then selected on the basis of the levels of these
indices appropriate for norm- and criterion-referenced tests.
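The item-analysis step above can be sketched numerically. Assuming dichotomous scoring, the difficulty index of an item is the proportion of examinees who answered it correctly, and the discrimination index is the difference between the proportions correct in the upper and lower scoring groups (the upper-lower 27% method is one common textbook choice). The data below are illustrative, not from the study.

```python
# Sketch of item analysis: difficulty (p) and discrimination (D) indices.
# Assumes 1/0 scoring; the group fraction of 0.27 is a common textbook choice.

def difficulty_index(item_scores):
    """Proportion of examinees answering the item correctly (0.0 to 1.0)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """D = p_upper - p_lower, using the top/bottom fraction by total test score."""
    n = max(1, round(len(total_scores) * fraction))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = order[:n], order[-n:]
    p_lower = sum(item_scores[i] for i in lower) / n
    p_upper = sum(item_scores[i] for i in upper) / n
    return p_upper - p_lower

# Ten examinees: responses to one item, and their total test scores (illustrative).
item = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1]
totals = [48, 45, 44, 20, 41, 22, 39, 25, 18, 46]

p = difficulty_index(item)               # 0.6 -> moderate difficulty
d = discrimination_index(item, totals)   # 1.0 -> high scorers got it right, low scorers did not
```

For norm-referenced tests, items with moderate difficulty (roughly 0.3 to 0.7) and positive discrimination are typically retained; criterion-referenced tests tolerate a wider range of difficulty because the goal is mastery, not ranking.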
The importance of teachers setting appropriate tests for their students is inarguable,
considering the weight given to the test scores teachers award. Researchers have stressed that
teachers’ competence greatly affects the quality of the tests they construct (Chan, 2009;
Darling-Hammond, 2012). Marso and Pigge (1989) investigated the extent to which
supervisors, principals, and teachers agree in assessing teachers’ proficiency in testing.
Results showed that teachers rated their own proficiency higher than principals rated it,
while principals rated it higher than supervisors did. Generally, it was found that all
groups needed more training in test construction skills.
<br></p>