CIPP Model to Evaluate the Use of Computer Technology in the Curriculum of the Urban Elementary School

Evaluation of the programmes and procedures in a curriculum, at any level of schooling, is vitally important because it contributes directly to their further improvement. Evaluation is commonly understood as the process of collecting data on how well something performs in terms of its contribution to students' learning and development. Any programme in an educational curriculum is evaluated in terms of the results it achieves, that is, its performance and effectiveness. Many evaluation approaches are in use to ensure that the objectives set for a programme are attained and that the long-term goals of student learning and development are accomplished. The Eight-Year Study model, Tyler's objectives-based model, the Metfessel-Michael model, and Michael Provus's Discrepancy Evaluation Model are among the popular models rigorously applied in the education sector for programme evaluation (Tyler, 1942; Metfessel and Michael, 1967). Among these, the CIPP model has come to be accepted as the most sought-after, especially for technology-related programmes and procedures in school and college curricula.

CIPP is an acronym for Context, Input, Process and Product. The underlying principle of the model is that it provides a rationale for determining objectives and requires the evaluation of context, input, process and product in judging the value and performance of a programme. The model was propounded by Daniel Stufflebeam and his colleagues in the 1960s, growing out of their experience of assessing education projects for the Ohio public schools (Stufflebeam et al., 1971).

In the CIPP model, information plays a key role in making better decisions and in evaluating programmes, and data collection and reporting are central to effective programme management. The framework operates as a means of connecting the evaluation process with programme decision-making. Its fundamental principle is a cycle of planning, structuring, implementing, and reviewing and revising decisions, each of which is assessed through a different aspect of evaluation: context, input, process and product, respectively. The central contention of the CIPP model is that it formulates evaluation so that it is directly pertinent to the requirements of decision-makers at the different stages and activities of a programme (Stake, 1975).

Context, Input, Process, and Product Evaluations

These are the core concepts of the CIPP model as advocated by Daniel Stufflebeam. Context evaluation examines needs, problems, resources, and opportunities in order to help decision-makers define objectives and priorities. The second aspect, input evaluation, assesses alternative approaches, competing programme plans, staffing plans, and budgets for their feasibility and potential cost-effectiveness in meeting targeted needs and attaining objectives (Cronbach, 1982). The third stage, process evaluation, examines the implementation of activities and the people carrying them out, monitoring programme performance so that results can be interpreted. Through product evaluation, the CIPP model attempts to identify and assess outcomes, planned and unplanned, short term and long term (Fetterman, 1994).
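To make the mapping between evaluations and decisions concrete, the following minimal Python sketch encodes the four components and the decision each one informs. The class and field names are illustrative choices, not terminology from the CIPP literature.

    # A minimal sketch: the four CIPP components and the decision each informs.
    # Class and field names are illustrative, not CIPP terminology.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CippComponent:
        name: str               # e.g. "Context"
        decision_type: str      # the kind of decision this evaluation supports
        guiding_question: str   # the question the evaluation answers

    CIPP_MODEL = [
        CippComponent("Context", "Planning", "What should we do?"),
        CippComponent("Input", "Structuring", "How should we do it?"),
        CippComponent("Process", "Implementing", "Are we doing it as planned?"),
        CippComponent("Product", "Recycling", "Did it work?"),
    ]

    for c in CIPP_MODEL:
        print(f"{c.name} evaluation -> {c.decision_type} decisions: {c.guiding_question}")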

The aspects of CIPP evaluation discussed above can be better described in terms of the following questions, each of which the model addresses:

What should be done?

This question is addressed by systematically gathering and analysing needs-assessment data to agree on goals, priorities and objectives. For example, a context evaluation of a computer technology programme may consist of a study of the programme's present goals, the test scores students have achieved, teachers' problems, and the organisation's policies and procedures (Feuerstein, 1986).
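As a hedged illustration of that data-gathering step, the sketch below aggregates hypothetical needs-assessment inputs (test scores and teacher-reported problems) into a simple summary a planner might review; every field name and the target threshold are invented for the example.

    # Illustrative only: aggregate hypothetical needs-assessment data into a
    # summary for planning decisions. Field names and the target are invented.
    def summarise_context(test_scores, teacher_problems, target_mean=70.0):
        """Return a simple needs summary from scores and reported problems."""
        mean_score = sum(test_scores) / len(test_scores) if test_scores else 0.0
        return {
            "mean_test_score": round(mean_score, 1),
            "meets_target": mean_score >= target_mean,
            # the most frequently reported problems, most common first
            "top_problems": sorted(set(teacher_problems),
                                   key=teacher_problems.count,
                                   reverse=True)[:3],
        }

    print(summarise_context([62, 75, 68], ["lab access", "training", "lab access"]))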

How should it be done?

This involves identifying the stages and all of the inputs (resources) necessarily required to meet the newly framed goals and objectives. It also involves identifying successful external activities, resources and information.
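One way to picture an input evaluation's comparison of alternatives is the small sketch below, which ranks invented programme plans by a crude benefit-per-cost ratio; the plans, benefit scores, and costs are placeholders, and a real input evaluation would weigh many more factors.

    # Hypothetical input evaluation: rank alternative programme plans by a
    # crude cost-effectiveness ratio. Plans, benefits, and costs are invented.
    plans = [
        {"name": "one laptop per classroom", "benefit": 6.0, "cost": 30_000},
        {"name": "shared computer lab",      "benefit": 5.0, "cost": 18_000},
        {"name": "tablet carts",             "benefit": 7.0, "cost": 42_000},
    ]

    # Sort so the plan promising the most benefit per unit cost comes first.
    for plan in sorted(plans, key=lambda p: p["benefit"] / p["cost"], reverse=True):
        print(f'{plan["name"]}: {plan["benefit"] / plan["cost"]:.5f} benefit per unit cost')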

Are we doing it as planned?

This aspect of the evaluation supports decision-makers by providing sufficient information about how well the curriculum is being realised using computer technology. It then becomes possible to supervise the implementation process continuously, so that decision-makers can learn how closely the programme follows the desired plans and guidelines.

Did the programme work?

This stage informs recycling decisions, as it examines whether the actual results conform to the planned (anticipated) results. The model measures the actual results and compares them with the anticipated results so that decision-makers are in a position to decide whether the programme should be continued, modified, or dropped altogether.
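A minimal sketch of that comparison logic, with arbitrary attainment cut-offs that are not part of the CIPP literature, might look like this:

    # A hedged sketch of the product-evaluation decision: compare actual with
    # anticipated results and suggest a recycling decision. The 0.9 and 0.6
    # cut-offs are arbitrary placeholders, not part of the CIPP literature.
    def recycling_decision(anticipated, actual):
        attainment = actual / anticipated if anticipated else 0.0
        if attainment >= 0.9:
            return "continue"    # results essentially match the plan
        if attainment >= 0.6:
            return "modify"      # partial attainment: revise and recycle
        return "drop"            # results fall well short of the plan

    print(recycling_decision(anticipated=80, actual=76))  # -> continue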

The four aspects of evaluation in the CIPP model, the decisions they facilitate, and the kinds of question they answer are summarised below:

 

Aspect of evaluation | Type of decision        | Kind of question answered
-------------------- | ----------------------- | ------------------------------------------------
Context evaluation   | Planning decisions      | What should we do?
Input evaluation     | Structuring decisions   | How should we do it?
Process evaluation   | Implementing decisions  | Are we doing it as planned? And if not, why not?
Product evaluation   | Recycling decisions     | Did it work?

Figure 1 The CIPP model of evaluation

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.

Fetterman, D. M. (1994). Empowerment evaluation. Evaluation Practice, 15(1), 1-15.

Feuerstein, M.-T. (1986). Partners in evaluation: Evaluating development and community programmes with participants. London: Macmillan Press.

Metfessel, N. S., & Michael, W. B. (1967). A paradigm involving multiple criterion measures for the evaluation of the effectiveness of school programs. Educational and Psychological Measurement, Winter 1967, 931-943. Cited in Ornstein and Hunkins (1988), pp. 256-257.

Stake, R. E. (1975). Evaluating the arts in education: A responsive approach. Columbus, OH: Charles E. Merrill.

Stufflebeam, D. L., et al. (1971). Educational evaluation and decision making. Itasca, IL: F. E. Peacock.

Stufflebeam, D. L., Candoli, C., & Nicholls, C. (1995). A portfolio for evaluation of school superintendents. Kalamazoo: Center for Research on Educational Accountability and Teacher Evaluation, The Evaluation Center, Western Michigan University.

Tyler, R. W. (1942). General statement on evaluation. Journal of Educational Research, 492-501. Cited in Ornstein and Hunkins (1988), p. 256.
