Kirkpatrick's Four Levels of Evaluation

=ASSESSING TRAINING EFFECTIVENESS= @http://www.e-learningguru.com/articles/art2_8.htm

by Donald Kirkpatrick.
According to this model, evaluation begins at level one and then, as time and budget allow, should move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation. Thus, each successive level represents a more precise measure of the effectiveness of the training program, but at the same time requires a more rigorous and time-consuming analysis.

Level 1 Evaluation - Reaction
Reaction . . . Just as the word implies, evaluation at this level measures how participants in a training program react. It attempts to answer questions regarding the participants' perceptions - Did you like it? Was the material relevant to your work? According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of a training program. In addition, the participants' reactions have important consequences for learning (level two). Although a positive reaction does not guarantee learning, a negative reaction almost certainly reduces its possibility.

Level 1: Reaction
Evaluating employee reaction to training is the easiest: ten minutes before the end of the last session, the instructor hands out an evaluation questionnaire to each participant. After each question, participants are given a choice of "poor" to "excellent," and an invitation to give a comment or suggestion. That evening, the training manager takes all 29 questionnaires home (five new managers were sick on the last day, and six had to leave early) and tallies the responses. The data, along with recommendations, are presented to headquarters the next day. The 11 absentees never give their opinions.

The major limitation of trainee surveys is that they cannot tell you if the training actually worked. If the responses are highly negative, the program probably didn't do the job (though it might have). But favorable responses don't mean that trainees will retain or act on the material. You may enjoy outdoor adventure-based workshops even when you know that the lessons won't directly transfer to the office. Cynical trainers have dubbed these surveys "Smile Sheets." They are necessary, but not sufficient.
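The tallying step described above can be sketched as a short script. This is only an illustrative sketch: the question texts, the five-point "poor" to "excellent" scale, and the sample responses are assumptions for the example, not data from the scenario.

```python
from collections import Counter

# Assumed five-point rating scale from "poor" to "excellent".
SCALE = ["poor", "fair", "good", "very good", "excellent"]

# Hypothetical returned questionnaires: one dict per participant,
# mapping each question to the rating that participant chose.
questionnaires = [
    {"Did you like it?": "excellent",
     "Was the material relevant to your work?": "good"},
    {"Did you like it?": "good",
     "Was the material relevant to your work?": "excellent"},
    {"Did you like it?": "good",
     "Was the material relevant to your work?": "poor"},
]

def tally(sheets):
    """Count how many participants chose each rating for each question."""
    totals = {}
    for sheet in sheets:
        for question, rating in sheet.items():
            totals.setdefault(question, Counter())[rating] += 1
    return totals

for question, counts in tally(questionnaires).items():
    print(question, dict(counts))
```

A real form would also collect the free-text comments; only the fixed-choice ratings are tallied here.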

Measuring Learning Outcomes
To measure learning outcomes, first revisit your objectives. Make certain that 1) you have met the objectives in your course, and 2) you have devised ways for your learners to demonstrate their knowledge of the objectives. We immediately think "test" or "quiz," but many adult learners react poorly to testing situations even though they may perform well on the job. Consider the wide array of methods for measuring learning. There are a variety of classroom assessment techniques that can be adapted to training environments. Training Closers can often be excellent ways to measure knowledge and performance while adding fun and excitement to the end of the program.

SAMPLE EVALUATION
What Matters Most Evaluation
Facilitator: ____________________ Date: ____________________

Rate each statement: Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree
- The presentation was well organized.
- The topics covered in this session met my expectations.
- The support material was useful.
- Overall, I learned and benefited from this session.
- The facilitator presented the material in a clear and understandable way.
- The facilitator was prepared to teach this session.
- The facilitator moved at an appropriate pace.

Short Answer
- What is your biggest challenge involving the subject matter?
- What is your learning goal for this session?
- As a result of this session, I plan to accomplish the following action items, incorporating the skills I've learned, during the next 90 days:
- What did you like most about this session?
- What suggestions do you have to improve this session?

Thank you for your feedback. Please return to the Director of Organizational Development.

Level 2 Evaluation - Learning
Assessing at this level moves the evaluation beyond learner satisfaction and attempts to assess the extent to which students have advanced in skills, knowledge, or attitude. Measurement at this level is more difficult and laborious than level one. Methods range from formal and informal testing to team assessment and self-assessment. If possible, participants take the test or assessment before the training (pretest) and after the training (posttest) to determine the amount of learning that has occurred.
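The pretest/posttest comparison can be illustrated with a short sketch. The raw gain is simply posttest minus pretest; the normalized gain (gain divided by the room left to improve) is a commonly used add-on for comparing trainees who start at different levels - it is an illustrative choice here, not part of Kirkpatrick's model, and the scores are invented.

```python
def learning_gain(pretest, posttest, max_score=100):
    """Return (raw gain, normalized gain) for one trainee's pre/post scores.

    Normalized gain = raw gain / (max_score - pretest), i.e. the fraction
    of the possible improvement that was actually achieved.
    """
    raw = posttest - pretest
    if pretest < max_score:
        normalized = raw / (max_score - pretest)
    else:
        normalized = 0.0  # nothing left to gain
    return raw, normalized

# Hypothetical (pretest, posttest) scores for three trainees, out of 100:
scores = [(40, 70), (60, 90), (85, 95)]
for pre, post in scores:
    raw, norm = learning_gain(pre, post)
    print(f"pre={pre} post={post} raw_gain={raw} normalized={norm:.2f}")
```

Note how the first two trainees have the same raw gain (30 points) but different normalized gains, because the second started closer to the ceiling.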

Level II Learning
Measuring the learning that takes place in a training program is important in order to validate the learning objectives. Evaluating learning typically focuses on such questions as:
1. What knowledge was acquired?
2. What skills were developed or enhanced?
3. What attitudes were changed?

Demonstration
- a method in which initially the trainer, or a trainee under an instructor's guidance, demonstrates how the task in the performance objective is correctly performed. This method employs drill and coaching. It is effective with smaller groups because of the involvement of students in the learning process.

Discussion -
a method characterized by 2-way communication, immediate feedback and peer interaction. Trainers serve as facilitators, mediators, mentors or "devil's advocates."

Role play -
this method provides a high level of student involvement. It allows trainees to experience scenarios with varied inputs and outcomes. Though written, structured, and controlled by trainers, trainees provide much of the stimuli.

Case study -
a method that provides trainees with a set of particular facts or representations to which they must apply their knowledge, experience and judgment to reach a solution.

Simulation -
an instructional method where trainees practice new skills in a realistic environment, but one that affords little or no consequences for incorrect actions. In this "safe" environment students are free to err and learn from their mistakes.

Hands on exercises -
a phrase rather than a specific instructional method. Normally characterized by trainees practicing applications of training received through earlier methods, particularly when the learning objective has a strong psychomotor element. Settings include labs, simulators, and training platforms.

//**Level 2 is not easy to measure**//
Measurements at level 2 might indicate that a program's instructional methods are effective or ineffective, but they will not show whether the newly acquired skills will be used back in the working environment.

Measuring learning is more difficult and time-consuming than asking trainees if they liked the training. Good tests are hard to construct, and can be time-consuming to take if they measure all aspects of the course content. Tests can be objective, with specific correct answers, or they can consist of practical demonstrations at the end of a training session, such as a role play or simulation. These "subjective" tests, such as role-play situations, must be judged objectively. The trainer might observe the role play and, using a checklist, indicate which behaviors are demonstrated and mastered. Although the measurement of learning does not guarantee results, only learned behavior can be transferred to the work environment.
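The checklist approach to judging a role play objectively might look like the following sketch. The behaviors on the checklist and the 80% mastery threshold are hypothetical; a real checklist would come from the course's performance objectives.

```python
# Hypothetical observation checklist of target behaviors for a
# customer-service role play (illustrative, not from the source).
CHECKLIST = [
    "greets the customer",
    "asks open-ended questions",
    "restates the customer's concern",
    "proposes a solution",
    "confirms next steps",
]

def score_role_play(observed, checklist=CHECKLIST, mastery_threshold=0.8):
    """Return (fraction of checklist behaviors demonstrated, mastered?)."""
    demonstrated = sum(1 for behavior in checklist if behavior in observed)
    fraction = demonstrated / len(checklist)
    return fraction, fraction >= mastery_threshold

# The trainer ticks off what was actually observed during the role play:
fraction, mastered = score_role_play({
    "greets the customer",
    "asks open-ended questions",
    "proposes a solution",
    "confirms next steps",
})
print(f"{fraction:.0%} of behaviors demonstrated; mastered={mastered}")
```

Because every observer works from the same list and threshold, two trainers watching the same role play should arrive at the same score.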

Level 3 Evaluation - Transfer
This level measures the change that has occurred in learners' behavior due to the training program - that is, whether the learning has transferred to the job. In Kirkpatrick's four-level model, each successive evaluation level builds on the information provided by the levels below it.

//Attempts to answer the question -//
Are the newly acquired skills, knowledge, or attitude being used in the everyday environment of the learner? For many trainers this level represents the truest assessment of a program's effectiveness. However, measuring at this level is difficult as it is often impossible to predict when the change in behavior will occur, and thus requires important decisions in terms of when to evaluate, how often to evaluate, and how to evaluate.

Level 4 Evaluation - Results
Level four evaluation attempts to assess training in terms of business results. For example, in one case sales transactions improved steadily after sales staff were trained in April 1997.