Seels & Richey (1994) define evaluation as "the process of determining the adequacy of instruction and learning" (p. 54). The AECT (2001) has identified evaluation as one of five domains that define the knowledge base and functions of instructional technologists.
The evaluation domain includes four subdomains:
5.1 Problem Analysis: "[D]etermining the nature and parameters of the problem by using information-gathering and decision-making strategies" (Seels & Richey, 1994, p. 56).
5.2 Criterion-Referenced Measurement: "[T]echniques for determining learner mastery of pre-specified content" (Seels & Richey, 1994, p. 56). Criterion-referenced measurement can also be used to evaluate software, lesson plans, teacher effectiveness, journal articles, and many other items instructional technologists are called upon to evaluate.
5.3 Formative and Summative Evaluation: "Formative evaluation involves gathering information on adequacy and using this information as a basis for further development. Summative evaluation involves gathering information on adequacy and using this information to make decisions about utilization" (Seels & Richey, 1994, p. 57). Like criterion-referenced measurement, formative and summative evaluation can be applied to software, lesson plans, teacher effectiveness, journal articles, and many other items instructional technologists are called upon to evaluate.
5.4 Long-Range Planning: "[Focusing] on the organization as a whole is strategic planning ... usually defined as a future period of about three to five years or longer. During strategic planning, managers are trying to decide in the present what must be done to ensure organization success in the future" (Certo et al., 1990, p. 168).
Through the ITMA program, I learned that evaluation is much more systematic than I had previously thought. As a classroom teacher, I had also tended to associate evaluation with my students rather than with myself. Through ITMA, I learned that evaluating instruction means determining not only whether students learned the material, but also whether I taught it well enough for learning to take place. In my new role on the Technology Committee at my school, I will use what I have learned to complete a problem analysis and to do some long-range planning with my school's administration.
Many problems complicate the use of the World Wide Web in education, from access, to blocked sites, to websites that disappear from the Web. In Education and the Web, I considered some possible impediments to using the World Wide Web to meet educational needs and discussed how to minimize their effects. That early problem analysis encouraged me to think about the kinds of problems that arise in integrating Web technology into the classroom and how to overcome them. More recently, as a member of the Technology Committee at my school, I created a survey for our faculty. My studies in ITMA helped a great deal in constructing the survey; it was through courses such as Educational Research that I learned about the Likert scale. I based the survey on several issues that seemed pervasive in our school community.
I had several opportunities to use criterion-referenced measurements in Instructional Media. In fact, I particularly liked the ABCD method for writing objectives. I include as artifacts my criterion-referenced evaluations of a game called Grammar Ninja, a visual element found on the Web, and a lesson plan on Zora Neale Hurston's Their Eyes Were Watching God. Each evaluation was conducted according to criteria set forth in Smaldino, Lowther & Russell (2008): the appraisal checklist for simulations and games, the visual design checklist, and the ASSURE model for planning lessons.
For Education and the Web, I created a set of criteria to evaluate websites and conducted an evaluation of two websites: one I considered reliable and one I considered unreliable based on the criteria I established. These criteria have been useful not just for me, but also for my students as they conduct research on the World Wide Web.
In Spring 2009, my second semester of studies in the ITMA program, I took Instructional Media and was asked to describe my experiences as a distance learner. It is interesting to me that I identified some common issues in distance learning in my own experience even though I had not yet taken Telecommunications and Distance Learning. Over time, my thoughts about the program would change somewhat (please see the Reflections page), but in general, the piece is a good formative evaluation of the program as I had experienced it up to that point in my studies.
My professional development course project for Project and Report was evaluated both by several of my peers in the ITMA program and by me. In my evaluation report, I discussed how I created the evaluation criteria for the project, presented the results of the evaluations, and considered possible reasons for the ratings and comments I received.
As a final project for Software Evaluation, I selected a piece of software and evaluated it in light of its potential to help my students meet a course objective. I chose to evaluate Curio 6, a planning program that I hoped would help my students write a research paper. I discovered that the software is available only for the Mac, which precluded its use in our PC-based school, but I was intrigued enough by its possibilities to purchase it for myself and to recommend it to students who own Macs.
After my colleagues took the technology survey I created as part of my work on the Technology Committee at my school, a colleague on the committee compiled the results, including comments he drew from the surveys, and sent me a report. I wrote a reflection based on the survey results that I hope will inform a long-range plan for alleviating some of the technology problems at my school.