
A is for Assessment

I have thought for some time that if I ever get myself together enough to write a book in the field of education, my subject would be assessment. It’s probably the issue I think about the most often. It truly bothers me that it’s done so poorly—not just with standardized tests, but also in classroom settings. It’s too big for a blog post, but I will put a few of my thoughts together.

Several years ago, and some of you have been reading this blog long enough to remember, I read Understanding by Design by Grant Wiggins and Jay McTighe. When I read that book, things really clicked for me. I cannot honestly say that I create UbD units for everything I teach, but one aspect of UbD that has really stayed with me is authentic assessment. I don’t give tests, even though UbD says tests are fine in addition to performance tasks. I give quizzes, but rarely with multiple choice, true/false, or other types of purely objective questions. I tend to ask more open-ended questions that require students to tell me what they know about a given topic. Aside from these types of quizzes, the main types of summative assessments I give are writing assignments, discussions, and projects.

Our school is incorporating more project-based learning. Project-based learning is not the same thing as doing projects. I have had to do plenty of projects in school that were more or less busy work and didn’t demonstrate much learning. Those old dioramas come to mind. Quite a few posters come to mind as well. However, I do recall doing some projects as a part of project-based learning that required deeper learning. For instance, in the sixth grade, I created a tour guide for Venezuela. I am sure that my social studies teacher required certain elements, such as tourist destinations, exchange rates, and the like, but what I remember is researching the country and creating the pages in my guide so that my readers could learn everything they needed to know about the country in order to prepare for a visit. I still remember showing the project to my language arts teacher, who told me, “Oh, now I want to go to Venezuela.” I remember doing the work and what I learned because it was an authentic assessment that placed me in the role of a tour guide writer who needed to convince readers to visit a country, and it felt fantastic when my language arts teacher liked the project. My social studies teacher easily could have asked us to write a research report that included the same information, but I doubt I’d still remember the research report more than 30 years later, nor would I remember what I’d learned about Venezuela. The most important thing is that I did all the work. I did the reading and research. I created the tour guide. My teacher must have given me class time, but I recall sitting by myself in the library, with a copy of Fodor’s Travel Guide, encyclopedias, and other books.

One of the reasons I am an advocate for authentic, project-based assessment is that I have seen the students’ engagement in the learning, and I have seen how it helps students to learn and remember more of what they learn. There is a saying that has been bandied around to the point of cliché, but it’s worth sharing at this point:

Tell me and I forget. Teach me and I may remember. Involve me and I learn. —attributed to Benjamin Franklin

Some years ago, a student gave me a card that I have cherished. In it, she wrote that she felt the work she did in my class was relevant. To be quite honest, the work I assigned, especially before I became thoughtful about designing for understanding and authentic assessment, was not always relevant. In fact, it often wasn’t. Students should understand why what they are learning is important and what they might do with it in the future. We’re not always great at communicating the importance of the work we assign. We need to reflect on the work we ask students to do. We need to determine what it is that we want students to learn, and we need to plan lessons and assessments that will help the students learn that information. We also need to give students agency and choices. Students should have a role in selecting reading and writing assignments. They should be given opportunities to discuss what they are learning in their reading and writing, too. It is in this way that we can involve students so that they learn.

None of that is to say that we do away with essays or tests, but we need to ask students to apply what they are learning in our classes so that they understand they’re not learning it for a test. I have only scratched the surface and don’t feel I’ve said a whole lot here, but please check out some of my other posts on assessment for more, and of course, more will come, as I can’t seem to leave this topic alone. (See tags and category links below for more on assessment.)

Chalkboard background: Karin Dalziel

What is This Test Measuring?

I have been studying for the Technology Education GACE (Georgia Assessments for the Certification of Educators) test I will take next month. This test is the last step in obtaining certification to teach technology. However, I have some concerns about the test based on the study questions provided at the GACE website. Technology covers a wide range of courses and fields. Were I to teach robotics or electronics, it would be important for me to know how transistors work, which is one of the free response questions. However, I wonder, given the fact that my goals are to teach my colleagues and students about computers and similar devices, how important is it that I know the safety procedures for operating a lathe? Or that the process used to increase the density of concrete by removing air voids is called rodding? I suppose I might, at some point, need to understand economics of supply and demand and perhaps even the advantages of oxyacetylene cutting torches over plasma cutting torches. Fair enough. But the advantage of flat-sawed lumber over quarter-sawed lumber?

Even more troubling to me than the inclusion of questions related to what I would term “industrial arts” is the exclusion of questions about what I might actually do. For instance, where are the questions about the instructional design process (emphasized so heavily in my master’s course work)? Where are the questions about evaluation of websites? Where are the questions about the process for evaluating tools such as software for purchase? Where are the questions about multimedia authoring? Digital audio? Instructional media? Even basic computer literacy?

I believe that this test is designed to test teachers from a variety of instructional backgrounds, whether that background is industrial arts, computers, construction, manufacturing technology, or one of several other disciplines, but that’s precisely the problem. This test, from all appearances, is spread out across too many different disciplines. When I took the Teacher Candidate Test to be certified as an English teacher, all the questions were related to my discipline. They were about literature, writing, vocabulary, and grammar.

This test appears to be about several things that I don’t believe are related to my discipline. If I successfully pass it, I will be certified to teach wood shop. Do I feel qualified to teach wood shop? Not in the slightest. There is too much I don’t know about the equipment and procedures to be successful in that position. This test would also determine whether or not I could teach computer science. Do I feel qualified to teach computer science? Certainly, and this test won’t change that.

I understand that all of these areas can be thought of as “technology,” but I think it’s understood that when we use the term “technology education,” we’re talking about teaching others how to use computers, interactive whiteboards, software, communication devices, and similar tools. We’re talking about which tools to use to accomplish certain tasks. We’re talking about 21st-century skills. I’m not concerned about passing the test, but I am concerned that passing it doesn’t really communicate anything to anyone about how ready I am to teach the material covered on the test. I would propose that the test be rewritten to focus on the different disciplines that currently fall under technology education so that both the test-takers and the administrators who hire technology educators can be sure that candidates have the skills required for their particular discipline. But I invite you to take a look at the testing preparation materials and tell me what you think.

Creative Commons License photo credit: COCOEN daily photos

Georgia’s CRCT

When 40% of an individual teacher’s students fail a standardized test, I imagine the teacher would be scrutinized, and rightly so. Whatever I think of standardized tests, if 40% of a teacher’s students fail one, something is wrong with the teacher’s instruction. If 40% of a school’s students failed a standardized test, the school might be sanctioned depending on other factors — part of making Adequate Yearly Progress (AYP) for NCLB means schools must maintain or even improve their pass rates for standardized tests. If schools fail to make AYP, a series of sanctions will follow, from losing funds to faculty “reorganization.” Again, if 40% of students at a school fail a test, there is something wrong with the school’s instruction.

But what if 40% of students in an entire state fail a test that they must pass in order to go to high school?

Unofficial results indicate that 40% of Georgia’s 8th grade students failed the math portion of the Criterion-Referenced Competency Test (CRCT), the main standardized test used in Georgia to meet NCLB requirements regarding testing. Last year, about 19% of students failed the math portion of the test. Students must pass this section of the CRCT in order to proceed to high school. Some are blaming the new math curriculum, while others are saying the test must be poorly constructed. I can’t say, not having seen it. I asked my daughter, who took it, and she believes she passed: students at her school who didn’t pass were instructed to see the counselor, and she received no such instruction. She has been an A-student in math all year, so I shouldn’t have cause to worry, but the fact that 40% of students failed the test worries me.

The news regarding social studies was even worse. Less than 30% of 6th and 7th graders passed the social studies portion of the CRCT. Again, results like this for one teacher or one school can be explained, but for a whole state? Especially troubling to me are reports from students that they were asked questions about material they hadn’t learned. How could that happen on a “criterion-referenced” test?

I know the perception exists that Georgia schools are universally backward, but after having graduated from a Georgia school and watching my children in Georgia schools, I have to say that like everywhere else, Georgia has good schools and poor schools. A pertinent quote from the New Georgia Encyclopedia entry on Public Education:

The Scholastic Assessment Test (SAT) is a college entrance exam often used to compare the performance of high school students among states and among school districts within a state. In 2003 Georgia students averaged 984 (combined verbal and math scores) on the SAT, compared with a national average score of 1026. When SAT scores are used to compare states, Georgia usually finishes near the bottom. The College Board, which administers the SAT, cautions against the use of SAT scores for this purpose, because the population of students taking the SAT in each state varies considerably. In some states, most students take a different test, the American College Testing [sic] (ACT). In those states, students who take the SAT generally have strong academic backgrounds and plan to apply to some of the nation’s most selective colleges and scholarship programs. For example, in 2002 there were nearly 54,000 Georgia students who took the SAT. In contrast, only 1,900 Iowa students took the SAT. (As a point of reference, Georgia had more than 72,000 high school graduates in 2002, while Iowa had nearly 34,000 high school graduates.)

My point in bringing this up is that I think it’s unfair to dismiss problems with the CRCT with a blanket generalization like “Georgia’s just got bad schools.”

So what happened, I wonder?