I had a great discussion with my students today. A couple of them asked me why I don’t grade participation in Socratic seminars. I used to. I stopped because I find that grading participation is slippery. If you quantify it, you run the risk of encouraging shallow participation for points. In their reflections, students share what they have learned as a result of the seminar. I think part of the concern the students shared is that their reflection must include a summary of ideas discussed in the seminar, and the students who raised the concern did not earn full points for their summaries. They argued that if they are trying to capture the discussion in their notes, they will not be as present in the discussion.
What I told the students is that grading is a means of communicating their learning, and if they would prefer to be assessed on participation because it helps them learn, then I will do what helps them learn. I asked that we have a discussion about it as a class. We had that discussion this morning, and I was really impressed with how the students were able to articulate what works for them in assessing seminars and why. They have a strong sense of what kind of assessment feels equitable and what does not. They were able to articulate why setting goals and assessing progress toward the goals was helpful, and why grading participation didn’t work for most of them.
I pointed out that the skills of note-taking and listening are important for success. Students need to listen to their teachers and peers—now and later in college—and be able to take notes on what they hear, so my rationale for assessing these skills is that they are skills that are important to practice. Yet, I understand their arguments as well. We cannot have a good seminar if students do not participate. On the other hand, their classmates insisted that participation was not a problem in our first seminar. At one point, they asked me to display our discussion map from last time (thanks, Equity Maps!). Did we actually have a problem that needed solving, or was our discussion working without grading participation?
The class consensus was to leave the assessment as is, particularly as they have only experienced one seminar so far and judgment based on one experience would not tell the whole story. I don’t think everyone was happy, and frankly, the discussion did become a bit heated. I don’t think that made the students feel comfortable. I asked them if they felt heard—not agreed with, because that’s not the same thing—but heard. I think the net result is that students appreciated the opportunity to share their ideas. I was super impressed with them, and I shared that feedback with them.
We have our second seminar tomorrow, and it will be interesting to see how this debate informs the discussion. In the end, the compromise/consensus seemed to be that students want to be assessed on making progress on their goals. Part of their reflection is to identify their goals for the next seminar. This means I need to go back into their last reflections and refresh my memory about what their individual goals are and ensure I give them feedback on their progress toward meeting their goals. They also asked for feedback on their contributions, though they recognized that one person’s idea of an insightful comment may differ from another’s.
The bottom line is that it’s important to engage students in the assessment of their learning. Some of the best discussions I have had with my students have centered on grading and assessment. They have a lot to say about assessment, but they are not always a part of the conversation about how they’ll be assessed. It was a good exercise for my students today to hear others’ perspectives on this topic and take those perspectives into consideration.
In June I successfully defended my dissertation at Northeastern University. My research focused on grading and assessment, which will likely not surprise anyone who has been reading this blog for a while, as I have written about grading and assessment frequently.
My dissertation was qualitative action research, a dissertation in practice grounded in the Carnegie Project on the Education Doctorate. Grading and assessment are ripe for qualitative action research because we have over a century of quantitative research in grading and assessment, and not as much positive change, at least with grading, as we might like to see. I might argue we are seeing more authentic assessment in schools, but grading remains, well, stuck. One of the reasons I think we’re stuck is that we believe persistent myths about grading.
Grades Communicate Students’ Proficiency
One of the most persistent myths about grading is that we agree on what grades mean. As long ago as 1888, researchers were raising questions about inter-rater reliability (Edgeworth, 1888). Study after study indicates that grades are highly inconsistent measures of students’ learning. Starch & Elliott (1912) conducted a study that examined consistency among graders and found that scores on student writing varied by 30-40 points out of 100, or a probable error of 4.5. You might be thinking, “yes, but isn’t writing a little subjective anyway? I’m sure that doesn’t happen in, say, math.” Well, the following year, Starch & Elliott (1913) found that scores on a geometry exam varied even more widely—as much as a probable error of 7.5. They ascribed the difference to several factors: graders may evaluate students’ methods for reaching the solution differently, assess the quality of students’ drawings differently, and assign different values to the problems.
Naturally, things have changed in a hundred years. What do more recent studies say? Brimi (2011) sought to answer that very question. Seventy-three participants working for the same school district were trained to use the 6+1 Traits of Writing Rubric developed by Education Northwest and then scored the same argumentative essay. The participants’ grades ranged from an A to an F on the traditional grading scale; furthermore, the range of scores assigned to the essay spanned 46 points (Brimi, 2011).
Grading is inconsistent for many reasons, but one of the chief reasons is that teachers evaluate different things when they grade. Some teachers offer extra credit or give students points for bringing supplies (Townsley & Varga, 2018). Teachers can be highly individualistic in selecting criteria for students’ performance (Bloxham et al., 2016). Other factors also impact how teachers evaluate students’ performance. For example, Brackett et al. (2013) found that a teacher’s mood while grading can impact students’ scores—teachers in a bad mood tend to rate students’ performance lower. This holds true even when grading more objective criteria such as correct spelling (Brackett et al., 2013). Think what this means as we are teaching in the midst of a pandemic and during a time when it feels as though teachers are being attacked from all sides.
One of the reasons traditional letter or number grades emerged is due to perceived inconsistency, inefficiency, and complication involved in narrative grade reports (Feldman, 2019). It was thought that letter grades could communicate learning both efficiently and plainly (Schneider & Hutt, 2014). By the 1940s, the A-F letter grade system had become the most popular grading system (Schneider & Hutt, 2014).
Traditional grades tend to be derived by averaging the performance on all assessments during a grading period; this average may not capture students’ eventual proficiency in learning and can place undue emphasis on performance anomalies rather than tendencies (Feldman, 2019). In addition, traditional grading sometimes incorporates assessment of student behaviors, such as participation, engagement, and effort (Feldman, 2019).
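To make the averaging problem concrete, here is a small sketch in Python. The scores are invented for illustration: a student who struggles early but finishes strong ends up with a middling average that hides their eventual proficiency.

```python
# Hypothetical assessment scores for a student who improves steadily
# over a grading period (invented numbers, for illustration only).
scores = [50, 62, 75, 88, 95]

# Traditional grading: average everything, weighting early struggles
# as heavily as final mastery.
average = sum(scores) / len(scores)

# A proficiency-oriented view might privilege the most recent evidence.
most_recent = scores[-1]

print(f"Average grade: {average:.0f}")         # 74: a low C on many scales
print(f"Most recent evidence: {most_recent}")  # 95: an A
```

The averaged grade reports a performance anomaly (the rough start) as heavily as the student’s final demonstrated proficiency.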
We might think that grades communicate students’ proficiency in learning, but there are simply too many variables to say this definitively.
Grades Motivate Students
One fear many educators express is that if students are not graded, they will not be motivated to do the work. At best, grades serve as extrinsic motivation for learning. When students care more about the grades than the learning, they are more likely to resort to academic dishonesty. In fact, pressure to earn high grades contributes to academic dishonesty and mental health problems (Rinn et al., 2014; Villeneuve et al., 2019). Grades affect students’ achievement, self-concept, and motivation (Casillas et al., 2012; Pulfrey et al., 2011). Students who earn low grades tend to achieve less and feel lower self-esteem over time (Klapp, 2018).
Fear of earning low grades or focus on earning high grades both serve as extrinsic motivators for learning rather than intrinsic motivators, which demonstrate more effectiveness in supporting learning (Froiland & Worrell, 2016; Hattie & Timperley, 2007). Intrinsic motivation is positively associated with both engagement and achievement (Froiland & Worrell, 2016; Hattie & Timperley, 2007). Helping students develop their intrinsic motivation to learn may increase students’ achievement (Froiland & Worrell, 2016). Extrinsic motivation to earn good grades or avoid the negative consequences of poor grades drives many students rather than the desire to learn, and over time, extrinsic motivation decreases students’ achievement (Hattie & Timperley, 2007). In addition, the reward of good grades tends to decrease motivation for otherwise engaging learning (Hattie & Timperley, 2007).
It’s worth noting that motivation appears to change depending on the grading system used. When students are graded using a 100-point system in which the sum of all student work is worth a total of 100 points, students tend to view each point deducted as a loss (Smith & Smith, 2009). Bies-Hernandez (2012) describes such grading systems as “loss-framed grading” (p. 179). However, when students are graded using a total points system tallying all points earned, they tend to view grades as opportunities to improve and build toward a desired grade (Smith & Smith, 2009). Students who are graded with a system weighting assignment categories by percentage fell in between students in the other grading groups (Smith & Smith, 2009). Even when controls ensured that the resulting grade was the same regardless of the calculation system, students’ responses on a Likert scale questionnaire indicated they still perceived greater risk in 100-point systems and were less motivated and self-assured (Smith & Smith, 2009). Bies-Hernandez (2012) replicated these findings and further found that students’ performance in courses with a loss-framed grading system also decreased. Thus, the framing of the grading system has an impact not only on students’ perceptions of their performance but also on their actual performance (Bies-Hernandez, 2012). The implication is that teachers’ approaches to grading may affect students’ academic achievement (Brookhart et al., 2016).
However, proficiency-based grading (sometimes known as competency-based grading, standards-based grading, or mastery-based grading) has the potential to make grades more meaningful and purposeful (Buckmiller et al., 2017; Guskey, 2007). Proficiency-based grading practices may also lead to greater academic achievement, particularly if the grades are paired with formative feedback (Hattie & Timperley, 2007). Proficiency-based grading practices may also foster more cooperation and less competition (Burleigh & Meegan, 2018). Taking academic risks, weighing differing conclusions, and considering varied points of view are all necessary for developing critical thinking skills, but if students must risk failing grades in order to do so, they are much more likely to take the safer route to earning a higher grade (Hayek et al., 2014; McMorran et al., 2017). Knowing that they could continue to learn, revise, and reflect on their work may increase students’ motivation to learn (Hattie & Timperley, 2007; McMorran et al., 2017).
100-point Grading Scales are More Precise than A-F or 4-Point Grading Scales
Do you know why we use the 100-point scale? It’s not because it’s more precise. It’s because it’s the scale in the gradebook software (Guskey, 2013; Guskey & Jung, 2016). The 100-point scale is terrible, and that’s a hill I’m willing to die on. The 100-point grading scale has become one of the most common scales for reporting students’ grades, but it is one of the most unreliable scales in use (Guskey, 2013).
The 100-point scale is inaccurate and inequitable because the scale is skewed toward failing grades (Feldman, 2019). Passing grades comprise only 40 points of the grading scale, typically spanning from 60 to 100 points (or from 70-100 points in some systems!), while failing grades comprise the remaining 60 (or even 70) points, spanning from 0 to 59 (or 0-69). Serious mathematical errors arise when teachers input zeros in the gradebook when students are missing work (Feldman, 2019). While this practice ostensibly holds students accountable for handing in work, it can make it impossible for students to recover academically (Feldman, 2019). The literature suggests that teachers may compensate for the 100-point scale’s mathematical errors by artificially raising grades in a number of ways (Schneider & Hutt, 2014), including grading formative assessments and executive function skills (Bowers, 2011; Brookhart et al., 2016; Townsley & Varga, 2018).
Unfortunately, a lot of educators perceive the 100-point grading scale to be more accurate (Brookhart & Guskey, 2019; Feldman, 2019). While using 100 points as opposed to four or five points may seem more accurate, it results in a probable error of five or six points; teachers find it difficult to distinguish levels of performance on a 100-point scale (Brookhart & Guskey, 2019). Some grading reformers advocate for the use of minimum grading, or inputting a minimum grade such as 50 percent, rather than inputting zeros for missing work; this practice reduces mathematical error (Carifio & Carey, 2013; Carifio & Carey, 2015; Feldman, 2019). Essentially what educators are doing when they use minimum grading, however, is compensating for the deficiencies of the 100-point scale by converting it to a rough approximation of the 4-point scale. In a four-point scale, failing grades span from 0-0.99 of a point, while passing grades span from 1-4 points (or 2-4 points in a system without a “D”).
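The arithmetic here can be sketched in a few lines of Python (the scores are invented for illustration): one zero in a 100-point gradebook drags an otherwise strong average into near-failing territory, while minimum grading keeps the penalty proportionate, roughly as a 4-point scale would.

```python
# Hypothetical 100-point gradebook with one missing assignment
# (invented numbers, for illustration only).
with_zero = [90, 85, 0, 92]      # zero entered for the missing work
with_minimum = [90, 85, 50, 92]  # minimum grade of 50 entered instead

avg_zero = sum(with_zero) / len(with_zero)           # 66.75: a D, despite three A's
avg_minimum = sum(with_minimum) / len(with_minimum)  # 79.25: penalized, but recoverable

# On a 4-point scale, a failing grade sits one step below passing
# (0 vs. 1), not sixty steps below, so a single failure cannot
# dominate the average the way a zero does on the 100-point scale.
four_point = [3.6, 3.4, 0, 3.7]
avg_four_point = sum(four_point) / len(four_point)   # 2.675: still a passing average
```

The minimum grade of 50 does for the 100-point scale roughly what the scale's structure already does on the 4-point scale: it caps how far one failure can pull the average down.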
Grades Reduce Bias
Variable and unreliable grading practices also introduce equity problems. Black students have less access to AP courses all over the United States (Francis & Darity, 2021). Schools that use gatekeeping methods (Francis & Darity, 2021), such as teacher recommendations and prerequisite grades, may be basing their decisions about students’ fitness for advanced coursework on subjective measures common in traditional grading (Feldman, 2019). Students of color are most impacted by teachers’ implicit bias (Feldman, 2019), especially if subjective, non-academic factors are included in assessment (Cvencek et al., 2018). Implicit bias may especially play a role in lower grades assigned to students of color when the criteria for proficiency are unclear or undefined (Quinn, 2020). Traditional grading’s subjectivity can harm all students, but students of color may be most impacted due to implicit bias (Feldman, 2019; Quinn, 2020).
However, proficiency-based grading can make grades more equitable and more reflective of students’ actual learning (Buckmiller et al., 2017). Proficiency-based grading may include using practices such as rubrics for evaluating student work and student-generated portfolios; however, it may also include traditional assessments such as tests (Baete & Hochbein, 2014; Buckmiller et al., 2017; Iamarino, 2014; Miller, 2013). Students’ grades are tied to their mastery of content, such as standards, knowledge, and skills, as opposed to an average of all the grades earned during a grading period or course (Iamarino, 2014; Miller, 2013). Teachers using proficiency-based grading typically provide students with feedback on formative assessments (Buckmiller et al., 2017). Students may revise and resubmit work in order to demonstrate their proficiency in learning (Buckmiller et al., 2017). Through revision, students demonstrate their learning of the content and skills. As a result, proficiency-based grades may more accurately reflect what students have learned rather than a snapshot of their performance on a single assessment.
We Have to Use Grades
Grades, at least in the form we’re familiar with, have actually not been around for very long (Schneider & Hutt, 2014). One of the worst reasons to perpetuate any system is the notion that we’ve always done it that way, especially when it’s not even true that we have always done it this way. The A-F grading system gained popularity as late as the 1940s—as I mentioned before—as educators saw a need to establish more uniform methods for determining students’ proficiency (Schneider & Hutt, 2014). For many years preceding the establishment of “traditional grading,” we used all sorts of other systems (good and bad) for measuring learning. This system is entrenched, but it’s not as old as people might think, and if we decided, collectively, that it no longer worked for us, we could find a better system. The problem is, well, that it’s a system, and systems are notoriously hard to change.
I have heard many educators express anxiety that students will either not be prepared for college or will not get into college unless they are graded. Many schools, however, have successfully eliminated traditional grades. Colleges understand the transcripts these students send them, and these students are able to go to college. For example, the Watershed School, a member of the Mastery Transcript Consortium, does not issue traditional letter grades or test students through final exams and has a 100% college acceptance rate (Plaskov, 2019). A college counselor I worked with told me anecdotally that “colleges are fine with grading that’s ‘non-traditional.’ Parents usually get very concerned about going off the A-F standard, but college admissions folks are experts on grading scales, and what I’ve consistently heard from them is that the most-accurate/least-translated reporting is what they like.”
My own personal experience is that some schools’ grading practices are more entrenched, and while another system of evaluation would work, changing the system wouldn’t be politically feasible. Proficiency-based grading shows additional promise here. Attaching grades to standards or competencies can make grades more accurate reflections of students’ proficiency in learning. Proficiency-based report cards have the potential to be more useful in understanding students’ learning than traditional report cards that include only a letter grade (Blauth & Hadjian, 2016; Swan et al., 2014). Swan et al. (2014) found that parents and teachers generally find proficiency-based reports more helpful and easier to understand, in addition to providing more and better information about students’ progress.
It’s worth noting that one study I examined indicated parents reported feeling less confidence in standards-based grade reports because the reports were unfamiliar and because the parents felt the school had not taken their feelings as stakeholders into account before implementing them (Franklin et al., 2016). These parents also reported finding the grade reports unclear (Franklin et al., 2016). Importantly, Franklin et al. (2016) indicate the parents in their study were all dissatisfied with standards-based report cards; these parents also described themselves as strong students who enjoyed school. Their study did not include parents who expressed satisfaction with the reports (Franklin et al., 2016).
The Bottom Line?
I think it’s important for teachers to open dialogue with students and parents, read the research on grading and assessment, and work within the system they’re in to make grades more accurate and meaningful. I highly recommend the works referenced in this post, which is derived largely from my dissertation. For a good deep dive, Joe Feldman’s book Grading for Equity is excellent.
Baete, G. S. & Hochbein, C. (2014). Project proficiency: Assessing the independent effects of high school reform in an urban district. The Journal of Educational Research, 107(6), 493-511. https://doi.org/10.1080/00220671.2013.823371
Bies-Hernandez, N. J. (2012). The effects of framing grades on student learning and preferences. Teaching of Psychology, 39(3), 176-180. https://doi.org/10.1177/0098628312450429
Blauth, E. & Hadjian, S. (2016). How selective colleges and universities evaluate proficiency-based high school transcripts: Insights for students and schools. New England Board of Higher Education. https://www.nebhe.org/info/pdf/policy/Policy_Spotlight_How_Colleges_Evaluate_PB_HS_Transcripts_April_2016.pdf
Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2016). Let’s stop the pretence of consistent marking: Exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 41(3), 466-481. https://doi.org/10.1080/02602938.2015.1024607
Bowers, A. J. (2011). What’s in a grade? The multidimensional nature of what teacher-assigned grades assess in high school. Educational Research and Evaluation, 17(3), 151-159. https://doi.org/10.1080/13803611.2011.597112
Brackett, M. A., Floman, J. L., Ashton-James, C., Cherkasskiy, L., & Salovey, P. (2013). The influence of teacher emotion on grading practices: A preliminary look at the evaluation of student writing. Teachers and Teaching, 19(6), 634-646. https://doi.org/10.1080/13540602.2013.827453
Brimi, H. M. (2011). Reliability of grading high school work in English. Practical Assessment, Research & Evaluation, 16(17). http://pareonline.net/getvn.asp?v=16&n=17
Brookhart, S. M., & Guskey, T. R. (2019). Reliability in grading and grading scales. In T. R. Guskey & S. M. Brookhart (Eds.), What we know about grading: What works, what doesn’t, and what’s next (pp. 13-31). ASCD.
Brookhart, S. M., Guskey, T. R., Bowers, A. J., McMillan, J. H., Smith, J. K., Smith, L. F., Stevens, M. T., & Welsh, M. E. (2016). A century of grading research: Meaning and value in the most common educational measure. Review of Educational Research, 86(4), 803-848. https://doi.org/10.3102/0034654316672069
Buckmiller, T., Peters, R., & Kruse, J. (2017). Questioning points and percentages: Standards-based grading (SBG) in higher education. College Teaching, 65(4), 151-157. https://doi.org/10.1080/87567555.2017.1302919
Burleigh, T. J. & Meegan, D. V. (2018). Risky prospects and risk aversion tendencies: Does competition in the classroom depend on grading practices and knowledge of peer-status? Social Psychology of Education, 21(2), 323-335. https://doi.org/10.1007/s11218-017-9414-x
Carifio, J. & Carey, T. (2013). The arguments and data in favor of minimum grading. Mid-Western Educational Researcher, 25(4), 19-30.
Carifio, J. & Carey, T. (2015). Further findings on the positive effects of minimum grading. Journal of Education and Social Policy, 2(4), 130-136.
Casillas, A., Robbins, S., Allen, J., Kuo, Y. L., Hanson, M. A., & Schmeiser, C. (2012). Predicting early academic failure in high school from prior academic achievement, psychosocial characteristics, and behavior. Journal of Educational Psychology, 104(2), 407-420. https://doi.org/10.1037/a0027180
Cvencek, D., Fryberg, S. A., Covarrubias, R., & Meltzoff, A. N. (2018). Self-concepts, self-esteem, and academic achievement of minority and majority North American elementary school children. Child Development, 89(4), 1099-1109. https://doi.org/10.1111/cdev.12802
Edgeworth, F. Y. (1888). The statistics of examinations. Journal of the Royal Statistical Society, 51(3), 599-635.
Feldman, J. (2019). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin.
Francis, D. V. & Darity, W. A., Jr. (2021). Separate and unequal under one roof: The legacy of racialized tracking perpetuates within-school segregation. RSF: The Russell Sage Foundation Journal of the Social Sciences, 7(1), 187-202. https://doi.org/10.7758/RSF.2021.7.1.11
Franklin, A., Buckmiller, T., & Kruse, J. (2016). Vocal and vehement: Understanding parents’ aversion to standards-based grading. International Journal of Social Science Studies, 4(11), 19-29.
Froiland, J. M. & Worrell, F. C. (2016). Intrinsic motivation, learning goals, engagement, and achievement in a diverse high school. Psychology in the Schools, 53(3), 321-336. https://doi.org/10.1002/pits.21901
Guskey, T. R. (2007). Multiple sources of evidence: An analysis of stakeholders’ perceptions of various indicators of student learning. Educational Measurement: Issues and Practice, 26(1), 19-27. https://doi.org/10.1111/j.1745-3992.2007.00085.x
Guskey, T. R. (2013). The case against percentage grades. Educational Leadership, 71(1), 68-72.
Guskey, T. R. & Jung, L. A. (2016). Grading: Why you should trust your judgment. Educational Leadership, 73(7), 50-54.
Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. https://doi.org/10.3102/003465430298487
Hayek, A., Toma, C., Oberlé, D., & Butera, F. (2014). The effect of grades on the preference effect: Grading reduces consideration of disconfirming evidence. Basic and Applied Social Psychology, 36(6), 544-552. https://doi.org/10.1080/01973533.2014.969840
Iamarino, D. L. (2014). The benefits of standards-based grading: A critical evaluation of modern grading practices. Current Issues in Education, 17(2), 1-11.
Klapp, A. (2018). Does academic and social self-concept and motivation explain the effect of grading on students’ achievement? European Journal of Psychology of Education, 33(2), 355-376. https://doi.org/10.1007/s10212-017-0331-3
McMorran, C., Ragupathi, K., & Luo, S. (2017). Assessment and learning without grades? Motivations and concerns with implementing gradeless learning in higher education. Assessment & Evaluation in Higher Education, 42(3), 361-377. https://doi.org/10.1080/02602938.2015.1114584
Miller, J. J. (2013). A better grading system: Standards-based, student-centered assessment. English Journal, 103(1), 111-118.
Plaskov, J. C. (2019, October 23). Reimagining college admissions season. The Mastery Transcript Consortium. https://mastery.org/reimagining-college-admissions-season/
Pulfrey, C., Buchs, C., & Butera, F. (2011). Why grades engender performance-avoidance goals: The mediating role of autonomous motivation. Journal of Educational Psychology, 103(3), 683-700. https://doi.org/10.1037/a0023911
Quinn, D. M. (2020). Experimental evidence on teachers’ racial bias in student evaluation: The role of grading scales. Educational Evaluation and Policy Analysis, 42(3), 375-392. https://doi.org/10.3102/0162373720932188
Rinn, A. N., Boazman, J., Jackson, A., & Barrio, B. (2014). Locus of control, academic self-concept, and academic dishonesty among high ability college students. Journal of the Scholarship of Teaching and Learning, 14(4), 88-114. https://doi.org/10.14434/josotl.v14i4.12770
Schneider, J. & Hutt, E. (2014). Making the grade: A history of the A-F marking scheme. Journal of Curriculum Studies, 46(2), 201-224. https://doi.org/10.1080/00220272.2013.790480
Smith, J. K. & Smith, L. F. (2009). The impact of framing effect on student preferences for university grading systems. Studies in Educational Evaluation, 35, 160-167.
Starch, D. & Elliott, E. C. (1912). Reliability of the grading of high-school work in English. The School Review, 20(7), 442-457.
Starch, D. & Elliott, E. C. (1913). Reliability of grading work in mathematics. The School Review, 21(4), 254-259.
Swan, G., Guskey, T., & Jung, L. (2014). Parents’ and teachers’ perceptions of standards-based and traditional report cards. Educational Assessment, Evaluation, and Accountability, 26(3), 289-299. https://doi.org/10.1007/s11092-014-9191-4
Townsley, M. & Varga, M. (2018). Getting high school students ready for college: A quantitative study of standards-based grading practices. Journal of Research in Education, 28(1), 92-112.
Villeneuve, J. C., Conner, J. O., Selby, S., & Pope, D. C. (2019). Easing the stress at pressure-cooker schools. Phi Delta Kappan, 101(3), 15–19. https://doi.org/10.1177/0031721719885910
I shared some student work on Twitter, and it seemed as though some folks were interested in learning more about the concept. First of all, I didn’t come up with this concept at all. I’d seen one-pagers floating around for a while. Some time back, I tweeted asking for help with instructions, and Dianna Minor and Glenda Funk graciously shared their instructions with me. I also found Betsy Potash’s instructions via Cult of Pedagogy and these instructions at Ms. D’s English Fury helpful. I adapted my instructions from these sources. All credit goes to the fine educators who generously shared their ideas and their students’ work. I am indebted to them, and I’m sharing what I did only as a means of paying it forward in case it helps other people.
You can use one-pagers to assess lots of things. I am an English teacher, but I imagine they could be used in just about any subject and at pretty much every grade level, with some adaptations.
What is a one-pager?
A one-pager is a kind of project in which you share your most important takeaways from a text on a single page using text and artwork. You take what you have learned from a text and put the highlights on the page accompanied by art that represents, sometimes symbolically, these highlights and themes.
Why create a one-pager?
One-pagers allow you to mix media, text, and images, which helps you remember details better. It’s brain science. According to Allan Paivio’s dual coding theory, the brain has two channels for processing information: the visual and the verbal. The combination of the two leads to the most powerful results. You will remember more when you’ve mixed language and imagery. One-pagers also offer variety—another way to share your interpretation and analysis of a text. You might be surprised what you will come up with! Plus, they’re fun. [All credit for this rationale goes to Betsy Potash.]
But I am not good at art/don’t like art…
I will share some templates with you that may help, but the important thing to remember here is that you ARE good at art. You might want to draft your one-pager in light pencil before coloring it in, but you will create something pretty amazing. I feel it in my bones. Also, do not use clip art or computer art. Trust me. One-pagers look so much better when they’re your own art.
Okay, so what are the parameters?
A single piece of letter-size paper (or A4 if letter size isn’t available where you are located). You may use colored paper if you have access to it and want to, but it is NOT required.
Work only on one side of the page in portrait or landscape mode.
Include color and patterns*. Think symbolically here. Texture is fine, too.
Fill the entire page with your work. If you have blank space, repeat an element or fill it with one of the optional elements (see below).
Put your first and last name on the back.
Try to be neat with lettering. It helps to draft first. Definitely make sure handwriting is legible.
*I had markers and colored pencils to lend students who needed them.
What kinds of elements should I include?
The following elements are REQUIRED:
The title and author of the book.
Illustrations or symbols that represent the reading. This could be a character, a scene from the text, or symbols that convey ideas expressed in the work.
Choose two or three notable quotes that stand out to you from the text. These could be quotes that make you think or wonder or remind you of something important from the text. Write the quotes on your paper using different colors and/or writing styles. Include the page number and a short analysis of each quote.
Make a personal connection to what you read. What did it mean to you personally? (Examples: “I feel…I think…I know…I wonder…”).
The following elements are optional, but pick at least two:
Create a border that reflects a theme. This can include words, pictures, symbols, or even quotes.
Draw a word cluster around your image. Use these words to highlight the importance of your chosen image. The word cluster may also artistically symbolize the subject matter.
Write a poem about the book, a character, or the theme. If this is particularly challenging, you may choose to compose an acrostic poem using a one-word theme.
Create a hashtag that relates to the text.
Explain how the setting shapes a character in the text.
The extent to which the one-pager demonstrates textual analysis.
Art and text demonstrate textual analysis that offers insightful interpretations and understanding of the text with analysis that goes well beyond a literal level.
Art and text demonstrate textual analysis that offers clear and explicit interpretations and understanding of the text with analysis that goes beyond a literal level.
Art and text demonstrate textual analysis that offers partially explained and/or somewhat literal interpretations and understanding of the text with some analysis.
Art and text demonstrate textual analysis that offers few or superficial interpretations and understanding of the text with little analysis.
The extent to which the one-pager follows the “rules.”
All the “rules” are followed: the work is on a single side of letter or A4 paper, the page is filled, color is used, first and last name are on the back, and the lettering is neat and legible.
Most of the “rules” are followed: one or two minor omissions (see exemplary column).
Some of the “rules” are followed. There are three omissions (see exemplary column).
Few or none of the rules are followed. There are more than three omissions (see exemplary column).
The extent to which all required elements are included.
All required elements are included and addressed in a thoughtful way that demonstrates symbolic thinking, analysis and/or synthesis of ideas, and thoughtful interpretation of the text. Two or more optional elements add depth to the piece.
All of the required elements are included. Elements demonstrate symbolic thinking, analysis and/or synthesis of ideas, and interpretation of the text. Two optional elements add depth to the piece.
Most of the required elements are included. Elements demonstrate developing symbolic thinking, analysis and/or synthesis of ideas, and interpretation of the text. Two optional elements are included.
Some of the required elements are included. Elements demonstrate emerging symbolic thinking, analysis and/or synthesis of ideas, and interpretation of the text. Optional elements may be missing or incomplete.
Back in the day, I sometimes reflected on professional reading on this blog, and sometimes, book clubs resulted. Blogging has fallen by the wayside in favor of Twitter, which makes me sad because sometimes the long-form reflection is better than a tweet thread. The UbD Educators wiki grew out of the reflection I did, and until Wikispaces went defunct, it was a promising project, though I confided to Grant Wiggins that it was hard to find teachers to commit to adding to the wiki. He wasn’t surprised because lack of time makes it difficult. I always say that we make time for the things that are important to us, and this blog is pretty important to me, but I hadn’t made a lot of time for it for some years. I’m going to try to change that, and one thing I want to do is document my thinking as I read Joe Feldman’s Grading for Equity. I joked to a couple of colleagues that I am finally making time to actually read this book, which has been on my radar for a long time, and I realize I should have made the time to read it as soon as it was released because Feldman is citing much of the same research as I am citing in my dissertation. I could have saved myself a lot of searching through the library database!
First of all, I encourage educators to take the quiz How Equitable is Your Grading? on Feldman’s website. If, in the wake of George Floyd’s murder, you are examining your curriculum’s diversity, equity, and inclusion, I think that’s great. I think it’s great if you are engaged in movements to #DisruptTexts and #TeachLivingPoets. But you need to take a hard look at your grading practices, too. If, as Feldman says, you are implementing some equitable practices, such as “responsive classrooms, alternative disciplinary measures, diverse curriculum—but meanwhile preserve inequitable grading,” you are perpetuating inequity in schools.
I’m going to start by using Feldman’s “Questions to Consider” at the end of chapter 1. I’ll just answer the first two and update tomorrow with responses to the remaining three questions. Otherwise, this post will be way too long. Maybe it already is!
What are some deep beliefs you have about teenagers? What motivates and demotivates them? Are they more concerned with learning or their grade?
After over 20 years of teaching mostly teenagers, I have concluded that a lot of adults expect them to be more “adult” because they tend to look more adult. What I mean is they expect that teenagers have developed an internal locus of control. Not even all adults have an internal locus of control. Teenagers tend to still mostly have an external locus of control, which means they are more likely to attribute a poor grade to a teacher’s lack of regard for them instead of a lack of proficiency on their part. I think we need to remember that when we are grading. As such, they might be motivated to earn good grades (carrot) or avoid bad ones (stick), but grades in and of themselves don’t motivate them to learn. I think they do help give students some kind of yardstick they can use to judge their performance, but I didn’t think grades had even this utility until I started doing research. Grades might not communicate what we think or wish they would, but they communicate something. I think students are much more concerned with grades than with learning when they are in classes in which all high-stakes assessments result in grades that cannot be improved through revision and in which all earned grades are averaged together. If, however, they are in a classroom that encourages revision and focuses on proficiency, they focus a lot more on learning. Teenagers actually love to learn things, but the trick is that teachers need to communicate the relevance, and the wrong answer is “I’m the adult, so I say it’s relevant.” And if what you are teaching isn’t relevant, you need to figure out how to Marie Kondo the curriculum.
What is your vision for grading? What do you wish grading could be for students, particularly the most vulnerable populations? What do you wish grading could be for you? In which ways do current grading practices meet those expectations, and in which ways do they not?
Before I started my research, I wanted to eliminate grades as a measure of student learning. There is a movement to do just that, and many schools successfully use other methods for reporting learning, and yes, their students still get into college. I no longer think grades are entirely useless. I think we have just perpetuated inequitable grading for so long that I couldn’t figure out another way aside from burning the whole system down. Now I advocate for proficiency-based grading, and that means that students might revise their work, sometimes several times, in order to reach a level of proficiency in learning content and skills. In almost any aspect of life, we have chances to practice a skill until we master it, and no one says it is unfair. There was a time when every musician we know didn’t know how to play their instrument, when every athlete didn’t know how to play their sport. But we don’t judge their current competence by where they started. I think grading based on reaching proficiency, whenever it happens or however it happens, is much more equitable.
My dissertation is a dissertation in practice, meaning I need to take an action step and evaluate its success. My action step is to create a proficiency-based grading and authentic assessment guide for a pilot group of faculty, to implement the practices therein (along with a focus group), to evaluate the guide’s success and revise it accordingly, and to present the findings to my colleagues. Feldman’s ideas will be invaluable in framing the guide, grounded also in my own research. I am hoping implementing this action step will make grading less of a chore for me, too—I related so much to Feldman’s argument that teachers don’t like grading (p. 5).
What I need to do is figure out a system that is more mathematically sound and use it. I am doing fairly well on most equitable grading practices according to Feldman’s quiz, with the exception of that one. For example, I already:
Don’t weigh homework much. Homework is preparation for class, such as reading and writing. I don’t even really use the homework category in my online grade book for graded work.
Don’t calculate behavior and executive function skills in my grade.
Allow students to revise their work and replace the grade entirely with the new grade.
Don’t subscribe to the idea that grades need to fall on a bell curve or that I need a certain distribution of grades.
Don’t count participation as a grade category. It is part of the rubric in a Socratic seminar.
I do not have students asking me to create homework assignments, and they mostly do the preparation I ask them to do. Students sometimes turn work in late for me, but it doesn’t bother me. Other than that, I don’t feel I miss anything by excluding executive function skills. Students actually work harder knowing the grade can entirely be replaced if the work improves. I don’t subscribe to fears about grade inflation or worries that students have too many high grades, and I find conversations with others who are still hung up here maddening. I have long felt participation was too slippery to calculate, and sometimes students are super engaged but don’t say as much. I still get excellent participation from students without grading it.
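To make the “mathematically sound” point concrete, here is a small sketch of the arithmetic argument against averaging in zeros that Feldman and others make. The numbers are invented for illustration, not from my grade book:

```python
# A toy example (numbers invented) of why averaging a zero into a
# 0-100 scale is mathematically unsound: one missing assignment
# outweighs a semester of A-level work.
scores = [90, 95, 0, 91]  # one assignment entered as a zero

average = sum(scores) / len(scores)
print(f"Average with the zero: {average:.1f}")  # prints 69.0 -- a D

# With a revision policy that replaces the grade entirely, the same
# student's record looks very different:
revised = [90, 95, 86, 91]  # the zero revised and replaced
print(f"Average after revision: {sum(revised) / len(revised):.1f}")  # prints 90.5
```

One zero drags an A student below passing, which is why replacing grades through revision, rather than averaging everything ever earned, changes what the final grade communicates.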
More tomorrow on the first chapter reflection questions. Let me know if you want to “book group” this book.
One of the many reasons I haven’t had much time to blog lately is the fact that I went back to grad school in September. I’m working on my doctorate at Northeastern University. Working full time and going to school has meant all the writing I’ve had time to do has mostly been for school, but it’s been a fantastic learning experience so far. I have learned so much from the reading and writing I have done. I can’t even compare my experience with earning my master’s degree to my experience working on my doctorate, and I’m only sorry I wasted so much tuition money and time on the master’s. Here I’m showing my ignorance, but I didn’t realize one could go right into a doctoral degree program with a bachelor’s degree.
My dissertation in practice is an action research investigation on grading and assessment practices. If you’ve been reading this blog for a while, it’s perhaps not a surprise, as assessment has been an interest of mine for a long time. I have come to the conclusion that grading impedes not only motivation but also learning, as students tend to focus on the grade at the expense of the learning. It’s true that some students don’t find grades to be a motivator, and those students tend to view them more as a stick than a carrot. Whether grades motivate students or not, however, they do encourage students to focus on the wrong thing, and even students who truly want to learn find grades demotivating. Students have told me they are afraid to take risks. They select “easier” options. They try to figure out what the teacher wants to hear and parrot it back rather than think for themselves. All of this is anecdotal—I’ve seen it many times over the years; however, I see no reason why students would be dishonest about their feelings regarding grades.
Going back to school has put me in the same position as my students. The anxiety I have experienced over my grades has been difficult to manage at times. Of course I want to learn, and I’d be lying if I said I didn’t want to please my professors. Even though I’m actually studying the effects of grading and know exactly what is happening to me, I find myself unable to focus only on the learning. I want to earn good grades too badly. It’s utterly ironic on a few levels. I’m actually doing very well, for one thing, and for another, the research is quite clear that grades are subjective, demotivating, and even contribute to poor performance (Bloxham, et al., 2016; Brackett, et al., 2013; Cvencek, et al., 2018; Klapp, 2015). My hunch is it has to do with mindset. I noticed my students relaxed quite a bit once I instituted a liberal revision policy.
One of my classmates mentioned that a professor I will have for a summer course is a hard grader. So naturally, I’ve already started worrying about a class I won’t start for nearly a month. It made me reflect a little bit on reputation. I don’t think I have a reputation for being a hard grader. One person told me my reputation is that my expectations are “reasonable,” and I’ll take it. My students this year seemed to be happy in my classes, and my course surveys revealed they felt cared for and that the choice and agency they had was important for their growth. I relaxed a lot on my own grading practices as a result of the research I have done and because of my own experiences as a student. I truly do not understand the need for a graduate program to use grades.
We know what to do about grading and assessment. I think one reason I was not accepted to another graduate program to which I applied is that my research does not examine a gap in the research. On the contrary, there is plenty of research on grading and assessment, and going all the way back to the 1800s, the research has been fairly clear. And yet, we keep reporting learning by using grades. So even though there is no gap in the research, it’s clear to me that classroom practices haven’t changed as a result of the research, and that’s what I’m interested in: change. We need to do right by our students and fix this problem that has plagued education for far too long.
Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2016). Let’s stop the pretence of consistent marking: Exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 41(3), 466-481. doi:10.1080/02602938.2015.1024607
Brackett, M. A., Floman, J. L., Ashton-James, C., Cherkasskiy, L., & Salovey, P. (2013). The influence of teacher emotion on grading practices: A preliminary look at the evaluation of student writing. Teachers and Teaching, 19(6), 634-646. doi:10.1080/13540602.2013.827453
Cvencek, D., Fryberg, S. A., Covarrubias, R., & Meltzoff, A. N. (2018). Self‐concepts, self‐esteem, and academic achievement of minority and majority North American elementary school children. Child Development, 89(4), 1099-1109. doi:10.1111/cdev.12802
Klapp, A. (2015). Does grading affect educational attainment? A longitudinal study. Assessment in Education: Principles, Policy & Practice, 22(3), 302-323. doi:10.1080/0969594X.2014.988121
I know I am really late to this party, but I just discovered The Great British Bake Off. I have been catching up on each of the seasons available on Netflix. It’s rare for me to actually be able to binge-watch something, but I can watch The Great British Bake Off all day. I find it helps me destress a bit. I love seeing what the contestants come up with. I admit I haven’t watched the American versions. If you have, feel free to chime in here, but my feeling is that it couldn’t quite work the same way with American contestants. One of the best things about The Great British Bake Off is that even though contestants are competing against one another, they support each other and show each other kindness. They even seem happy for others who are named Star Baker or win the competition, and sad to see contestants go. I’m not sure Americans are like that in a competition.
I don’t have this idea fully formed in my head yet, but for the past couple of weeks, I have been wondering what educators can take away from this show. I don’t mean the competition aspect, necessarily, but the structure of the show intrigues me as a learning model.
If you haven’t seen it, each week has a different focus: Bread Week, Pastry Week, French Week, etc. Some of these themes repeat each season, while others don’t. For example, the most recent season available on Netflix included a Vegan Week. Bakers have to display a wide variety of skills and apply what they know about baking to several challenges.
The first challenge in each episode (or week) is the Showcase Challenge. This challenge sets a goal, such as making 24 identical buns, that allows contestants to demonstrate their skills. They know the Showcase Challenge in advance and are allowed to practice recipes at home. The second challenge is the Technical Challenge. For this challenge, contestants do not know the recipe, and often, the judges set really difficult baking tasks for the contestants. They must apply what they know about baking to the challenge because in some cases, they are not given full, precise directions. It’s not uncommon, for example, for the baking directions to just say “bake” without offering baking time or temperature. The final challenge each week is the Showstopper Challenge. For this challenge, contestants must impress by going all out to create something truly amazing that fits the theme. For example, if it’s Cake Week, the judges might ask for a landscape cake with a whole scene in edibles.
I am a bread baker, somewhat new to baking bread as I had always thought it too intimidating. I’ve been baking bread about a year and a half or so. Not too long. I love baking bread. It tastes good, and it provides just the right amount of challenge coupled with simplicity—after all, it’s mostly just flour, water, salt, and yeast. I have had a sourdough starter going for about 15 months. I started watching The Great British Bake Off thinking I would find it entertaining since I like to bake. I didn’t really expect to learn anything from the show, and not because I’m an expert or anything, but mainly because I don’t usually learn much from television or video. I generally have to read books. I actually will read cookbooks cover to cover. However, aside from learning a few things about baking that I didn’t expect to learn, I also noticed the show teaches a few important skills and competencies that it would be good for all students to learn.
First, the show asks contestants to apply their knowledge about a variety of baking skills, from cookies (or biscuits) to cakes to bread to pastries. All aspects of baking are important: the appearance, the flavors, the ability to follow instructions and deliver what is asked. Each week’s three challenges offer an opportunity to demonstrate different skills:
What can you produce within the confines of certain expectations with time to practice?
What can you produce bringing to bear what you know about baking when you are given a challenging task?
What can you make that will really impress?
These skills could be applied to other kinds of learning. What if an art class tried these three different challenges? A Showcase with a chance to paint something you know well? A Technical that challenges you to apply a skill, such as stippling, to create a painting? A Showstopper that challenges you to apply an array of painting skills to create something truly impressive?
What if a writing class gave students a Showcase challenge that allowed them to write in a genre of their choice about a topic? A Technical that gave a topic and challenged students to write in a specified genre? A Showstopper that asked students to write in several different genres on a topic?
These ideas are obviously not fully formed, but I must admit when I watch this show, several things impress me. The contestants take feedback really well and learn from it. They demonstrate a great deal of resilience and dedication to learning. They have to display a wide array of baking skills, probably far more than the average home baker usually knows. As I mentioned before, they are really supportive of each other. I have actually seen several contestants help others when they’re struggling.
I can’t help but wonder what might happen in a classroom that looked a little bit like The Great British Bake Off.
After we viewed the digital stories my students had created this year, I asked students to evaluate themselves using the rubric I had given them. Next year, I will definitely make time to create the rubric with the students in advance. The rubric I have is good, but the students could make it better. On the back of the rubric, I asked students to give me feedback about the project. I wanted to collect some of their feedback here for those who might be thinking about this project and are feeling on the fence. This feedback represents what the students actually said (warts and all).
Don’t change this from being the final exam because it’s an absolutely great way to end the year and it’s really fun. I don’t think anything needs to be tweaked, the timing is perfect, the spacing for due dates is good and the help given is great.
I loved the project and how we could all pick whatever we wanted and got to watch everyones. Don’t have to change anything, it was great.
In all honesty, I think this project is a lot of fun to put together and all the criteria make sense, even when you don’t think you have a story to tell. It fits for everyone, especially with all you can choose from.
I think the idea of this project is awesome. I had a lot of fun with it and finally learned how to use iMovie. I didn’t find anything wrong with the project.
I liked this project. It was very fun and I enjoyed watching the videos at the end. I liked being able to pick your own idea instead of being told what to do. I wouldn’t take anything out. I liked where you checked our script too. It really helped me at least with knowing it was ok.
The project is great! I enjoyed every part and was excited to do it every step of the way. The one part I had difficulties with was the sound aspect. The sites are great [sites I provided for finding public domain and Creative Commons media] with so many options, but I’m not good at picking things like that. Thank you for helping me find the “perfect” one (better than I could have done).
I don’t know how you could improve it. I thought it was well explained and fun. I would keep everything the same.
I don’t think there should be many changes to the project at all. It’s a really good and fun project. I enjoyed making my video and going back to find everything.
You should keep this project next year. I really enjoy doing the digital story.
The project was very clear and I really like how our final was a project. The project helped me become more creative and engaging. Personally, I really like it and nothing should be changed. Also, I learned a lot in this class, and thank you for a great year, Mrs. Huff!
This project was very fun. I enjoyed our own choice of theme. It was even fun looking back at old pictures and reliving my little league life. One thing that did frustrate me was learning to use different applications on my computer. If I was taught throughout the year to use these different sources this project would have been much more enjoyable. Overall a great project.
I have to point out that last feedback came from a student who struggled with the technology to the point of wanting to give up and take a zero. He persevered, and he did a fabulous job in the end. He was very proud of his work. His feedback about using the software earlier and more often is legitimate. Many students tell me this project is the first time they have opened the iMovie and GarageBand applications on their school-issued computers.
I had a lot of fun doing the project, I enjoyed showing where I’m from and I hope my video would inspire someone to visit one day.
I like the project and we have enough time to do it.
A few trends emerge for me from this feedback:
Students seem to love this project, and even those who struggled said it was a great project and should be kept in the curriculum.
Students seemed to feel they had enough time to complete it. I was worried about that because I gave them more time last year.
Students appreciated the agency they had as they created the project: picking the topic and telling the story they wanted to tell was an important reason why they enjoyed the project.
Students felt proud of their work. They didn’t say so in so many words in their feedback to me, but it shone through in the feedback they gave themselves. Here are some snippets:
I am very happy with my music choice and the amount of pictures I chose.
I had a lot of good pictures.
I liked how I had the music start after I said the title.
I liked the pictures.
I thought I had the perfect music and well placed pictures.
I did not have many pictures, but I was able to think of ways to get around lacking pictures.
I paid lots of effort on it and I really enjoy this project.
I did well with the pictures as well as the story.
This project was very challenging for me from the start. After figuring it out things began to come together. Once my voiceover came in I started to enjoy the project.
I think my video has pretty good background music and photos that go along with the voice.
All these comments tell me that the students feel good about what they were able to do. They offered fair criticisms as well. Most of them didn’t feel 100% confident their voiceovers were as good as they could be, but that could also be because they are not used to hearing their own voices and worry about how they sound (most of us feel that way when we hear ourselves on a recording).
This project makes for a great culminating narrative. They worked on narrative writing, and putting their personal narratives together with image and music to tell a story using video was a great way to see what they had learned about telling a story. And as it turns out, they learned a lot. I’m really proud of them.
I have thought for some time that if I ever get myself together enough to write a book in the field of education, my subject would be assessment. It’s probably the issue I think about the most often. It truly bothers me that it’s done so poorly—not just with standardized tests, but also in classroom settings. It’s too big for a blog post, but I will put a few of my thoughts together.
Several years ago, and some of you have been reading this blog long enough to remember, I read Understanding by Design by Grant Wiggins and Jay McTighe. When I read that book, things really clicked for me. I cannot honestly say that I create UbD units for everything I teach, but one aspect of UbD that has really stayed with me is authentic assessment. I don’t give tests, even though UbD says tests are fine in addition to performance tasks. I give quizzes, but rarely with multiple choice, true/false, or other types of purely objective questions. I tend to ask more open-ended questions that require students to tell me what they know about a given topic. Aside from these types of quizzes, the main types of summative assessments I give are writing assignments, discussions, and projects.
Our school is incorporating more project-based learning. Project-based learning is not the same thing as doing projects. I have had to do plenty of projects in school that were more or less busy work and didn’t demonstrate much learning. Those old dioramas come to mind. Quite a few posters come to mind as well. However, I do recall doing some projects as a part of project-based learning that required deeper learning. For instance, in the sixth grade, I created a tour guide for Venezuela. I am sure that my social studies teacher required certain elements, such as tourist destinations, exchange rates, and the like, but what I remember is researching the country and creating the pages in my guide so that my readers could learn everything they needed to know about the country in order to prepare for a visit. I still remember showing the project to my language arts teacher, who told me, “Oh, now I want to go to Venezuela.” I remember doing the work and what I learned because it was an authentic assessment that placed me in the role of a tour guide writer who needed to convince readers to visit a country, and it felt fantastic when my language arts teacher liked the project. My social studies teacher easily could have asked us to write a research report that included the same information, but I doubt I’d still remember the research report more than 30 years later, nor would I remember what I’d learned about Venezuela. The most important thing is that I did all the work. I did the reading and research. I created the tour guide. My teacher must have given me class time, but I recall sitting by myself in the library, with a copy of Fodor’s Travel Guide, encyclopedias, and other books.
One of the reasons I am an advocate for authentic, project-based assessment is that I have seen the students’ engagement in the learning, and I have seen how it helps students to learn and remember more of what they learn. There is a saying that has been bandied around to the point of cliché, but it’s worth sharing at this point:
Some years ago, a student gave me a card that I have cherished. In it, she wrote that she felt the work she did in my class was relevant. To be quite honest, the work I assigned, especially before I became thoughtful about designing for understanding and authentic assessment, was not always relevant. In fact, it often wasn’t. Students should understand why what they are learning is important and what they might do with it in the future. We’re not always great at communicating the importance of the work we assign. We need to reflect on the work we ask students to do. We need to determine what it is that we want students to learn, and we need to plan lessons and assessments that will help the students learn that information. We also need to give students agency and choices. Students should have a role in selecting reading and writing assignments. They should be given opportunities to discuss what they are learning in their reading and writing, too. It is in this way that we can involve students so that they learn.
None of that is to say that we do away with essays or tests, but we need to ask students to apply what they are learning in our classes so that they understand they’re not learning it for a test. I have only scratched the surface and don’t feel I’ve said a whole lot here, but please check out some of my other posts on assessment for more, and of course, more will come, as I can’t seem to leave this topic alone. (See tags and category links below for more on assessment.)
One of my favorite aspects of Grant Wiggins and Jay McTighe’s book Understanding by Design is the real-life unit plan model they describe for a health class. In order to help students learn more about healthy foods and healthy eating, the performance task asks them to design a balanced meal plan for campers that allows for dietary restrictions (such as diabetes). It’s a real world problem that students might encounter: each camp employs a real person who plans menus in just this way. It requires students not only to think about healthy food, but also about variety and appeal as well as certain health issues that may (or perhaps already do) affect them. It’s a great assessment. I think it’s in the same book that students are asked to design the best form of candy packaging: packaging that allows the most candy to be transported while making the best use of space in the truck and still remaining convenient. I have left my copy of the book at school, so you’ll have to forgive me if I don’t remember this exactly right, but I seem to remember that spherical packages would maximize the space in the truck and allow the most candy to be transported, but for obvious reasons, spherical packages are inconvenient.
It reminded me of a real-world problem I heard about when I visited Carolina Day School in Asheville, NC not too long ago. The middle school was considering replacing the long tables in the cafeteria with round tables, but the administration was concerned that they would not be able to fit enough round tables to seat all the students. The assistant principal knew the seventh graders had been learning about area in math, so he gave the problem to them to solve. I don't know what they decided, but I think it's a great way for students to learn about real-world applications of math. I always hear students complain, often about math, that they can't see how they will use the skills in "the real world." Of course, I know they will use the skills in all kinds of ways they may not be able to imagine, but I think teachers don't always give students enough real-world problems to help them understand the relevance of what they're learning. In his last blog post for The Huffington Post, entitled "Best Ideas for Our Schools," Eric Sheninger argues for authentic learning: "In my opinion there is no other powerful learning strategy than to have students exposed to and tackle problems that have meaning and relevancy."
The Weber School’s students recently won first place in the Moot Beit Din competition. Moot Beit Din asks students to apply Jewish texts to current problems. The competition offers students an opportunity to determine in what ways Jewish texts are still relevant as a guideline for modern life and also how they can use these texts to grapple with issues in our society today. In terms of Jewish studies, it’s about as authentic as it gets: not unlike Model U.N. or Mock Trial. Once students participate in these types of activities and describe their experiences, they make connections between what they’re learning and the “real world,” and their excitement is palpable. Just take a look at this video (which features some of Weber’s students):
In many ways, just approaching an assignment differently can turn an activity that may not ask students to solve a real-world problem into one that does. The other day, I was in our school's Learning Center, and I found an assignment left behind by one of our tenth graders. It was based on the chapter of The Great Gatsby in which Nick attends Gatsby's party for the first time. Students were asked to write an article as the gossip columnist for a local New York newspaper, describing the party and including some of the rumors about Gatsby along with speculations of their own. It's a great approach to a traditional summary. Students are asked to recall and predict, which are not necessarily the highest-order critical thinking skills, but are good skills for reading comprehension. If they had been asked to write a summary of the chapter, they wouldn't have enjoyed it nearly as much, nor would they have produced work that was half as fun to read or that approached a real-world situation they might encounter: writing for the kind of authentic audience that reads a newspaper and relies on the writer for information. Students see the relevance of this kind of assignment much more readily than they see the relevance of writing a summary, yet both assignments essentially ask students to use the same summary-writing skills. The main difference is in their approach.
The headmaster of Carolina Day School told me that he felt students should be blogging because there was a ready-made authentic audience in a blog that gave a writer a reason to write beyond earning a grade for a class. They are no longer writing just for their teacher, but also for a larger audience, and more importantly, for themselves. Assessments that ask students to grapple with real world problems don’t necessarily require a huge shift in the kinds of skills and learning that are assessed so much as they require a shift in thinking about how we approach teaching and assessing skills and learning.
Feel free to share some of your ideas for authentic assessments in the comments.