Friday, 29 October 2010

What's in a Grade?

Today's weather: High = 17 Low = 10
Partly cloudy

As the November report card approaches, so do the students' application deadlines to various US and Canadian universities. They are enthusiastic about, or should we say obsessed with, getting the high marks needed to enter the institutions of their choice. The first reporting period is the most important, as these marks form the basis of their early admission applications.

So this raises the question of what goes into their first-term grade -- which is really more of an interim grade. In other words, it's gotten me curious, once again, about the process of assessment and evaluation and the details of what goes into that all-important number on their report card.

The term 'grade inflation' gets used a lot these days, in various contexts. The Vancouver Sun, my hometown newspaper, has had much to say about grade inflation as it relates to English marks in the school system I teach in -- the BC offshore system. Obviously this is a hot-button issue with a lot of controversy, but thankfully, as a math teacher, it doesn't affect me as much.

Nonetheless, I thought I'd do some internet research on what exactly grade inflation is and, more importantly, what criteria go into making a grade. As usual, Wikipedia to the rescue:

http://en.wikipedia.org/wiki/Grade_inflation

Another website calls grade inflation a dangerous myth:

http://www.alfiekohn.org/teaching/gi.htm

From what I could conclude from these articles:

(1) Grade inflation is better described as grade compression. That is to say, "inflated" grades mean the marks in a sample set are closer together, and it is more difficult to tell a top student from an average student, and so forth. In other words, the "inflated" grades don't fit a normal distribution; rather, they are bunched up near the high end, with a thin tail of lower marks (see the short sketch after this list).

(2) There is mixed statistical evidence on whether grade inflation / compression is really happening.

(3) If we accept that it's happening, then nobody has been able to show whether the standard for getting an "inflated" A is lower than the standard for a "true" A. In other words, nobody talks about the criteria that are used to judge an A or how a mark is built up. Unbelievably, the debate rages on with that crucial piece of information left out.

(4) Marks that are forced to fit a prescribed pattern, e.g. a normal distribution or a limited number of As, ensure there will be "winners" and "losers" in the class. In other words, if the marks are altered to fit some desired pattern, then the grades describe relative standing in a class rather than how well students meet an objective set of criteria.

(5) Some educators argue that grades themselves are the problem and should be done away with, replaced with qualitative comments. I can't quite agree with that -- and besides, that scenario is not possible in my world.
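To make point (1) concrete, here is a minimal sketch in Python with made-up marks. The rankings are identical in both sets, but in the compressed set the gap between the top student and the class average shrinks, which is exactly what makes compressed grades harder to interpret:

# Made-up marks illustrating grade compression. Same ranking in both
# sets, but the compressed set makes the top student harder to spot.
well_spread = [58, 64, 70, 76, 82, 88, 94]
compressed  = [84, 86, 88, 90, 92, 94, 96]

def report(label, marks):
    mean = sum(marks) / len(marks)
    print(f"{label}: average = {mean:.1f}, top = {max(marks)}, "
          f"top-to-average gap = {max(marks) - mean:.1f}")

report("Well-spread", well_spread)   # gap of 18.0 points
report("Compressed ", compressed)    # gap of only 6.0 points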

My own take is that student marks should reflect, as much as possible, how well students perform against an objective set of criteria. The best source of those is the PLOs, or prescribed learning outcomes, as set by the government.

The idea of altering marks, doctoring them, and so on goes against the entire scientific method and should make any math or science teacher cringe. If the report cards come out and the marks are "too low" or "too high" and are doctored as a result, it is the same as doctoring the data from a scientific lab because the results didn't fit the ideal model. Obviously I'm not in favor of scaling marks or grading on a curve.
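For clarity, this is the kind of adjustment I mean. One common version is a linear shift that forces the class average up to a target value -- the numbers below are purely hypothetical, and this is an illustration of the practice, not an endorsement of it:

# Illustration only: shifting every mark so the class average hits a
# "desired" target. This is the kind of doctoring argued against above.
raw_marks = [55, 62, 68, 74, 80]                  # hypothetical class results
target_mean = 75

actual_mean = sum(raw_marks) / len(raw_marks)     # 67.8
shift = target_mean - actual_mean                 # +7.2 added to every mark

scaled = [round(min(100, m + shift), 1) for m in raw_marks]  # cap at 100
print(scaled)   # [62.2, 69.2, 75.2, 81.2, 87.2] -- average is now 75.0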

Students need to know ahead of time what the criteria for assessment are, and how they can plan to meet the expectations. I put that information in a course outline at the beginning of the year, for example:

Quizzes -- 15%
Tests -- 40%
Projects -- 10%
Homework -- 5%
Term exams -- 30%
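Those weights combine into a single number in the obvious way: a weighted average. Here is a minimal sketch in Python, with hypothetical category averages for one student:

# How the category weights above roll up into one term mark.
# The category scores are hypothetical.
weights = {"quizzes": 0.15, "tests": 0.40, "projects": 0.10,
           "homework": 0.05, "term exams": 0.30}
scores  = {"quizzes": 85, "tests": 78, "projects": 90,
           "homework": 100, "term exams": 72}

term_mark = sum(weights[c] * scores[c] for c in weights)
print(f"Term mark: {term_mark:.2f}%")   # Term mark: 79.55%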

Going into more detail, the tests themselves are based on PLOs, and the lesson plans are geared to cover those PLOs and, in essence, prepare students for the tests. There is usually a review package or some other sort of practice test as well.

Even so, there are multiple reasons why a grade isn't entirely objective. It can be as simple as when the test is scheduled. I once gave the same math quiz to one class before lunch and to another class after lunch. Shockingly, the before-lunch class netted an 82% average, while the after-lunch class only fetched 70%. It was the exact same quiz.

Several students complained afterwards that they were 'too sleepy' when the quiz was scheduled in the afternoon.
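Out of curiosity, one could even check whether a gap like that is plausibly just chance. Here is a quick sketch using a two-sample t-test from scipy; the individual score lists are made up to match those class averages, since only the two means come from the actual quizzes:

from scipy import stats

# Hypothetical individual scores, constructed to average 82 and 70.
before_lunch = [74, 78, 80, 82, 84, 86, 90]
after_lunch  = [60, 64, 68, 70, 72, 76, 80]

t_stat, p_value = stats.ttest_ind(before_lunch, after_lunch)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the gap is unlikely to be chance alone,
# though with real data the two classes could also simply differ.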

Another issue is how two tests can be designed around exactly the same PLOs, yet one turns out much harder than the other. It may not even be that one test is harder; the order of questions, or the psychology of the test, can influence the final mark. For example, a multiple-choice test with a string of B B B B B answers and no C or D can throw people off, even if it is just a simple computer error where the choices were not randomized.
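That last failure is easy to avoid in software. Here is a sketch of shuffling the answer choices per question, with hypothetical question data:

import random

# Hypothetical questions; shuffling the choices keeps the correct
# letter from falling into a predictable pattern like B B B B B.
questions = [
    {"prompt": "2 + 2 = ?",    "choices": ["3", "4", "5", "6"],  "answer": "4"},
    {"prompt": "sqrt(81) = ?", "choices": ["7", "8", "9", "11"], "answer": "9"},
]

for q in questions:
    random.shuffle(q["choices"])
    letter = "ABCD"[q["choices"].index(q["answer"])]
    print(q["prompt"], list(zip("ABCD", q["choices"])), "-> correct:", letter)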

So while a grade may never be completely objective, it should come as close as possible. I think the key is to state the goals ahead of time, let students achieve them, and not move the goalposts midway through the game.
