Tuesday, June 10, 2008

Grading exams: Some reflections

'Tis the season, and evaluating students is more or less what I do this week. Besides the PhD defense I'm administering, I've evaluated three different exam forms this semester - oral, take-home and written - and it's apparent to me that there are some recurring flaws in the way students approach these different exams. In addition to the ways in which different aspects of standard pedagogics favor certain exam forms, that is. Moreover, it's quite obvious that students in Norway have a different approach to different exam forms than US undergrads do.

Oral exams:
As part of my mandatory pedagogic training, I performed a survey among undergraduate and PhD students in my field as to their experience with and attitudes towards oral examinations compared to written exams. 70% of the undergrads preferred written exams over oral exams, while the PhD students didn't really care which evaluation form was used. However, the majority of students (both categories) stated that oral exams provided a better vehicle with which to demonstrate their skills. Not only that, but some 40% of the undergrads felt that oral exams are easier than written ones, and more than 90% (both categories) had higher grade expectations for oral exams than for a written equivalent.

So hold up; undergraduate students are of the opinion that oral exams a) allow them to better demonstrate their knowledge, b) are easier and c) yield higher grades but d) they prefer written exams? Say whuut?

Turns out it can be traced back to another set of questions in the survey, related to the level of nervousness/anxiety. Among PhD students it wasn't really an issue, but among undergrads ~70% stated that they were way more nervous before an oral exam than before a written one. How about the effect of anxiety on performance? According to the student panel used here, less than a third thought that the pre-exam anxiety (which for some 85% grew less as the exam started and progressed) negatively affected their performance.

So the major reason students prefer written exams is that they're less nervous before the exam, despite expectations of an easier test, better grades and a more efficient means to demonstrate their acquired skills. Are oral exams really that scary? Now I don't have equivalent data in survey form from my days of teaching in the US, but no way were the NC undergrads as nervous before having to present something orally. If a fear of a very limited form of public speaking is the problem, then it's apparent to me that the US tradition of show&tell starting in kindergarten isn't such a bad idea.

Not saying that oral exams can't be horrible, though. I remember that once upon a time in a place either far away or very close, a post doc or newly hired prof was conducting an oral exam and quite late in the exam, the student was asked to derive an expression from some given initial conditions. The student glibly threw down the initial and final expressions and stated that what was between was "just math". For the next hour or so, the student bitterly regretted this statement as he was forced to make good on the promise and do the actual derivation, seeing as how it was "just math".

Take-home exams:
I'm not such a big fan of these within the framework of PAMS (Physical And Mathematical Sciences), as they tend to get fuzzy. Not to say that they can't be hard - one of the hardest exams I've ever had was the final take-home exam for a PhD course in molecular spectroscopy, a.k.a. poorly hidden quantum mechanics. What I'm saying is that the exam form introduces two factors which can be difficult to correct for: teamwork and ................... (insert your favorite toned-down synonym for plagiarism here). Teamwork can be pretty hard to spot, especially if it involves a quantification problem in which there is only one solution, although I've seen plenty of it from my student days. For more essay-like problems or assignments involving literature surveys, plagiarism can be pretty transparent. If you're reading section upon section of epic broken English followed by some sentences or paragraphs with perfect grammar and sentence structure, then odds are that some sections have been "borrowed" from other sources. How the hell do you correct for that, especially in the cases where it's not that obvious, but there is merely a suspicion of plagiarism? One is supposed to grade mostly based on the knowledge demonstrated by the student, but if it's a case of copy&paste, how much of that knowledge has been assimilated by the student? I hate take-home exams.

Written exams:
By far the most used exam form here, and it's what I'm dealing with at the moment. It's interesting to note how certain catchphrases I've used often during the semester find their way into the exam answers - not necessarily within the right context, but still. If there's a phrase I've used more than ten times during the semester, it will find its way into the answers. Often in quotation marks... I don't know how I should feel about that.

Another thing I'd really like to know is whether there's a correlation between the frequency with which people show up for the lectures and the grade they get. Because while it's really cool to read a good exam paper, it massively sucks to read a compilation of wild guesses and bambi-esque reasoning, 'cause I can't help but wonder if I could've done a better job of explaining the material to the student. Provided, of course, that the student showed up for the lectures and actually put in hours and hours of self-study. Problem is, unless I have data on this correlation, I can't tell if I should approach things differently or if the student is simply a lazy bastard, and without this info, any changes I make to my teaching are going to be more or less blind guesses. And the student evaluations are anonymous, as they should be, so I guess I'm screwed. Oh well.

How 'bout that; I'm reflecting on my personal teaching philosophy. My pedagogic instructors probably couldn't give a damn though, as they certainly weren't interested in any suggestions towards how to quantify the efficacy of their course......


Anders said...

It's hard to judge your own performance, yes. Also, if all/most of the students do a decent to great job on the exam, does that mean that you have a particularly bright class, that you've done a great job teaching or that you've made the exam way too easy?

You're basically screwed anyway. One way of deciding whether you've done a good job or not is to see what kind of questions you get from the students. Some questions show that the student has gotten the main point but is unsure; some show that the student has no clue what you've been doing up there at the blackboard/PowerPoint slide show. Yes, there is such a thing as a stupid question!

Wilhelm said...

Also, if all/most of the students do a decent to great job on the exam, does that mean that you have a particularly bright class, that you've done a great job teaching or that you've made the exam way too easy?

Very good question. The way I approached this problem was to confer with the previous teacher of the course when I made exam sets the first two years, plus compare my exam sets with his previous ones, both with respect to anticipated workload and - for lack of a better term - conceptual difficulty. This ain't exactly a failsafe method, but it's what I came up with. Moreover, one of the questions on the department's standard student survey is "how difficult is this course compared to other courses at the department?". From these indicators I'm about where I want to be, but maybe I'm kidding myself - the available data set isn't THAT reliable after all.

One way of deciding whether you've done a good job or not is to see what kind of questions you get from the students.

Absolutely. By far the majority of questions that I get belong to the deep-learning category, but I also get questions which belong in the "moron" category. You bet there's such a thing as a stupid question.

I'm not complaining about the students, though; I just wish I had a better way of knowing how to improve my style of teaching. Although a little more crowd participation than what's rooted in Norwegian student culture would be a good thing.