Friday, March 27, 2009

To reform or not to reform

From some recent posts, it would certainly appear as though I'm vehemently opposed to teaching reforms. Anders' comments on the aforementioned posts have done little to diminish this impression :-)


Just to clarify: my default attitude towards teaching reforms is very positive. There is no system that doesn't have room for improvement, and a conscious effort to improve teaching, with the aspiration of concomitant gains in student learning and understanding, is definitely something worth striving for. Despite how my posts may appear at times, I'm no gatekeeper of teaching traditions. Traditional teaching practices are valuable in that they represent a standard set of well-defined input and output variables against which to measure alternative approaches. I am very sceptical of any claim of methodological perfection; the best-case scenario for traditional teaching methods is that they represent the best practices of their time, and there is no logical basis for resisting change on the grounds that we simply can't do any better.


So, if I'm pro teaching reform, what's my problem?


In short, it's the apparent complete absence of scientific method behind recent reforms, and the reluctance or inability to objectively quantify results and thus compare - for lack of a better term - teaching paradigms, which over time would lead to iterative improvement of teaching. If a parameter change leads to measurably lower student learning, then don't continue in that direction. Changing one variable at a time is a very simplistic approach to a set of correlated variables as complex as teaching and learning, but even this would be a massive improvement over the "reform by elimination" method we've seen. With all the data available, it would probably be a good thing to use multivariate analysis, but the strong inverse correlation between "politics + soft sciences" and math all but ensures a massive fail if this were put into effect.
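
Just to make the "one variable at a time" point concrete, here's a minimal sketch of what such a measurement could look like - hypothetical Python, with the cohort sizes, score distributions and significance threshold all invented for illustration:

```python
# One-variable-at-a-time sketch: same course, same exam, one parameter
# changed between cohorts. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical exam scores (0-100) before and after a single change.
scores_before = rng.normal(loc=62.0, scale=12.0, size=300)
scores_after = rng.normal(loc=59.0, scale=12.0, size=300)

t_stat, p_value = stats.ttest_ind(scores_after, scores_before)
print(f"mean before: {scores_before.mean():.1f}")
print(f"mean after:  {scores_after.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The decision rule from the text: a measurable drop means you don't
# continue in that direction.
if p_value < 0.05 and scores_after.mean() < scores_before.mean():
    print("Measurable drop in learning - roll the change back.")
```

Trivial as it is, even this level of rigor - change one thing, measure, keep or revert - would beat reform by elimination.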


For all I know, this could be more of a political problem than a methodological one, in that various amendments rooted in nothing but rhetoric and party programs are piled onto the proposals after the teaching professionals are done with them. This, however, would probably be indistinguishable from bad science. The most recent reform is the so-called "Quality Reform", which was rolled out in 2003 under a center/conservative government coalition. Some notable changes resulting from this reform at the university level are: (1) the introduction of mandatory pedagogic training for academic staff, (2) more focus on student evaluation throughout the semester via portfolio assessment rather than by one single yardstick (the exam), and (3) a change in the way education is funded.


(1) Mandatory pedagogic training for faculty members - the so-called PEDUP program. I've written about my experiences with PEDUP in several posts, like here and here, and although I recognize it as a good idea on paper, I'm not sure we'll ever be able to measure its impact. During my PEDUP tenure, the staff refused to answer any questions about how much the implementation of PEDUP had improved teaching and, concomitantly, student learning. The basis for their refusal was their firm and expressed belief that the quality of teacher performance cannot be quantified. Not the ideal attitude for an implementation based on measured discontent with teacher performance and the assumption that improving it would lead to better student performance - which, in turn, is measured, averaged and evaluated like there's no tomorrow.


(2) Portfolio assessment isn't exactly a new concept, despite being touted as such by various politicians. Fact of the matter is, most traditional teaching schemes could be relabeled as portfolio assessment without any change whatsoever, as long as more than one element is used in the evaluation. Interestingly, the number of student cheating cases has increased tremendously since the Quality Reform was introduced - predictably, mostly in forms of assessment other than exams, such as written assignments, where students can lift phrases and even entire texts off of the interweb. However, it wouldn't be fair to put the entire blame for this on the reform, as recent years have also brought more attention to these phenomena, as well as educated teachers in the use of tracking software to combat fraud. The observation still remains, and it needs to be taken into account when comparing forms of evaluation. After all, a grade is supposed to reflect performance on an individual basis.

(3) The funding of education at universities and university colleges has changed from a fixed amount based on the number of students enrolled to funding that follows each student. In other words, universities get paid according to how many students pass. And wouldn't you know it: the number of failing students has dropped significantly since the introduction of the Quality Reform. You think there might be some conflict of interest here? This also effectively introduces a serious covariance into the data set, which makes it impossible to determine any positive effect of mandatory pedagogic courses etc. Not even with a neural network would you be able to untangle the effects of a) introducing changes meant to improve the quality of teaching, learning and evaluation, b) rolling out a system wherein the universities get a cash incentive to let students pass, and c) letting the universities themselves decide how many students pass or fail.
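
To illustrate why that covariance is fatal, here's a toy simulation - not anyone's actual funding model, just invented parameters - where the observed pass rate can't distinguish better teaching from a quietly lowered bar:

```python
# Toy confound simulation: pass rate depends on both student ability and
# a grading threshold the university itself controls. All parameters
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def pass_rate(teaching_boost, threshold):
    """Fraction of simulated students scoring above the threshold."""
    ability = rng.normal(50 + teaching_boost, 15, size=100_000)
    return (ability > threshold).mean()

print("before reform:        ", round(pass_rate(teaching_boost=0, threshold=40), 3))
# Scenario A: teaching genuinely improved, threshold unchanged.
print("better teaching only: ", round(pass_rate(teaching_boost=3, threshold=40), 3))
# Scenario B: teaching unchanged, threshold quietly lowered.
print("lower threshold only: ", round(pass_rate(teaching_boost=0, threshold=37), 3))
```

Scenarios A and B move the pass rate by essentially the same amount, which is exactly why no amount of post-hoc analysis can pull the pedagogic effect out of these data.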

This is not very scientific. Not at all. As a matter of fact, I'd wager that if hereditary factors have any influence on intelligence at all, and the scientific and engineering communities were made up of the same people who constructed Reform 94 and the Quality Reform, humanity would still be sitting in caves, trying to figure out whether rocks are edible.

2 comments:

Anders said...

Just curious: Why this post? Some feedback or similar?

And just to state the obvious: I've never attended any of your lectures or talks (apart from your PhD defence) as far as I can remember, but I've always considered you a guy who puts a lot of effort and thought into your teaching and service to the students. And even though there is a humorous tone in here from time to time, I do believe that it is possible to see that your problem isn't with change in teaching, but with quality.

(1) Mandatory pedagogic training for faculty members

Good on paper, but it all depends on how it's executed. The universities educate scientists, and there is little or no focus on communication skills (at least that's how it used to be back in the day). And since there aren't that many gifted speakers out there to begin with, I'm all for measures that try to improve teaching skills. But they have to be good and add value - which, judging from your earlier posts, PEDUP (or is it FED-UP?) doesn't seem to do at all. So I'm with you on this one.

(2) Portfolio assessment ...

I do agree with you that you can't blame the increase in cheating on the reform. As you said, the growing load of information on the internet makes cheating simpler and more accessible. And the tools to expose cheaters have become more sophisticated. These two factors may have way more impact than the reform. As well as accounting for a general moral decline (which may or may not be there).

An exam is a one-point evaluation, and for several reasons it could be a "performance outlier" for the student. Sure, in most cases it would even out after several exams and years, but with the introduction of the lower-resolution letter grading system, one bad or good grade may carry more impact now than before. So I do see the need for new methods that evaluate more of the overall performance, but it's not easy to come up with a good solution. Also, an exam isn't really representative of what you will encounter in your career after university.

(3)...universities get paid according to how many students pass...

No sh*t that opens up for a helluva bias! The negative impact on the universities' reputation must be minimal compared to the financial gain of sending out a bunch of E-average students rather than flunking half of them.

Wilhelm said...

Just curious: Why this post? Some feedback or similar?

In a rare moment of self-reflection I realized that my posts on the subject don't exactly come across as reform-friendly. Just thought I'd clarify my position.

And even though there is a humorous tone in here from time to time, I do believe that it is possible to see that your problem isn't with change in teaching, but with quality

If so, good. I'm all about doing what works, and I carry no particular torch for Ye Olden Ways. The problem is: every time a reform is implemented, it's not based on science, and the changes are not implemented in such a way that they can be properly measured. But the inability to measure the changes stems not from inherent difficulties, but from poor experimental design.

The universities educate scientists, and there is little or no focus on communication skills (at least that's how it used to be back in the day).

..the problem is recognized and improvements are expected, but in reality little is done to make us better. I honestly wouldn't consider PEDUP to have contributed much towards bringing the level up to where it ought to be.

As well as accounting for a general moral decline (which may or may not be there).

..I'll take "Stuff that's hard to quantify" for 2000, Alex

Sure, in most cases it would even out after several exams and years, but with the introduction of the lower-resolution letter grading system, one bad or good grade may carry more impact now than before. So I do see the need for new methods that evaluate more of the overall performance, but it's not easy to come up with a good solution.

The use of letters rather than, say, percentages is a real problem when it comes to properly distinguishing between students - for a PhD position, for example. I believe an assessment portfolio is very useful in that it adds more data points and protects the student against outlier performances. However, I also believe that the forms of assessment must be overlapping, otherwise you've just transferred the problem from individual performance to assessment divergence.
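
To put a rough number on that outlier protection - a back-of-the-envelope sketch with invented figures, not real grade data:

```python
# The spread of a student's average over n assessments shrinks as
# 1/sqrt(n), so a one-off bad day matters less. Figures are invented.
import numpy as np

rng = np.random.default_rng(1)
true_level = 70.0         # hypothetical "true" performance level
per_assessment_sd = 10.0  # day-to-day variation on any single assessment

for n in (1, 4, 8):
    # Simulate many students' grade bases: the mean of n assessments.
    averages = rng.normal(true_level, per_assessment_sd,
                          size=(100_000, n)).mean(axis=1)
    print(f"n = {n}: spread of grade basis = {averages.std():.1f}")
```

n = 1 is the single-exam case; with 4 or 8 assessments the recorded grade tracks the true level much more closely. Of course, this only holds if the assessments measure overlapping skills - otherwise the variance you removed just comes back as assessment divergence.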

Also, an exam isn't really representative of what you will encounter in your career after university.

True. Writing reports would be much more constructive in that regard. ..enter the plagiarism issues...

No sh*t that opens up for a helluva bias!

You can pretty much interpret the results any damn way you want now that this covariance has been introduced.