Apr 15, 2011

This morning I spent three hours watching my students write their final exam. It's a strange experience, as I want them all to do well, but know that some will and some won't, for a whole host of reasons. You might think that spending three hours pacing around a room would be dull (and I admit it sometimes can be), but I usually find myself a bit on edge. Stressed is too strong a word, but I definitely feel some nervous tension. Why? One reason is that the students are stressed, which tends to rub off on me. Another is that I always wonder whether I have made the test too difficult or too easy. If all of the students are still there at the very end of a three-hour exam, some on the verge of tears, then I can only conclude that I made it too long and/or too difficult (when I first started teaching, this happened more than once). If they all leave after the first hour, with big smiles and a spring in their step, then I probably made it too short and/or too easy (this doesn't happen too often). I still remember chatting with a senior colleague, when I was a rookie and he was on the verge of retirement, who told me that after all his years of teaching he could never be sure how a particular group of students would do on a test. At this stage of my career, I usually have a fairly good idea, but you never know for sure. I design my exams to take about 2-2.5 hours to complete and I give the students three, so that time is not a factor in their performance. Others may think differently, but I feel I can adequately survey their knowledge of the course in that time, and I know that students appreciate this approach.

Another source of tension comes from the fear that I have made some egregious and undiscovered error when I created the exam.  Sure enough, two minutes after the students started this morning, one of them politely pointed out to me that questions 2 and 3 were identical – aaagh!  I hate it when I do that!  I must have proofread that exam five times before submitting it for duplication, but still managed to miss the mistake.  It wasn’t a huge problem, as excluding one of the duplicates made the exam out of 77 instead of 80, but it still bugs me.  Fortunately, those errors are rare (really!).  At this point, I should mention that, if you are a former, current, or future student of mine, I don’t want you to think that I’m on the verge of a nervous breakdown during every exam – I do manage to keep myself together.  :-)

On a more positive note, the best part of a final exam (for me at least) is when I know I've set a reasonable one, and a student finishes, confidently hands it in, thanks me for the course, and wishes me a great summer. I love it when students do well, and it's so satisfying when it's clear that they liked the course, learned the material, and did well enough on the exam to have a smile on their face at the end.

Feb 15, 2011

This morning, students in my intermediate GIS course wrote their midterm test. While they were writing, I started thinking about the evaluation process and wondering how I could improve it. In my three lecture-oriented undergraduate courses, students are evaluated using a midterm test, a final exam, and a series of lab assignments. My traditional approach has been that the test and exam focus mainly on the concepts and theory discussed in class, while the lab assignments evaluate the students' understanding and use of GIS software. I find that the test and exam work fairly well, in that I am confident I can accurately measure each student's mastery of the material. However, the lab assignments are another story.

Typically, my students get 2-3 weeks to complete each assignment. The assignments are completely digital, including submission and marking (via Blackboard). At the start of each term, I tell students that it is easy to cheat on the assignments, but then go on to explain why that is such a bad idea. While I do mention the university's policy on academic integrity, ethics, the satisfaction gained from doing something on your own, and the penalties if they're caught, what I emphasize most is the practical argument, which I hope will appeal to their logical side if the ethical appeal fails. First, I point out that many of them are taking my GIS courses to gain so-called marketable skills. I then explain that many interviews for GIS jobs include a practical test, where they sit you down in front of a computer and ask you to complete some GIS tasks. What will they do at that point, if they don't have a friend there to help them? Even if there isn't such a test at the interview and they manage to land the job, how far will they get if they have never actually done the work themselves? Beyond this argument, my main incentive for students is that I ask lab-related questions on the tests, to try to mitigate any benefit students may gain by cheating on the labs. A well-designed exam will reward those who have done their own work, and certainly will not reward those who haven't.

This all brings me back to the question of testing. When I took my first GIS course many years ago, part of my final mark was based on a practical lab exam. It was a very nerve-wracking experience, as it consisted of sitting one-on-one with the professor at a computer for 15 minutes while he asked me to complete a set of tasks. I can tell you that I worked really hard to prepare for that test! I did quite well, and have never forgotten it, as I know it was a very effective testing method. Unfortunately, it is not a very efficient one. I was lucky enough to go to a small university with small classes. There were perhaps 15 people in my class, so it was not a huge time commitment for the professor (although still not insubstantial – about 4 hours). In the introductory GIS course that I teach, I had 157 students last term. At 15 minutes per student, it would take almost 40 hours to test this way, not including any time in between. Clearly, this is not a practical evaluation method in this situation. I have thought about using Blackboard in the lab to test them with multiple-choice questions but, considering my students are organized into six different lab sections spread over three days each week, it would mean building quite a large pool of questions if I wanted to offer six different versions of the test (how many ways can I test their ability to create a buffer?). For now, I am sticking with my approach of putting questions on the written tests that cover the practical component, but I am always looking for something more effective. One approach I now use is to make the assignments more open-ended and self-driven, with a full lab report, which really cuts down on cheating (I plan to blog about assignment design in the future). If you have any ideas, or other evaluation methods that you use, I would love to hear about them.
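Just to put rough numbers on that pool-size problem, here is a minimal sketch in Python; the figures are made up for illustration (ten questions per version, no question repeated across versions), not from any real course setup. Even a short test quickly demands sixty distinct questions:

```python
import random

# Illustrative sketch: drawing six non-overlapping versions of a
# multiple-choice lab test from a shared question pool.
# All numbers below are assumptions for the example, not real figures.
QUESTIONS_PER_VERSION = 10   # assumed length of each lab test
NUM_VERSIONS = 6             # one per lab section, as described above

# If no question may repeat across versions, the pool must hold at
# least QUESTIONS_PER_VERSION * NUM_VERSIONS items (here, 60).
pool = [f"Q{i:03d}" for i in range(1, 61)]  # 60 placeholder question IDs

# Draw 60 distinct questions at random, then slice them into six
# non-overlapping versions of ten questions each.
drawn = random.sample(pool, k=QUESTIONS_PER_VERSION * NUM_VERSIONS)
versions = [
    drawn[i * QUESTIONS_PER_VERSION:(i + 1) * QUESTIONS_PER_VERSION]
    for i in range(NUM_VERSIONS)
]

for n, version in enumerate(versions, start=1):
    print(f"Version {n}: {', '.join(version)}")
```

Of course, allowing questions to repeat across versions would shrink the pool, but with the sections spread over three days, students in the later sections could simply hear the questions from friends in the earlier ones.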