so what did happen ...
UPDATE plans SET standard = 'better and better' WHERE year = current_year + 1 AND module = 'databases';
This is a tricky business. How do we know how well things went?
I've always been particular about gathering feedback and evaluating how things went. I once observed that there was no point in teaching a module unless we planned some ways to do it better the next time. But once it became clear that this could well be my last attempt at the module, I found myself losing this enthusiasm. Still, evaluation must go on, if only for the benefit of those who follow.
As we have seen, this module came with a "mock" exam. And that is what this section will dwell on. This exam serves (or perhaps I mean "can be made to serve") many purposes. The most obvious is that the students get an indication of how well they are doing in the module. From this, those who teach can find out how effective the learning activities have been.
The results of the mock were not exactly encouraging. The ordering of the material meant that most of the mock was on the database part of the module, and the results there were not good. An inquisition followed.
One suspicion was that the students had not taken the mock all that seriously. No-one was going to admit this, but no-one was going to deny it either. Happily, no permanent harm had been done on either side, so it was possible to stage a lengthy debrief.
This, by its nature, ended up being rather one-sided, but some interesting findings emerged.
- The students wanted things to be precise. This applied specifically to normalisation, but also to other aspects of the teaching. For example:
- Answers to normalisation examples from lectures depended to an extent on assumptions made about the problem domain. Making a different set of assumptions could change the "correct" answer. There was no One True Way that would provide an answer. (The first sketch after this list illustrates this.)
- There are usually several ways to model a problem domain in a relational database. Again, the students would much rather there were One True Way and therefore exactly one "correct" answer.
- The overloading of the word "relation" to mean both the fundamental structure in a relational database and something that was related to something else was difficult for many. This simple point (and who could have forecast it?) had given rise to much confusion.
- Terms such as "lost update" (intended to refer to the standard problem in concurrency control and locking) were also confusing; many students interpreted such questions to be about backup and recovery. (The second sketch after this list shows the intended reading.)
- Some aspects of databases, "deadlock" being the main example, had been covered in subtly different ways in other modules in other contexts. Again, many found this confusing. (The third sketch after this list shows the database version.)
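
For the benefit of whoever teaches this next, a small sketch of the first two points. The domain, table names and column sizes here are invented for illustration (they are not taken from the module's own exercises): the same fragment of a problem domain, normalised two ways depending on the assumption made.

-- Invented example. Assumption A: each module is taught by exactly
-- one tutor. Then module_code -> tutor holds, and the tutor can
-- live happily in the modules table.
CREATE TABLE modules_a (
    module_code VARCHAR(8) PRIMARY KEY,
    title VARCHAR(80) NOT NULL,
    tutor VARCHAR(40) NOT NULL
);

-- Assumption B: a module may be taught by several tutors. The
-- dependency no longer holds, so the "correct" normalised design
-- now needs a separate table for the teaching relationship.
CREATE TABLE modules_b (
    module_code VARCHAR(8) PRIMARY KEY,
    title VARCHAR(80) NOT NULL
);

CREATE TABLE teaches (
    module_code VARCHAR(8) REFERENCES modules_b(module_code),
    tutor VARCHAR(40),
    PRIMARY KEY (module_code, tutor)
);

Both designs are properly normalised; which one is "correct" depends entirely on which assumption you make. No One True Way.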
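The "lost update" itself is the classic interleaving sketched below, again with an invented table and figures: two transactions each read the same value, compute a new one in the application, and write back, the second write silently destroying the first.

-- Invented example: one row of stock, quantity 10.
CREATE TABLE stock (item VARCHAR(20) PRIMARY KEY, quantity INT);
INSERT INTO stock VALUES ('widgets', 10);

-- Two transactions, T1 and T2, both selling widgets, interleaved:
-- T1: SELECT quantity FROM stock WHERE item = 'widgets'; -- reads 10
-- T2: SELECT quantity FROM stock WHERE item = 'widgets'; -- also reads 10
-- T1: UPDATE stock SET quantity = 7 WHERE item = 'widgets'; -- 10 - 3
-- T2: UPDATE stock SET quantity = 5 WHERE item = 'widgets'; -- 10 - 5

-- T1's update is lost: the final quantity is 5 where it should be 2.
-- In most systems, locking the row at the point of reading prevents this:
-- SELECT quantity FROM stock WHERE item = 'widgets' FOR UPDATE;

Nothing here has anything to do with backup and recovery, which was precisely the point the questions were trying to test.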
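And "deadlock" in its database sense, one more invented sketch:

-- Invented example: two accounts.
CREATE TABLE accounts (id INT PRIMARY KEY, balance INT);
INSERT INTO accounts VALUES (1, 100);
INSERT INTO accounts VALUES (2, 100);

-- T1 and T2 each transfer money, touching the rows in opposite orders:
-- T1: UPDATE accounts SET balance = balance - 50 WHERE id = 1; -- locks row 1
-- T2: UPDATE accounts SET balance = balance - 20 WHERE id = 2; -- locks row 2
-- T1: UPDATE accounts SET balance = balance + 50 WHERE id = 2; -- waits for T2
-- T2: UPDATE accounts SET balance = balance + 20 WHERE id = 1; -- waits for T1

-- Neither can proceed: deadlock. The DBMS detects the cycle and aborts
-- one of the transactions. That is a rather different treatment from,
-- say, resource ordering in an operating systems module, hence the
-- confusion.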
These revelations (if that is not too strong a word, and I think it probably is not) showed up some fundamental and widespread gaps in understanding. The structure of the mock exam meant that there was plenty of time to arrange revision classes and to put the students straight on some of the basic ideas.
The results in the database component of the final exam paper were much better than in the mock. Now, of course cynics could argue that this was simply because these results "counted". But surely it is not too optimistic to hope that the mock, debrief and resulting revision helped things along.
Turning to the more formal evaluation requirement, there is the standard Likert-scale-based questionnaire. This is passed round in a lecture towards the end of the module, and the results are duly tabulated and analysed.
My preference is for something less structured and less numeric (I was 4.5 out of 5 "audible" - what does that mean, really?). So I also ask the students to record on a separate sheet their answers to questions along the lines of:
- The best part of the module.
- The worst part of the module.
- The easiest part of the module.
- The hardest part of the module.
- Anything that they think should be left out next year.
- Anything that they think should be brought in next year.
- The advice they would give someone taking this module next year.
- Anything else they think I should know.
This can usually be relied on to produce some insights.
The formal part of the evaluation concludes when the feedback harvested is used as the basis for the formal module review. The module review also includes the statistics required by central QA. Happily these are all calculated automatically. Mean marks and other distributions are available on the (confidential) print-out of exam results. These numbers are not especially secret; students are sometimes told, but only if they ask.
The module review looks forwards and backwards, but this is often of limited use because modules change so frequently. In the past few years the database content has been in a module on its own, in a module with Java programming and software engineering, and now in a different module with computer graphics. It is quite hard to pick out what is special about databases!
artefact
The evaluation form [PDF] shows how the formal part of the process works.