On Nov. 12, 2004, Stu and Neelam met with several senior COE (non-CSE) faculty, including the dean, to discuss various aspects of the program, in particular our assessment mechanisms.
Some suggestions that came up were:
There was also a suggestion that the faculty evaluate how well the "X"s in the tables relating individual courses to the program outcomes/Criterion 3 outcomes match students' performance on final exams and similar assessments. But this seems considerably more difficult: these outcomes tend to be stated at a very high level, rather than in terms of specific skills or knowledge of specific topics, so adjusting them based on the performance of a handful of students in one quarter seems inappropriate. Course objectives, on the other hand, are typically stated in terms of such knowledge and skills, so it does seem appropriate to evaluate them by comparing them against student performance. Of course, if a serious problem shows that the course objectives need to be substantially rewritten, that may warrant revising the tables as well.
One could argue that the speed with which computing evolves almost inevitably means that interesting capstone projects will typically involve relatively new ideas. Nevertheless, introducing much new basic or foundational material should not be part of these courses.