Assessment Results
- POCAT results
- CSE 2501/Phil 1338: Rubric. Results: 1338 (Au '16); 2501 (Sp '17).
- Junior project course: Rubric. Results: Spring '17 (3901).
- Capstone courses, assessment by course instructor: Rubric (pdf). Results: Spring '17; Autumn '16.
- Capstone courses, assessment of poster session: Rubric. Results: Autumn '17 (details); Spring '17; Autumn '16.
Ideas for possible improvement based on the results thus far:
- Consider changing the scale for each dimension from "Strongly
Agree" etc. to 1 through 10, with 10 denoting "exemplary performance",
5 denoting "satisfactory (average?) performance", 3 denoting
"minimally acceptable performance", and 2 and 1 denoting
"unacceptable performance".
- Change the term "sponsor" to "client" in dimensions 1 and 2,
since projects may not have an external sponsor. For example,
in CSE 5914 (the Watson course), teams are required to come up with specific
proposals for building a system that would exploit Watson's capabilities
to solve an interesting/practical/novel problem; the teams are then required to
pitch their proposals to the instructor and the class as a whole to convince them
that what they are proposing is worth doing, and they may have to revise their
proposals based on the resulting feedback. The term "client" seems a more
accurate description of this relationship. CSE 5912 (the Games capstone)
is similar: teams are required to come up with ideas for specific games
that are interesting/novel and to pitch their ideas to the instructor and class
before they are approved. A new version of the rubric that includes these
changes has been created and is available here.
- All the courses stress SE processes, so the "Other factors"
dimension, which mentions "standard process", seems reasonable (with the
understanding that "standard process" may mean slightly different
things in different courses).
- It was noted that the games projects have considerations
(such as a "coolness factor") that may apply to the other courses
to varying degrees.
We will try to come up with language to add to the rubric that is
"generic" (rather than applicable only to the games and Watson
courses).
- Teams need to be clearly informed of what to include in their
posters and how to prepare effective posters. To make this work well
in all sections of all capstone courses, in addition to the instructor
talking about this in class, it would help to put together a set of
points (possibly as a set of slides that instructors could adapt as
needed and use when they discuss this in their classes) and a set of
links to useful online resources.
Al Cline has volunteered to come up with a first version of this.
- Follow-up to the above: Al has come up with a
guidelines document that should help students prepare an effective poster and
present their work effectively to poster-session visitors;
a .pdf version is here.
- Doing an effective assessment of a poster, a team, and its
project takes time, so an evaluator cannot be expected to do more than
4 or 5 assessments during the poster session. That means it is not
possible for a few evaluators to assess all the posters in this manner.
In Spring '17, we had each of the capstone course instructors assess
roughly ten posters; in our pilot assessment in Au '17, we had the two
faculty who did the assessment (Cline and Morris) each assess roughly
fifteen posters. In the future, we will assign 4 or 5 posters to
each individual doing the assessment. That should result in roughly
one assessment per poster and should suffice.
- Exit-survey results:
- Alumni-survey results (these surveys are conducted every other year):
- Annual forum: Reports on the annual student forum from the last several
years are available
here.
- Some recent improvements: Complete details of improvements based on the assessment
results may be found by going through the minutes of the meetings of the
Undergraduate Studies Committee (UGSC); here we list a few recent ones: