Assessment and Feedback Mechanisms


1. Process for assessment and review of Program Educational Objectives (PEOs)

The main constituents of the program are current students, alumni, and the computing industry, the last represented by the CSE Department's Advisory Board (IAB). Input from current students is obtained on all aspects of the program, including the PEOs, at an open Undergraduate Forum held each year in the Spring semester, typically after spring break, and attended by interested students, key faculty members, and advisors from the Advising Office. Discussion of the PEOs, however, is typically only a small component of any given forum.

The IAB convenes for a day-long meeting on campus every year, typically in late April. The board receives a detailed update on recent developments in the department related to research, graduate programs, and undergraduate programs. Once every two years or so, input is sought from the board on the PEOs.

Recent alumni are especially important in the assessment of PEOs since they have intimate knowledge of the program and, at the same time, have experience in industry. Input from alumni is obtained by means of an alumni survey sent to alumni who graduated two or three years prior to the survey date. This approach lets us gather input from alumni whose knowledge of the program is still reasonably current but who also have enough experience in the job market to comment on how well the program prepared them for the profession. The key portion of the survey that relates to PEOs asks the respondent to rate, on a scale of very unimportant through extremely important, the importance of each of the PEOs. Next, the respondent is asked to rate, on a scale of strongly disagree through strongly agree, the extent to which he or she agrees with the statement, "the BS-CSE program adequately prepared me to achieve the PEO".

2. Process for assessment of Student Outcomes (SOs)

Student Outcomes (SOs) are assessed using a three-pronged approach to determine the extent to which the various SOs are being attained. Two of the assessments are direct; the third is indirect. The details of the direct assessments are presented in 2.1 and 2.2 below; details of the indirect assessment appear in 2.3.

2.1 Program Outcomes Achievement Test (POCAT)

POCAT (Program OutComes Achievement Test) is an exit test that all BS-CSE majors are required to take prior to graduation. When a BS-CSE major applies for graduation, generally the semester before the expected date of graduation, he or she is asked to sign up to take POCAT during the next semester. The test is offered once each semester, typically in the fifth or sixth week of the semester. POCAT helps assess outcomes (a), (b), (c), (e), (f), (k), (l), (m), and (n).

Although all CSE students are required to take the test, performance on the test does not affect the grades of individual students in any courses, nor are records retained of how individual students performed on the test. When a group of students takes the POCAT, each student receives a unique code that appears on the student's test, but only the individual student knows his or her code. Students are instructed not to write their names or other identifying information on their tests. Once the tests have been graded, summary results, organized by this code, are posted on electronic bulletin boards so an interested student can see how well he or she did and how his or her performance compared with that of others who took the test. This was a deliberate decision since we did not want students to spend a lot of time preparing for the test; the goal of the test is to help assess the program, not individual students. Initially, there was a concern that if individual students' performance on the test did not affect them in any tangible way, they would not take the test seriously. Our experience with the test since it was instituted has eliminated that concern; most students seem to enjoy taking the test and take it quite seriously.

The questions on POCAT are based on topics from a number of required courses, many of the core-choice courses, and some popular elective courses, covering key topics such as software engineering, formal languages and automata theory, databases, programming languages, operating systems, computer architecture, AI, etc. Each question on POCAT is a multiple-choice question. The ideal question not only has a specific correct answer but also a number of distractors that correspond to common misconceptions that students might have about the particular concept. It is for this reason that the summary results of a POCAT include information not only about the percentage of students who answered a given question correctly but also the percentage of students who chose each of the distractors, in other words, how many students harbored the particular misconception about the underlying concept(s) that each distractor represents. The key goal of the test is to help faculty use the results to identify specific weaknesses in particular courses and improve the curriculum.

There is one other unusual feature of the test worth noting. Each question has, as one of the choices (typically the last one), an answer along the lines of "I don't know". The instructions for the test suggest that the student pick that answer if he or she has no idea what the correct answer is. Since performance on the test has no impact on their records, students who do not know the answer to a question, and know that they do not know, pick this answer. This means we do not have to worry about students guessing and confounding our attempt to pin down the misconceptions they may have.

The grading of POCAT and the production of summary results are mostly mechanical. The faculty members responsible for each question also provide an estimate of the percentage of students who ought to be able to answer the question correctly. All of this information is included in the summary results. This allows the Undergrad Studies Committee (UGSC) to have a well-informed discussion about the extent to which the particular group of students achieved the outcomes related to the various questions, identify potential problem spots in particular courses, indeed in particular topics, and bring them to the attention of the appropriate group of faculty.
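As a purely illustrative sketch of how such a comparison might be automated, the following Python fragment flags questions whose actual percentage of correct answers falls well below the faculty estimate; the data layout, names, and the 10-point threshold are assumptions for illustration, not a description of the department's actual grading tools.

    # Illustrative sketch (assumed data layout): flag POCAT questions where the
    # percentage of correct answers falls well below the faculty estimate.
    # Each entry maps a question id to (percent answered correctly, faculty estimate);
    # the numbers are made up for illustration only.
    results = {
        "Q1": (82.0, 85.0),
        "Q2": (41.0, 70.0),
        "Q3": (63.0, 60.0),
    }

    FLAG_MARGIN = 10.0  # assumed threshold, in percentage points

    def flag_problem_questions(results, margin=FLAG_MARGIN):
        """Return ids of questions whose performance trails the estimate by more than margin."""
        return [qid for qid, (actual, expected) in results.items()
                if expected - actual > margin]

    print(flag_problem_questions(results))  # -> ['Q2'] with the sample data above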

The result page for each POCAT includes three tables. The first lists, for each question, the particular answer (including the "I don't know" one) each student picked for that question, the percentage of students who answered any given question correctly, as well as the percentage of questions that a given student answered correctly. The second table lists, for each question and each possible answer to the question, the number of students who picked that answer; there is also a summary line that specifies what percentage of students answered the question correctly. The third table lists, for each of the SOs, the average level of achievement of that SO. This is computed on the basis of the SOs that each question is related to and the number of students who answered that question correctly. Of these, the second table is perhaps the most valuable since it allows UGSC and relevant faculty to identify which particular misconceptions students most commonly hold concerning a given topic and, hence, arrive at possible improvements to address the problem. From the point of view of the individual student, the first table is more interesting since it allows the student to compare his or her performance on various topics with that of peers.
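The following Python sketch shows one way the second and third tables might be derived from raw responses; the data structures, the question-to-SO mapping, and the simple averaging rule are assumptions for illustration and do not reproduce the department's actual scripts.

    from collections import Counter

    # Illustrative, assumed data: each student's chosen answer per question,
    # the answer key, and a hypothetical mapping from questions to SOs.
    responses = {
        "s1": {"Q1": "B", "Q2": "C"},
        "s2": {"Q1": "B", "Q2": "E"},   # "E" stands for the "I don't know" choice
        "s3": {"Q1": "A", "Q2": "C"},
    }
    answer_key = {"Q1": "B", "Q2": "C"}
    question_sos = {"Q1": ["a", "b"], "Q2": ["b", "k"]}

    # Second table: distribution of answers per question, plus percent correct.
    def answer_distribution(responses, answer_key):
        table = {}
        for qid, correct in answer_key.items():
            counts = Counter(student[qid] for student in responses.values())
            pct_correct = 100.0 * counts[correct] / len(responses)
            table[qid] = (dict(counts), pct_correct)
        return table

    # Third table: average achievement per SO, taken here as the mean
    # percent-correct over all questions mapped to that SO (a simplification).
    def so_achievement(distribution, question_sos):
        per_so = {}
        for qid, (_, pct_correct) in distribution.items():
            for so in question_sos[qid]:
                per_so.setdefault(so, []).append(pct_correct)
        return {so: sum(vals) / len(vals) for so, vals in per_so.items()}

    distribution = answer_distribution(responses, answer_key)
    print(distribution)
    print(so_achievement(distribution, question_sos))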

2.2 Rubrics in CSE 2501/Phil 1338 and Capstone Design Courses

The second direct assessment mechanism is a set of three rubrics that allow us to assess the extent of achievement of a number of outcomes. The first rubric is for assessing the extent to which CSE 2501/Phil 1338 contributes to outcomes related to communication skills, ethical and professional issues, etc. For each of these, several dimensions are defined by the rubric and, for each dimension, four levels of achievement are specified. The second rubric is for use by instructors of the capstone design courses to help assess the extent of student achievement of the key student outcomes that these courses contribute to. The first four dimensions address problem formulation, design approach, implementation, and other factors such as the use of appropriate tools; again, for each dimension, the rubric specifies four levels of achievement. The last three dimensions deal with the effectiveness of the teamwork, the effectiveness of the team's written documentation, and the effectiveness of oral presentations. One point worth noting is that the effectiveness of oral communication here is, in part, a reflection of the team's effectiveness since the presentations are done by the team as a whole.
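As a concrete, purely illustrative way of recording such rubric scores, the sketch below represents the capstone rubric's dimensions with four achievement levels and averages an instructor's ratings for one team; the dimension names, the 1-4 scale, and the averaging step are assumptions, not the wording of the actual rubrics.

    # Illustrative sketch (assumed structure): a capstone rubric with several
    # dimensions, each scored on four achievement levels (1 = lowest, 4 = highest).
    DIMENSIONS = [
        "problem formulation",
        "design approach",
        "implementation",
        "use of appropriate tools",
        "teamwork effectiveness",
        "written documentation",
        "oral presentation",
    ]

    def average_rating(ratings):
        """Average a team's per-dimension ratings; every dimension must be rated 1-4."""
        assert set(ratings) == set(DIMENSIONS), "rate every dimension"
        assert all(1 <= r <= 4 for r in ratings.values()), "ratings are on a 1-4 scale"
        return sum(ratings.values()) / len(ratings)

    # Hypothetical ratings for one team:
    team_ratings = {d: 3 for d in DIMENSIONS}
    team_ratings["implementation"] = 4
    print(average_rating(team_ratings))  # -> ~3.14 with the sample ratings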

One important activity of each capstone course is the poster session. Each student team from each of the capstone courses offered that semester is expected to present its work at a public poster session, which typically also includes demos of the team's prototype system. The rubric for use at the poster session includes essentially the same dimensions as the previous rubric but with two changes. First, given that visitors will be looking at the team's poster and interacting with the team members at the same time, it seemed appropriate to combine the two dimensions of that rubric related to communication effectiveness into a single dimension for this rubric. Second, and perhaps more substantive, rather than specifying distinct levels of achievement for each dimension, we decided to specify a set of characteristics corresponding to a high level of achievement for the given dimension and have the visitor determine the extent to which he or she agreed that the given team and its poster demonstrated those characteristics.

2.3 Exit Survey

Prior to graduation, BS-CSE majors are required to complete an anonymous exit survey; in fact, students complete the exit survey and take the POCAT in the same session. There are two parts to the survey. The first part asks the respondent, for each student outcome, to rate its importance on a scale of very-unimportant/somewhat-unimportant/somewhat-important/very-important, and to indicate how strongly he or she agrees with the statement "this student outcome has been achieved for me personally" on a scale of strongly-disagree/moderately-disagree/slightly-disagree/slightly-agree/moderately-agree/strongly-agree. In averaging the responses, we attach weights of 0%, 33%, 67%, and 100% to the four possible importance ratings, and weights of 0%, 20%, 40%, 60%, 80%, and 100% to the six possible levels of achievement. The second part of the survey asks students to briefly respond to two questions. The first asks, "What single aspect of the CSE program did you find most helpful? Explain briefly." The second asks, "What single change in the CSE program would you most like to see? Explain briefly." These questions, although not directly related to specific student outcomes, are naturally important to students and provide a good lens through which to view the program and identify possible improvements.
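To make the averaging rule concrete, here is a minimal Python sketch using the weights stated above; the response encoding, variable names, and sample data are illustrative assumptions rather than the actual survey-processing code.

    # Weights stated above: four importance categories and six agreement levels.
    IMPORTANCE_WEIGHTS = {
        "very-unimportant": 0.00,
        "somewhat-unimportant": 0.33,
        "somewhat-important": 0.67,
        "very-important": 1.00,
    }
    AGREEMENT_WEIGHTS = {
        "strongly-disagree": 0.0,
        "moderately-disagree": 0.2,
        "slightly-disagree": 0.4,
        "slightly-agree": 0.6,
        "moderately-agree": 0.8,
        "strongly-agree": 1.0,
    }

    def weighted_average(responses, weights):
        """Average a list of categorical responses using the given weight table."""
        return sum(weights[r] for r in responses) / len(responses)

    # Hypothetical responses for a single student outcome:
    importance = ["very-important", "somewhat-important", "very-important"]
    achievement = ["strongly-agree", "moderately-agree", "slightly-agree"]
    print(f"importance:  {weighted_average(importance, IMPORTANCE_WEIGHTS):.0%}")   # 89%
    print(f"achievement: {weighted_average(achievement, AGREEMENT_WEIGHTS):.0%}")  # 80%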

3. Feedback Mechanisms

A primary feedback mechanism is the Annual Undergraduate Forum. The forum is held in the middle of the spring semester and is attended by interested students, key faculty members, and staff from the Advising Office. Announcements about the forum are made via student mailing lists and other channels to ensure wide participation. Following the forum, a summary of the discussion is posted on the newsgroups by the chair of the Undergraduate Studies Committee. This occasionally leads to further extended discussions in which students (including those who could not attend the forum) express their opinions and ideas about the program. Reports from the forums of the last several years are maintained on a site available to students and faculty.

A second feedback mechanism is the departmental advisory board and its annual meeting. As noted earlier, the chair of UGSC makes a presentation about the program during this meeting and seeks feedback on all aspects of the program from the advisory board. Since some members of the board are alumni of the program (or of the department, having obtained a graduate rather than an undergraduate degree), and several of them serve long terms on the board, they tend to have a reasonably good understanding of both the program and the culture of the department. These factors, combined with their long experience, allow them to offer well-considered suggestions for changes and improvements.

The results from the various assessment mechanisms, as well as the ideas generated at the annual forum and the advisory board meetings, are discussed at regular UGSC meetings, where possible changes to the program are considered. UGSC includes faculty members, especially those heavily involved with required and popular undergraduate courses; members of the Advising Office; and one or more student representatives. Thus the discussions tend to account for a variety of viewpoints.
