Direct Assessment and Evaluation of BS-CSE Program Outcomes


This page is currently under major reconstruction.
The previous version of this page is available here; the assessment model it describes is complete but somewhat outdated.
Terminology and Background:
ABET stresses the importance of direct assessments and of regular evaluation of the assessment results to effect program improvements. ABET defines direct assessment of outcomes as the assessment, by qualified people, of the degree to which students achieve the various program outcomes by the time of their graduation, based on students' actual performance on specific tasks related to those outcomes (such as answering specific technical questions or their work on specific projects), rather than on opinion surveys, student self-assessments, and the like (which are considered indirect assessments). Evaluation means the analysis, typically by faculty, of the results of these (as well as indirect) assessments to identify potential problems in the program and to propose changes in the program--including, possibly, changes in the assessments or the assessment methods--to address those problems. These definitions should be kept in mind when reading the details below and in the related pages.

We use a carefully designed approach to the assessment and evaluation of the program outcomes for our BS-CSE program. This page describes our approach and contains links to the assessment tools, the assessment results, and the evaluations. We first list our outcomes and explain their classification into three groups. We then describe the direct assessment methods used for each group and how we evaluate the assessment results, followed by the indirect assessment methods we use and the evaluation of their results.


Notes: Include a section toward the end summarizing the indirect assessments and their evaluation.
1. Program Outcomes: The BS-CSE page of the on-line brochure for CSE Undergrad Programs contains full details of the program, including both the program's objectives and its outcomes, as well as information about how the objectives and outcomes are established and revised. The current set of outcomes, based on those specified by the Engineering Accreditation Commission (EAC) and the Computing Accreditation Commission (CAC) of ABET, is as follows.

Students in the BS-CSE program will attain:

  (a) an ability to apply knowledge of computing, mathematics (including discrete mathematics as well as probability and statistics), science, and engineering;
  (b) an ability to design and conduct experiments, as well as to analyze and interpret data;
  (c) an ability to design, implement, and evaluate a software or a software/hardware system, component, or process to meet desired needs within realistic constraints such as memory and runtime efficiency, as well as appropriate constraints related to economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability considerations;
  (d) an ability to function on multi-disciplinary teams;
  (e) an ability to identify, formulate, and solve engineering problems;
  (f) an understanding of professional, ethical, legal, security and social issues and responsibilities;
  (g) an ability to communicate effectively with a range of audiences;
  (h) an ability to analyze the local and global impact of computing on individuals, organizations, and society;
  (i) a recognition of the need for, and an ability to engage in, life-long learning and continuing professional development;
  (j) a knowledge of contemporary issues;
  (k) an ability to use the techniques, skills, and modern engineering tools necessary for practice as a CSE professional;
  (l) an ability to analyze a problem, and identify and define the computing requirements appropriate to its solution;
  (m) an ability to apply mathematical foundations, algorithmic principles, and computer science theory in the modeling and design of computer-based systems in a way that demonstrates comprehension of the tradeoffs involved in design choices;
  (n) an ability to apply design and development principles in the construction of software systems of varying complexity.

We have classified the above outcomes into three groups: technical skills (Group 1), professional skills (Group 2), and societal issues (Group 3).

Outcome (f) falls under both the "technical skills" group and the "societal issues" group. It falls under the former because of the importance of the ACM Code and the technical factors underlying it, which students must understand in order to achieve this outcome. It falls under the latter because of the general ethical considerations that students must grasp in order to achieve this outcome. Arguably, it also falls under the "professional skills" group.

2. Direct Assessment Methods and Evaluation of Assessment Results

2.1 Group 1 outcomes: Technical skills: The key (program-level) assessment tool we use to evaluate the degree of achievement of outcomes in the first group is POCAT (Program OutComes Achievement Test), an exit test that all BS-CSE majors take prior to graduation. When a BS-CSE major applies for graduation, generally three quarters before the expected date of graduation, he or she is asked to sign up to take POCAT. The test is offered once each quarter, typically in the third or fourth week of the quarter.

Although all BS-CSE students are required to take the test, performance on the test does not affect the grades of individual students in any courses, nor are records retained on how individual students performed on the test. When a group of students takes POCAT, each student receives a unique code that appears on that student's test, but only the individual student knows his or her code. Once the tests have been graded, summary results, organized by this code, are posted on electronic bulletin boards so an interested student can see how well he or she did and how his or her performance compared with that of others who took the test. This was a deliberate decision, since we did not want students to spend a lot of time preparing for the test. The goal of the test is to help assess the program, by assessing the extent to which students have acquired and internalized the knowledge and skills associated with the various outcomes of the program, not to assess individual students. Initially, there was a concern that if individual students' performance on the test did not affect them in any tangible way, they would not take the test seriously. Our experience with the test since it was instituted has eliminated that concern; most students seem to actually enjoy taking the test and take it quite seriously.

The questions on POCAT are based on topics from a number of the required courses and the most popular high-level elective courses, covering key areas such as software engineering, formal languages and automata theory, databases, programming languages, computer architecture, algorithm analysis, AI, and computer graphics. Each question is multiple-choice, with typically two or three questions in each topic area. But they are not the kind of questions one might find in, say, the final exams of these courses. Instead, they are more conceptual and are designed to test how well students understand key concepts from across the curriculum; in other words, the questions attempt to evaluate how well a student is able to relate concepts presented in one course to problems and concepts presented in a later, related course. For example, a question may probe whether a student is able to apply the concept of finite state machines (from the course on automata theory) to the problem of designing a tokenizer for a compiler for a programming language; another may probe whether a student is able to apply ideas introduced in the algorithm analysis course to evaluate alternative ways in which a database might be used to solve a particular problem.
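
As a purely illustrative sketch (not an actual POCAT item), the following shows the kind of cross-course connection such a question might probe: a tiny tokenizer driven by an explicit finite state machine. The states, token classes, and function name here are our own choices for illustration.

    # Minimal sketch, not an actual POCAT question: a tokenizer for identifiers
    # and integers driven by an explicit finite state machine, illustrating the
    # automata-theory-to-compilers connection described above.
    def tokenize(text):
        """Split text into (kind, lexeme) pairs using a three-state FSM."""
        tokens, state, lexeme = [], "START", ""
        i, n = 0, len(text)
        while i <= n:
            ch = text[i] if i < n else " "   # sentinel blank flushes the final token
            if state == "START":
                if ch.isalpha():
                    state, lexeme = "IDENT", ch
                elif ch.isdigit():
                    state, lexeme = "NUMBER", ch
                i += 1                       # other characters are simply skipped
            elif state == "IDENT":
                if ch.isalnum():
                    lexeme += ch
                    i += 1
                else:                        # identifier ends; emit it, do not consume ch
                    tokens.append(("IDENT", lexeme))
                    state, lexeme = "START", ""
            elif state == "NUMBER":
                if ch.isdigit():
                    lexeme += ch
                    i += 1
                else:                        # number ends; emit it, do not consume ch
                    tokens.append(("NUMBER", lexeme))
                    state, lexeme = "START", ""
        return tokens

    print(tokenize("x1 + 42"))   # [('IDENT', 'x1'), ('NUMBER', '42')]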

The ideal POCAT question not only has a specific correct answer but also has distractors chosen to correspond to common misconceptions that students tend to have about the particular concept. It is for this reason that the summary results of a POCAT include not only the percentage of students who answered a given question correctly but also the percentage who chose each of the distractors--in other words, how many students harbored each of the misconceptions about the underlying concept(s) that the distractors represent. Each question is typically the result of discussions among faculty involved with the corresponding courses. The questions on the test are chosen in such a way that there are one or more questions related to each Group 1 outcome. Indeed, because of the nature of the questions and of the outcomes, many of the questions tend to be related to more than one outcome in the group.

Given the nature of the questions on POCAT, the grading of the tests is essentially mechanical. The faculty members responsible for each question also provide an estimate of the percentage of students who ought to be able to answer the question correctly, as well as the particular outcomes the question is related to. All of this information is included in the summary results. This allows the department's Undergraduate Studies Committee to have a well-informed discussion about the extent to which the particular group of students has achieved these outcomes, to identify potential problem spots in particular courses--indeed in particular topics--and to bring them to the attention of the appropriate group of faculty. The relevant faculty can then consider possible changes in various courses to address the problem, and a number of such improvements have been made in courses across the program.
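
To illustrate how such summary results might be tabulated, here is a small sketch of our own (the department's actual summary format is not reproduced on this page) that computes, for each question, the percentage of students choosing the correct answer and each distractor, alongside the faculty's prior estimate; question ids, options, estimates, and outcome tags below are made up.

    # Illustrative sketch only; all data values are hypothetical.
    from collections import Counter

    questions = {
        "Q1": {"correct": "B", "expected_pct": 70, "outcomes": ["a", "k"]},
        "Q2": {"correct": "D", "expected_pct": 55, "outcomes": ["c", "m"]},
    }
    answers = [                      # one dict per (anonymous) student answer sheet
        {"Q1": "B", "Q2": "A"},
        {"Q1": "C", "Q2": "D"},
        {"Q1": "B", "Q2": "D"},
    ]

    for qid, meta in questions.items():
        chosen = Counter(sheet[qid] for sheet in answers)
        total = sum(chosen.values())
        correct_pct = 100 * chosen[meta["correct"]] / total
        print(f"{qid} (outcomes {', '.join(meta['outcomes'])}): "
              f"{correct_pct:.0f}% correct vs. {meta['expected_pct']}% expected")
        for option, count in sorted(chosen.items()):
            if option != meta["correct"]:
                print(f"  distractor {option}: {100 * count / total:.0f}% of students")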

POCAT has been in existence since early 2006. Originally, the test consisted of one or two questions from each of the high-level required courses (such as CSE 625, 655, and 660) and a very small number of questions from a couple of the popular elective courses. Moreover, essentially the same questions were used in each offering of the test. Given that student performance did not have any effect on the individual student, this did not raise any concerns with respect to the "security" of the test; but, after several test cycles, only a minimal amount of new information was generated by the test results. Hence we have instituted a number of changes over time, with the current (as of Fall 2010) approach being as follows:

In summary, the POCAT approach requires only modest resources to administer and hence is sustainable over the long term; it is well accepted by students in the program; it allows us to assess not just individual courses but groups of related courses and their contribution to various program outcomes; the discussions in the Undergraduate Studies Committee and among the faculty most involved with particular courses and course groups, documented in the evaluation page, allow us to critically evaluate the assessment results; and it has helped the CSE program identify specific improvements that increase the extent to which particular outcomes are achieved. Recent tests, summary results, and a summary of the discussions evaluating the results are all available (restricted access). Summaries of the portions of these evaluations related to the courses in the various course groups are also included in the corresponding course group reports (CGRs) that each faculty group (responsible for an individual course group) prepares once every two to three years, or more frequently if there are substantial changes in the courses in the group, since these evaluations provide part of the rationale for changes in the courses. Faculty groups also depend on other evaluations, such as student performance on final exams in individual courses, in arriving at changes to their courses, and those evaluations are also part of the CGRs. Overall, POCAT and the associated evaluation approach have proved extremely valuable for assessing and evaluating, at the program level, the outcomes in the technical skills group.

2.2 Group 2, 3 outcomes: Professional skills, Societal issues:
By their nature, both the achievement of the outcomes in these two groups and their assessment require different approaches than those used for the Group 1 (technical) outcomes. In general, a variety of courses from across the curriculum, including several non-CSE courses, contribute to student achievement of a number of these outcomes. At the same time, both to ensure that students engage in activities that help achieve these outcomes in a CSE context, and to help assess the extent to which they achieve them, we have adopted the following approach.

CSE 601, the 1-credit required course on social, ethical, and professional issues in computing, and each of the capstone design courses include a number of activities that are tailored to these outcomes while, at the same time, being well integrated with the courses. CSE 601 requires each student to explore a new or recent product, practice, or event; consider the impact it may have in a "global, economic, environmental, and societal context" (outcome (h)); consider any relevant contemporary issues (outcome (j)) as well as ethical and professional issues (outcome (f)) related to the product, practice, or event; and present the findings in a 3-4 page paper. CSE 601 includes this activity in order to further develop the degree of student achievement of these outcomes. Naturally, the activity also contributes to the development of written communication skills (outcome (g)). In addition, the course requires students to make oral presentations on topics related to social, ethical, and professional issues in computing. Suitable rubrics, each with dimensions corresponding to the component skills of these outcomes, have been developed to assess the extent to which students are achieving these outcomes as exhibited by their performance in these activities.

Each of the capstone design courses has a number of activities that contribute to the achievement of these outcomes. The central activity in each of these courses is, of course, a quarter-long team project that contributes strongly to outcome (d). The courses require student teams to make a number of oral presentations and to produce suitable written documentation (including such things as storyboards) accessible to clients, project managers, peers, etc., thereby contributing to outcome (g). The courses also require students to engage in an activity similar to that in CSE 601, researching a product or practice, typically one relevant to the team's project, and writing a paper or making presentations about it, thus contributing to (i). Often the team projects and/or the researched tools raise ethical, legal, and related questions, thus contributing to (f), as well as questions about the impact of computing on individuals and society and related contemporary issues, thereby contributing to (h) and (j). More details of the capstone design courses are available in a separate page. As in the case of CSE 601, suitable rubrics, each with dimensions corresponding to the component skills of these outcomes, have been developed to assess the extent to which students are achieving these outcomes as exhibited by their performance in these activities.

The fact that all capstone design courses use a common set of rubrics, and that students take their capstone courses near the end of their programs, together ensures that we have a uniform way of assessing these outcomes at the program level. Until recently, however, the evaluation of the assessment results and the identification of possible improvements were done by the individual instructors of these courses. While this works, in order to allow faculty to share these ideas and to strengthen the program-level assessment and evaluation of these outcomes, we have recently instituted a mechanism for discussing the assessment results in the Undergraduate Studies Committee and recording the evaluation results, including ideas for improvements, so that they are easily accessible to all capstone course instructors and other interested faculty.
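
As a purely hypothetical sketch of how rubric-based scores from different capstone sections could be pooled into a per-outcome, program-level summary (the actual rubrics and their dimensions are not reproduced on this page), one might tag each rubric dimension with the outcome(s) it addresses and average the scores by outcome:

    # Hypothetical sketch: the dimension names, the 1-4 scale, and the mapping to
    # outcomes are our assumptions, not the department's actual rubrics.
    from statistics import mean

    DIMENSION_TO_OUTCOMES = {
        "teamwork": ["d"],
        "oral_presentation": ["g"],
        "written_documentation": ["g"],
        "ethical_analysis": ["f"],
        "impact_and_contemporary_issues": ["h", "j"],
    }

    # One record per student per dimension: (capstone section, dimension, score on a 1-4 scale)
    scores = [
        ("capstone-A", "teamwork", 4),
        ("capstone-A", "oral_presentation", 3),
        ("capstone-B", "teamwork", 3),
        ("capstone-B", "impact_and_contemporary_issues", 2),
        ("capstone-C", "ethical_analysis", 4),
        ("capstone-C", "written_documentation", 3),
    ]

    by_outcome = {}
    for _section, dimension, score in scores:
        for outcome in DIMENSION_TO_OUTCOMES[dimension]:
            by_outcome.setdefault(outcome, []).append(score)

    for outcome, vals in sorted(by_outcome.items()):
        print(f"outcome ({outcome}): mean {mean(vals):.2f} (n={len(vals)})")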

***The rest of this page is under major revision***

2.3 Group 3 outcomes: Societal issues: The outcomes in the third group are:

  (h) an ability to analyze the local and global impact of computing on individuals, organizations, and society;
  (j) a knowledge of contemporary issues.
In addition, as noted earlier, outcome (f), an understanding of professional and ethical responsibility, also falls partly under this group.

Assessment and evaluation of outcomes: A number of courses in the curriculum, especially several in the general education portion, contribute to ensuring that students achieve these outcomes. In order to further improve the achievement of these outcomes, and in order to assess the degree of this achievement by the time of the students' graduation from the program, CSE 601, the required course on social and ethical issues in computing, requires each student to explore a new or recent product, practice, or event; consider the impact it may have in a "global, economic, environmental, and societal context" (outcome (h)); consider any relevant contemporary issues (outcome (j)) as well as ethical and professional issues (outcome (f)) related to the product, practice, or event; and present the findings in a 3-4 page paper. CSE 601 includes this activity in order to further develop the degree of student achievement of these outcomes. The rubric below is used to evaluate the student's paper with respect to these outcomes; it also evaluates the paper with respect to the effectiveness of the student's written communication skills (outcome (g)).

Cut from above: CSE 601 is another course that helps develop students' communication skills. The second rubric above is used in that course to evaluate students' oral presentations. Since the writing activity in that course is of a different nature from the one in the capstone courses, with the focus being on societal, ethical and professional issues rather than lifelong learning, a different rubric is used to evaluate that activity.

The degree of achievement of the outcomes in the third group, which deal with "broad education" and "contemporary issues", also cannot be easily tested in the exit test. Hence, these outcomes are evaluated in CSE 601, the one-credit required course on social and ethical issues in computing that students typically take immediately prior to the capstone course. Students in this course are required to write a paper on a suitable topic, such as privacy, copyright, or First Amendment rights, that is directly related to these outcomes. A suitable rubric has been designed and is used in CSE 601 to evaluate these papers in an appropriate manner. Given the nature of this student activity, it also contributes to further developing students' written communication skills (outcome (g)), and the rubric evaluates this aspect as well. Details of our approach to evaluating this group of outcomes, including this rubric, are available.

3. Process

The Undergraduate Studies Committee (UGSC) is responsible for coordinating these assessments and for discussing the results on a regular basis (typically once a year). Ideas for program improvement, based on the results of these direct assessments of each of the Criterion 3 outcomes and on the discussions in the UGSC, are suggested to appropriate faculty for possible action. The results of the assessments and the improvements based on those results will be documented.

4. Results of assessments and program improvements based on the results


Last modified: Thu Jan 6 17:24:05 EST 2011