This page is ***NO LONGER IN USE***.
Please go to this page instead.
BS CSE Program:
Objectives, Outcomes, Assessments, Evaluation, Program Improvements
A. Introduction
This page is intended both to document our efforts
related to the assessment, evaluation, and continuous improvement of the
BS-CSE program, and to help direct those efforts.
It will help various constituents including
current students, alumni, employers of
graduates of the program, and others, understand the growth and evolution
of the program and the rationale behind the evolution. It is also
expected to provide the documentation needed to show that the
program meets the requirements of the
ABET criteria, especially the
requirements concerning
program educational objectives, student outcomes, assessment, evaluation, and
continuous improvement.
Please send comments, questions, or suggestions for improvements to this or
any related pages to neelam AT cse.ohio-state.edu
Background information and information about the Au '11 ABET evaluation
of the BS-CSE program
are available here.
(Previous (now outdated) version of this page; and an
even older version.)
B. ABET Terminology and Criteria Requirements
In its criteria, ABET uses specific terms defined in particular ways. It
is important to keep these definitions in mind when understanding the
criteria requirements. In particular, the following terms are important:
Program Educational Objectives: PEOs are
broad statements that describe what graduates are expected to attain
within a few years of graduation. Program educational objectives are
based on the needs of the program's constituencies. An example PEO:
graduates will be employed successfully as computing professionals.
Student Outcomes: SOs describe what students are
expected to know and be able to do by the time of graduation. These
relate to the skills, knowledge, and behaviors that students acquire
as they progress through the program. An example SO: students will acquire
an ability to communicate effectively. The intent is that achievement of the
SOs will prepare the graduates to attain the PEOs.
Assessment: one or more processes that identify, collect, and
prepare data to evaluate the attainment of SOs and PEOs. Effective
assessment uses both direct and indirect, and quantitative and
qualitative measures as appropriate to the objective or outcome being
measured.
Direct assessment is assessment of actual student
work by someone qualified to assess it; indirect assessment
refers to things like student and alumni surveys. In general, direct
assessments are preferred.
Note: the purpose of assessment is
to assess the program, not individual students.
Evaluation: one or more processes for interpreting
assessment data to help identify weaknesses and/or potential
improvements in the program in a process of continuous improvement.
Summary of ABET criteria requirements:
- PEOs must be determined based on the
needs of the program's constituencies. There must be a documented and
effective process, involving the constituencies, for the periodic
review and revision of the PEOs. PEOs must be published.
- SOs must include the (a-k) outcomes listed in EAC Criterion 3;
the (a-i) in CAC Criterion 3; and the (j-k) in CAC's Program Criteria for
CS programs.
- SOs must prepare graduates to attain PEOs;
there must be a documented and
effective process for the periodic review and revision of the SOs.
SOs must be documented.
- The program must use appropriate, documented processes for
evaluating the extent to which both PEOs and SOs are being attained.
The results of these evaluations must be documented and must be
utilized as input for the continuous improvement of the program. Other
available information may also be used to assist in this.
C. Program Educational Objectives
C.1 Current PEOs
The PEOs of the BS-CSE program are:
- Graduates of the program will be employed in the computing
profession, and will be engaged in learning, understanding, and
applying new ideas and technologies as the field evolves.
- Graduates with an interest in, and aptitude for, advanced studies
in computing will have completed, or be actively pursuing, graduate
studies in computing.
- Graduates will be informed and involved members of their
communities, and responsible engineering and computing professionals.
These objectives are published on the OSU
Undergraduate Majors site and
on the program's site.
C.2 Process for assessment and review of PEOs
The main constituents for the program are current students, alumni, and the
computing industry, represented by the IAB. Input from current
students is obtained on all aspects of the program, including the
PEOs, at an open Undergraduate Forum that is held each year. The forum
is held in the Winter quarter and is attended by interested students,
key faculty members, and advisors from the Advising Office. It should be stressed, however, that discussion of
PEOs is, typically, only a very small component of any of the forums.
The IAB
convenes for a day-long meeting every year on campus, typically in
late May. The board gets a detailed update on various recent
developments in the department related to research, graduate programs,
and undergraduate programs. Once every three years or so, input is
sought from the board on the PEOs.
Alumni are especially important in the assessment of PEOs since they
have intimate knowledge of the program and, at the same time, also
have experience in industry. Input from alumni is obtained by means
of an alumni survey. The survey is sent to alumni who
graduated either two or three years prior to the survey date.
Thus the approach lets
us gather input from alumni who graduated relatively recently and
hence have more or less current knowledge of the program but also have
some experience in the job market and hence can comment on how well
the program prepared them for the profession. The key portion of the
survey that relates to PEOs asks the respondent to rate, on a scale of
very unimportant through extremely important, the
importance of each of the PEOs. Next, the respondent is asked to rate,
on a scale of strongly disagree through strongly
agree, the extent to which they agree with the statement, "the
BS-CSE program adequately prepared me to achieve the PEO".
C.3 Assessment results
Annual forum:
Announcements about the annual Forum are made widely, especially on
the electronic (student) newsgroups to ensure wide participation.
Following the forum, a summary of the discussion is posted on the
newsgroups by the chair of the Undergraduate Studies Committee (UGSC).
The report is then discussed in regularly scheduled meetings of the
UGSC to help identify possible improvements in the program.
Reports from forums of the last several years are available here:
Undergraduate Forums Reports
Alumni Survey: The alumni survey
is administered every other year, the most recent ones being in
2006, 2008, and 2009-'10; the next one is planned to be in early
2012. Prior to 2006, the survey was administered as a paper survey. In
2006, the survey was moved on-line. As noted above, the survey is sent
to alumni who graduated either two or three years prior to the survey
date. Since the survey is conducted every other year, this ensures that
all graduates will receive the survey (exactly) once.
The survey results are discussed in regularly scheduled meetings of the
UGSC to help identify possible improvements in the program.
Full details of
the survey and results from the three recent surveys are available here:
Alumni Survey Results.
D. Student Outcomes
D.1 Current SOs
Students in the BS-CSE program will attain:
-
an ability to apply knowledge of computing, mathematics including
discrete mathematics as well as probability and statistics, science,
and engineering;
-
an ability to design and conduct experiments, as well as to analyze
and interpret data;
-
an ability to design, implement, and evaluate a software or a
software/hardware system, component, or process to meet desired needs
within realistic constraints such as memory, runtime efficiency, as
well as appropriate constraints related to economic, environmental,
social, political, ethical, health and safety, manufacturability, and
sustainability considerations;
-
an ability to function on multi-disciplinary teams;
-
an ability to identify, formulate, and solve engineering problems;
-
an understanding of professional, ethical, legal, security and
social issues and responsibilities;
-
an ability to communicate effectively with a range of audiences;
-
an ability to analyze the local and global impact of computing on
individuals, organizations, and society;
-
a recognition of the need for, and an ability to engage in
life-long learning and continuing professional development;
-
a knowledge of contemporary issues;
-
an ability to use the techniques, skills, and modern engineering tools
necessary for practice as a CSE professional;
-
an ability to analyze a problem, and identify and define the
computing requirements appropriate to its solution;
-
an ability to apply mathematical foundations, algorithmic
principles, and computer science theory in the modeling and design of
computer-based systems in a way that demonstrates comprehension of the
tradeoffs involved in design choices;
-
an ability to apply design and development principles in the
construction of software systems of varying complexity.
These SOs subsume outcomes (a) through (k) of EAC Criterion 3;
(a) through (i) of CAC Criterion 3; and (j), (k) of the CAC Program Criteria for CS programs.
These SOs are published on the
program's site.
Relation to PEOs:
Outcomes (a) through (c), (e), and (k) through (n) ensure that
graduates will be well prepared to succeed in challenging positions in
the computing profession thus contributing to achieving PEO (I).
Outcomes (d) and (g) will prepare graduates to work effectively as
part of teams of CSE professionals, further contributing to their
success as CSE professionals. Outcome (i), along with the solid
technical background ensured by outcomes (a), (b), (e), and (l)
through (n) will prepare graduates to achieve PEO (II), pursuit of
advanced/graduate studies in computing. Outcomes (f), (g), (h), and
(j) will prepare graduates to be informed and involved members of their
communities and to be responsible engineering and computing
professionals, thereby helping them achieve PEO (III).
D.2 Assessment of SOs
We have classified the SOs into two groups. Outcomes (a), (b), (c),
(e), (f), (k), (l), (m), and (n) form the technical group of
SOs; (d), (f), (g), (h), (i), and (j) form the professional
group of SOs; note that outcome (f) is included in both
groups. Previously, we had split the professional group into two
subgroups, "societal issues" and "professional skills"; but the
outcomes are so closely related that we have merged these two into a
single group, the professional outcomes group. (But some of the
pages in this site may still refer to these as two distinct groups; if
you notice such references, please email neelam AT cse.ohio-state.edu).
POCAT, an exit-test, is used to assess the technical group of
SOs. A set of rubrics, used to evaluate certain key
activities in the capstone design courses and in CSE 601, the required
course on professional and ethical issues in computing, is used to
assess the professional group of SOs. An exit-survey is used to assess
both groups of outcomes. POCAT and the rubrics are direct
assessments; the exit survey is an indirect assessment. We
consider each in turn.
D.2.1 Assessment of the technical group of SOs using POCAT
POCAT (Program OutComes Achievement Test) is an exit-test
that all BS-CSE majors take prior to graduation. When a
BS-CSE major applies for graduation, generally three quarters before
the expected date of graduation, he or she is asked to sign up to take
POCAT. The test is offered once each quarter, typically in the third
or fourth week of the quarter. Although all CSE students are required
to take the test, the performance on the test does not affect
the grades of individual students in any courses, nor are records
retained on how individual students performed on the test. When a
group of students takes the POCAT, each student receives a unique code
that appears on that student's test but only the individual student
knows his or her code. Once the tests have been graded, summary
results, organized by this code, are posted on electronic bulletin
boards so an interested student can see how well he or she did and how
his or her performance compared with that of others who took the
test. This was a deliberate decision since we did not want the
students to spend a lot of time preparing for the test. The goal of
the test is to help assess the program by assessing the
extent to which the students have acquired and internalized the
knowledge and skills associated with the various SOs, not to assess
individual students. Initially, there was a concern that if the
individual students' performance on the test did not affect them in
any tangible way, they would not take the test seriously. Our
experience with the test has eliminated that concern; most students
enjoy taking the test and take it seriously. At the same time,
security of the test is not a concern since students do not
have to worry about how their performance will affect their records;
hence, for example, it is perfectly appropriate to reuse questions from
one POCAT to the next and we do so often.
The questions on POCAT are based on topics from a number of required
and the most popular elective high-level courses related to a variety
of key topics such as software engineering, formal languages and
automata theory, databases, programming languages, computer
architecture, algorithm analysis, AI, and computer graphics. Each
question is multiple-choice, with typically two or three questions
in each topic area. The ideal POCAT question not only has a specific
correct answer but has distractors that are so chosen that
they correspond to common misconceptions that students tend to have
about the particular concept. It is for this reason that the summary
results of a POCAT include information not only about the percentage
of students who answered a question correctly but also the percentages
of students who chose each of the distractors, in other words how many
students harbored the particular misconceptions represented by the
various distractors about the underlying concept(s). The questions on
the test are chosen in such a way that there are one or more questions
related to each outcome in the technical group. Indeed, because of the nature of the
questions and the nature of the outcomes, many of the questions tend
to be related to more than one outcome in the group.
There is one unusual feature of the POCAT questions that is worth
noting. Each question on the test has, as one of the choices
(typically the last one), an answer along the lines of "I don't
know". The instructions for the test suggest that the student should
pick that answer if he or she has no idea what the correct answer
is. Since their performance on the test will have no impact on their
record, students who do not know the answer to the question and
know that they do not know, pick this answer. This means we do not
have to worry about students making wild guesses and confounding
our attempt to pin down misconceptions that they may have.
The grading of POCAT and the production of summary results are
mechanical. The faculty members responsible for each question also
provide an estimate of the percentage of students who ought to be able
to answer the question correctly as well as the particular outcomes
that the question is related to. All of this information is included
in the summary results. This allows the UGSC to have a well-informed
discussion about the extent to which the particular group of students
had achieved these outcomes and identify potential problem spots in
particular courses, indeed in particular topics, and bring them to the
attention of the appropriate group of faculty. The result page for each POCAT
includes three key tables. The first one lists, for each question, the
particular answer (including, possibly, the "I don't know" one) each
student picked for that question. The second table lists, for each question
and each possible answer for the question, the number of students who
picked that answer; there is also a summary line that specifies what
percentage of students answered that question correctly. The third table
lists, for each of the SOs, the average level of achievement of that SO.
This is computed on the basis of the SOs that each question is related to
and the number of students who answered that question correctly. Of these,
the second table is perhaps the most valuable since it allows UGSC and
relevant faculty to identify which particular misconceptions students
most commonly hold concerning a given topic and, hence, arrive at possible
improvements to address the problem.
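As a rough illustration of the third table's computation (this is a hedged sketch, not the actual grading script; the question-to-outcome mapping and response counts below are invented for illustration), per-SO achievement can be derived by averaging, over the questions related to each SO, the fraction of students answering correctly:

```python
# Sketch: derive the per-SO "average level of achievement" table from
# POCAT-style results. All data here is hypothetical.

def so_achievement(questions, num_students):
    """questions: list of dicts with 'outcomes' (the SO letters a question
    relates to) and 'correct' (number of students answering correctly).
    Returns {SO: average fraction correct over that SO's questions}."""
    fractions = {}
    for q in questions:
        frac = q["correct"] / num_students
        for so in q["outcomes"]:
            fractions.setdefault(so, []).append(frac)
    return {so: sum(fs) / len(fs) for so, fs in fractions.items()}

# Illustrative data: 20 students, 3 questions.
sample = [
    {"outcomes": ["a", "m"], "correct": 14},  # e.g., an algorithms question
    {"outcomes": ["a"],      "correct": 10},  # e.g., a discrete-math question
    {"outcomes": ["k"],      "correct": 18},  # e.g., a tools question
]
result = so_achievement(sample, 20)
# result["a"] == (14/20 + 10/20) / 2 == 0.6
```

Because a question may relate to several SOs, each question's fraction contributes to every SO it maps to, which matches the text's note that many questions relate to more than one outcome.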
Results from all the POCATs are available here:
POCAT Results (outcomes (a), (b), (c), (e), (f), (k), (l), (m), (n))
D.2.2 Assessment of the professional group of SOs using rubrics
By their nature, both the achievement of the professional SOs
as well as their assessment require different approaches than
for the technical SOs. In general, a variety of
courses from across the curriculum, including several non-CSE courses,
contribute to the achievement of a number of the professional SOs. At
the same time, both in order to ensure that students engage in
activities that help achieve these outcomes in a CSE context,
and in order to help with assessment of the extent to which they achieve
these outcomes, we have adopted the following approach.
CSE 601, the 1-credit required course on social, ethical and
professional issues in computing, and each of the capstone design
courses include a number of activities that are tailored to the
professional SOs while, at the same time, being well-integrated with
the courses. CSE 601 requires each student to explore a new or recent
product, practice, or event (outcomes (i) and (k)); consider
the impact it may have in a "global, economic,
environmental, and societal context" (outcome (h)); consider as well
any relevant contemporary issues (outcome (j)) as well as ethical and
professional issues (outcome (f)) related to the product, practice, or
event; and present the findings in a 3-4 page paper. CSE 601 includes
this activity in order to further develop the degree of student
achievement of these outcomes. Naturally, this activity also
contributes to the development of written communication skills
(outcome (g)). In addition, the course requires students to make oral
presentations (outcome (g)) on topics related to social, ethical, and
professional issues in computing. Suitable rubrics, each with
appropriate dimensions corresponding to the component skills of these
outcomes, have been developed to assess the extent to which students
are achieving these outcomes as exhibited by their performance in
these activities.
Turning next to the capstone design courses, the central activity in
each of these courses is, of course, a quarter-long design and, in
most cases, implementation project. The courses require student teams
to make a number of oral presentations and produce suitable written
design documentation (including such things as storyboards) accessible
to clients, project managers, peers, etc., thereby contributing to
outcome (g). The courses also require students to engage in an
activity similar to that in CSE 601: researching a product or practice,
typically one relevant to the team's project, and writing a paper or
making presentations about it, thus contributing to (i). One of the
capstone design courses, CSE 786, has developed an especially
innovative way (dubbed "technology teams") for this component of the
course that not only helps students develop life-long learning skills
but also engages them in additional team activity.
Often the team
projects and/or the researched tools raise questions related to
ethical, legal, etc., issues, thus contributing to (f); as well as
issues related to impact of aspects of computing on individuals and
society and related contemporary issues, thereby contributing also to
(h) and (j). As in the case of CSE 601, suitable rubrics, each with
appropriate dimensions corresponding to the component skills of these
outcomes, have been developed to assess the extent to which students
are achieving these outcomes as exhibited by their performance in
these activities. More details of the capstone design courses are
available here.
Rubrics for assessment of professional outcomes:
- Rubric for assessing life-long learning paper in CSE 601 (outcomes (f), (g), (h), (i), and (j))
- Rubric for assessing individual presentations (outcomes (g))
- Rubric for assessing team work (outcome (d))
- Rubric for assessing team presentations (outcomes (d), (g))
- Rubric for assessing technology research as part of a technology team (outcomes (d), (f), (g), (h), (i), and (j)).
Sample assessment results from some recent courses (in each case, S1, S2, ... etc. refer to individual students whose work was assessed):
D.2.3 Assessment of all SOs using the Exit-Survey
Prior to graduation, BS-CSE majors are required to complete an
anonymous exit survey; in fact, students complete the
exit-survey and take the POCAT in the same session. There are three
parts to the survey. The first one asks the respondent, for each
student outcome, to rank its importance on a scale of
very-unimportant/somewhat-unimportant/
somewhat-important/very-important;
and asks how strongly the respondent agreed with the statement "this
student outcome has been achieved for me personally" on a scale of
strongly-disagree/moderately-disagree/
slightly-disagree/slightly-agree/moderately-agree/strongly-agree.
In averaging the responses, we attached weights of 0%, 33%, 67%, and
100% to the four possible importance ratings; and weights of 0%, 20%,
40%, 60%, 80%, and 100% to the six possible achievement ratings.
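The weighted averaging can be sketched as follows (the weight scales are those stated above; the response tallies are hypothetical, invented purely for illustration):

```python
# Sketch of the exit-survey weighted averaging described above.
# Weights match the text: 0/33/67/100% for the four importance ratings,
# and 0..100% in steps of 20 for the six achievement ratings.

IMPORTANCE_WEIGHTS = {
    "very unimportant": 0.00, "somewhat unimportant": 0.33,
    "somewhat important": 0.67, "very important": 1.00,
}
ACHIEVEMENT_WEIGHTS = {
    "strongly disagree": 0.0, "moderately disagree": 0.2,
    "slightly disagree": 0.4, "slightly agree": 0.6,
    "moderately agree": 0.8, "strongly agree": 1.0,
}

def weighted_average(tallies, weights):
    """tallies: {rating label: number of respondents}.
    Returns the weight-averaged score as a fraction in [0, 1]."""
    total = sum(tallies.values())
    return sum(weights[label] * n for label, n in tallies.items()) / total

# Hypothetical tallies for one outcome's importance question:
imp = {"very unimportant": 1, "somewhat unimportant": 2,
       "somewhat important": 7, "very important": 10}
score = weighted_average(imp, IMPORTANCE_WEIGHTS)
# (0.00*1 + 0.33*2 + 0.67*7 + 1.00*10) / 20 == 0.7675
```

The same function applies unchanged to the achievement question, using ACHIEVEMENT_WEIGHTS over the six-point agreement scale.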
The
second part of the exit survey asks students to evaluate the quality of
faculty advising as well as the quality of staff advising. Students
are asked to consider four specific items:
- Faculty Advising with respect to course choices;
- Faculty Advising with respect to graduate school, career
options, etc;
- Staff Advising with respect to curricular issues;
- Staff Advising with respect to career options, graduate school,
university policies and procedures, referrals, etc.
Students are asked to rate the importance of each item and the
quality of the advising they received on that item.
The third part of the survey asks students to briefly
respond to two questions. The first asks, "What single aspect of the
CSE program did you find most helpful? Explain briefly." The second
asks, "What single change in the CSE program would you most like to
see? Explain briefly." These two parts of the survey, although not
directly related to specific student outcomes, are naturally very
important to students and provide us a good lens through which to view
the program and help identify possible improvements.
Results from the last four years' surveys are available here:
Exit Survey Results
E. Continuous Improvement
The various assessment processes listed above, the evaluations of
their results in UGSC discussions, other assessments such as from
student performance in individual courses and input from other ad-hoc
sources and their evaluations all have resulted in a number of
improvements in the program. Many of these improvements have been in
individual courses; others have been at the program level; and yet
others have been in the assessment/evaluation processes
themselves. Below we list a few of the recent improvements in each of
these categories, along with brief descriptions. In the case of the
improvements in the first two categories, we also
identify the related student outcome(s) or PEO. We also identify the
specific assessments (Undergrad Forum, Alumni Survey, Exit Survey,
POCAT, assessment using the rubrics listed above, IAB discussions,
or other
(such as student performance in specific activities in individual courses))
whose results and their evaluation led to the particular
improvement/change. In some cases, some of the details of the
improvement are still being discussed or yet to be implemented; such cases are marked with a "*".
In most cases, the evaluation of the assessment results and the possible
program improvements are discussed initially in UGSC meetings, followed by
actions by relevant faculty. In other cases, especially those based on
the results of assessment of student performance in specific activities in
individual courses, the evaluation and the program improvements are
initiated by individual faculty or faculty groups. In a few cases,
especially those involving the Alumni Survey, the College of Engineering's
Outcomes Committee is often part of the discussion since some of the items
on the survey apply to all programs in the college.
E.1 Improvements in Courses
- CSE 221, 222, 321: [Related SOs/PEOs: (b), (k); assessment instruments: UG Forum]
At one of the annual forums, some students
noted that they spent a lot of time deciphering errors they got from
the C++ compiler and felt that, given the simple nature of the
underlying problem that, once they figured it out, turned out to be
the source of the errors, it should not have been so difficult to
understand. Most such
problems are, in fact, addressed in the on-line FAQ for the sequence.
However,
many students do not read the information in these pages. In order to
address this, the faculty responsible for these courses created
surveys to be completed by students in CSE 222 and 321 that asked the
students to respond to the following questions:
- Briefly describe one specific technical problem you encountered
and solved in CSE 221 (or, in the case of the survey for students in 321,
in CSE 222). For example, it could be a
problem you had while working on a lab assignment, or in the use of
the CSE computing environment, or any other issue that you struggled
with--yet managed to solve--and that you wish you could have known
about before you had to deal with it.
- Briefly explain the solution to the problem described above and how
you managed to find it.
- Is this problem and its solution listed on the Resolve/C++ FAQ pages
at:
http://www.cse.ohio-state.edu/sce/rcpp/FAQ/index.html?
Based on the student responses, a number of actions have
been taken:
- Improved the FAQ to provide detailed information about reading
compiler errors and the manner in which "build" and "make" work
when a project is recompiled.
- Added an "open lab" in 221 where a key goal of the lab is to
help students learn to use the information in the FAQ to understand
both compile-time and run-time errors.
- Developed a "closed lab" that requires students to
exercise their Unix/Emacs skills, including absolute and relative
paths for copying files and doing submissions; Emacs window buffers;
and Unix command line short cuts.
- Worked with the computing system staff to standardize remote
access utilities and offer a regular workshop/clinic to help students
set up their computers to enable remote access to departmental compute
servers.
- CSE 222: [Related SOs/PEOs: (c), (e), (f), (n); I; assessment instruments: other]
The first programming lab in CSE 222 addresses a number
of course learning outcomes, all of which are of a technical nature
(e.g., "Be competent with using the computing environment to complete
lab assignments" and "Be familiar with using [various software
components] to write application programs and/or component
implementations"). Student performance on this lab tends
to be all over the map. Analysis shows that, in a typical section of
CSE 222, fewer than a third of the students submit solutions that meet
the stated requirements, about half submit solutions that compile and
execute but fail to meet all the requirements, and the rest either
submit code that does not compile or do not submit a solution at all.
In order to confront the problems faced by the middle portion -- who
previously seemed to throw up their hands and simply move on -- we
have instituted a "revise and resubmit" policy for this lab.
This is similar to what might happen for an English
composition assignment, except that students are not told about the
opportunity to revise and resubmit until after their initial
submissions have been graded and returned. The graders for this first
lab now use a rather harsh grading rubric, so even a
seemingly minor deviation from a stated requirement results in a score
of 80% or less, leaving many students who thought they had done
"pretty well" believing they should have received more partial
credit. We therefore give them an opportunity (on this lab only) to
consider the grader's feedback and to resubmit a revised solution a
short time later. We use this occasion to discuss in class why "good
enough" is not good enough for software, how failure to understand and
meet all customer requirements may in some cases result in loss of
life, limb, money, or mission-critical opportunities for their
clients. This change in the course has been positive in the sense
that the number of submissions in the middle group on subsequent CSE
222 lab assignments has been noticeably smaller.
This experience also has resulted in changes to how we have written
the course learning outcomes for our first semester course in this
area, Software I (CSE 2221). Rather than being primarily technical in
scope, the first few outcomes of Software I deal with general
principles of software engineering that students can learn to
appreciate and apply early in the program (e.g., "Be familiar with the
reasons it is important that software be 'correct', i.e., why 'good
enough' is not good enough when it comes to software quality").
- CSE 321: [Related SOs/PEOs: (a), (e), (m); I, II; assessment instruments: POCAT]
One of the main goals of the 221-222-321 sequence
is to ensure that students are able to use simple formal logic
assertions involving mathematical set models to understand and reason
about an operation's behavior. Students typically take CSE
321 and Math 366, the first discrete math
course, during the same quarter. A major outcome of Math 366 is
developing facility with manipulating logical assertions.
Our expectation is that
the activities in Math 366 would help strengthen the lessons of the
221-sequence. We discovered, however, via a POCAT question that, in
fact, this was not happening. Students do not seem to transfer ideas
from one context to another even when they are closely related.
Here are some details. In the 221-sequence, in order to make them easily
machine readable and writable, the notation that is used to write logic
assertions spells out operators as, for example,
"there exists" and "union"; whereas in Math 366,
traditional math symbols are used.
The faculty involved with the 221-sequence had designed a POCAT
question that tested students' ability to manipulate simple logic
assertions involving properties of small sets of numbers. When the
question was typeset for the POCAT, the person doing so, not being
regularly involved
with the 221-sequence, replaced the spelled out operators in the
assertions that appeared in the question with their
traditional math symbols. When UGSC discussed the test
results after the test had been administered, there was concern that
the performance of students on the question was much poorer than
expected; only about 35% of the students had answered the question
correctly as against the expected 70%. Various possible explanations
were discussed until the faculty member who had suggested the question
for inclusion in POCAT noticed the difference in notation. He then
hypothesized that the poor performance was at least partly due to this
difference: although students had seen and used the traditional math
notation in Math 366, never having used that notation in the context of
specifying and reasoning about program behavior likely had a major
impact on their performance.
And indeed, when the question was rewritten to use the 221-notation in
the next POCAT,
student performance was close to 70%.
To test this hypothesis further, students taking the following POCAT
were split into two groups (over 60 students took the test that
quarter, as against the usual 25, which made the split feasible).
One group was
given the question using the 221-notation, the other
using the traditional math notation. And, indeed, the results
were as predicted: the performance of the second group
was much poorer than that of the first group.
This suggested a possible improvement in the sequence. The point is
that students, by the time they graduate (which is close to when they take
the
POCAT) and become computing professionals, need to be able to go back
and forth between the traditional math notation and other notations
such as that used in the 221-sequence. Otherwise, their ability to
identify, formulate, and solve engineering problems, and, especially,
their ability to understand others' formulations of engineering
problems, will be limited. Hence we have introduced a new activity in
321. In this activity, the instructor engages students in
translating specifications of reasonably simple behaviors written in
one notation to the other, in both directions; and assigns exercises
in which students perform the translation as well. When students in a
section of the course that included this activity were tested on this
type of problem, their performance did not depend on the notation used;
by contrast, students in a section that did not include the activity
showed the same notation-dependent difference in performance as the
students taking the POCAT. Thus this activity is now part of 321.
- Pointers in Systems I course*: [Related SOs/PEOs: (c), (e), (k), (n); I; assessment instrument: Exit Survey]
One observation that faculty teaching courses such as CSE 660 and
CSE 677 have made over the years is that students' facility with pointers
was somewhat lacking. A number of graduating students have also noted,
in comments made as part of their exit surveys, that while the program
does a good job of helping develop students' high-level/abstract skills,
many students are not comfortable working at the low/systems level,
specifically when working with pointers.
Hence in designing CSE 2421, Systems I, the semester replacement for
CSE 360, faculty decided to include, as a key component, material that
would directly address this weakness. Indeed, the title of the new course
("Low-level programming and computer organization") reflects this. The
first part of this course will help develop students' abilities with C
programming, including pointer manipulation. Since the other part of the
course is computer organization, there will be a natural relation
between the two parts, and students should be able to develop not only
their pointer-based programming skills but also their understanding of
the underlying machine-level considerations. As we transition to semesters,
we will try to track student performance to see if there is a noticeable
difference in such courses as CSE 3461 (the replacement for CSE 677)
depending on whether the student took CSE 360 or the new Systems I course.
- CSE 321: [Related SOs/PEOs: (a, c, e, k, l, n); I; assessment instruments: other]
A couple of years ago, the 221-222-321 sequence was reorganized to
free up the last 2-3 weeks of CSE 321 so that students could engage
in the design and implementation, from scratch, of a new abstract
component that they use in an interesting case study (generating
tag-cloud web pages). The design of the new abstract component is done
through several homeworks and in-class activities, to provide
incremental and timely feedback and to allow students to complete
the design in a reasonable amount of time.
This change was prompted by feedback
from students and instructors in CSE 560. The feedback showed
that the gap between 321 and 560 was substantial
enough to create real issues for students in 560. One key concern was
the lack of experience in designing new components from scratch. The new
activity in 321 seems to have helped address this problem and also
enables students, near the end of the CSE 221-222-321 sequence, to look
back over the sequence and gain a summary understanding of the
component-based approach to software design that the sequence is focused on.
- CSE 459.24, 459.xx*:
[Related SOs/PEOs: (i, k); I; assessment instruments: Exit Survey]
One of the points that comes up frequently in the
Exit Survey (as well as in the Alumni Survey) is that while the
program provides students with excellent grounding in foundational
ideas, it is somewhat lacking when it comes to helping students
acquire experience with the most recent software tools.
The CSE 459 courses are meant to address this concern, enabling
students to explore a programming language that they may not otherwise
encounter in the curriculum. One language that has recently acquired
importance in industry is C#. Hence CSE 459.24, which allows students
to develop some skills in the language, was created about three years
ago. It has been offered regularly since then. (This
is related not just to outcome (k) but also (i)
since it encourages students to explore, on their own, the
various toolkits and libraries etc. that have developed around the
language.)
Another 459, this one on Python, is currently
being planned. While this language has been around for some time, its
importance in practical applications such as building GUIs has gone up
significantly recently, in part because of the numerous, powerful
Python libraries that have become available. In recent UG
forums and Exit Surveys, students have expressed
interest in learning Python. Recently, a senior graduate student (who
has used Python extensively in building systems for his research)
teamed up with a faculty member to propose developing a Python course
in the 459 series. After discussion in the Curriculum Committee, the
proposal has been approved and the course will be offered in Winter
'12; assuming that the pilot offering goes as well as it is expected
to, a semester version of the course will be developed and offered
starting in the 2012-'13 year.
- CSE 541:
[Related SOs/PEOs: (a, e); I; assessment instruments: other]
Many students struggle through this course on numerical analysis. The
problem is that they have not adequately understood the necessary ideas
from the prerequisite calculus courses, nor have they developed adequate
skills for working with typical real-valued functions; this is reflected
in their poor performance in 541. To address this, faculty made the
following changes: i) additional lectures were introduced at the start
of the course, reviewing relevant parts of calculus and introducing
relevant material from basic linear algebra; ii) room for these topics
was made by omitting lower-priority topics (such as Lagrange
interpolation). Student performance in the course has improved following
these changes.
- CSE 551:
[Related SOs/PEOs: (k, n); I; assessment instruments: UG Forum]
In a recent offering of 551, one of the main assignments required
students to conduct a broad survey of the field, write a 3-5 page
paper, and make a brief in-class presentation reporting their
findings. Alternatively, students could build a simple application
related to information security for a mobile handset. Microsoft
Research had loaned us several Windows Phone 7 phones, and students who
chose this latter option implemented their application on these phones.
At a recent UG Forum, a couple of students
suggested that focusing on one platform in this manner was not
appropriate. In response to this, the instructor allowed students in
the class to choose to develop the application on their own handset on
any platform they preferred. In addition, the instructor is exploring
the possibility of acquiring a few additional handsets on alternative
platforms so that students in future offerings of the course will be
able to develop on non-Windows platforms even if they do not own an
appropriate handset.
- CSE 601*:
[Related SOs/PEOs: (f); III; assessment instruments: POCAT]
One of the questions on recent POCATs was the following:
Conflict of interest is a common ethical issue in business. Which of
the following has the most potential for being a conflict of interest?
- Recommending your company buy the same kind of computer you have at
home;
- Recommending your company buy computers from your brother-in-law;
- Buying a computer for home use from the same supplier your company
uses;
- Asking questions of the computer support people at work about
programs you're using at home.
Almost 90% of the students picked the correct answer, i.e., (2).
But in the UGSC discussion of the results,
the point was made that the question was too simple
and did not adequately address
outcome (f). For example, one common difficulty
students have is distinguishing between legal
questions versus ethical questions, or obligation to an employer
versus obligation toward society, etc. Following the
discussion, the faculty involved came up with a new
question:
Your brother/sister is very ill and needs medication you cannot
afford, so you steal it. Which of the following is the ethical
question that arises in this scenario?
- What kind of illness does your sibling have and how do we prevent
its spread among populations?
- Why is the medication not affordable and how do we make it more
affordable?
- Is it ever right to steal, even if you have a great need?
- What consequences might you face if you were caught stealing the
medication?
- All of the above are ethical questions.
- All of the above are important but none of them is an ethical
question.
- I cannot decide based on the given information.
Only about 70% of the students picked (3), the right answer.
In response, faculty are currently
considering adding a lecture at the start of the course that
explicitly considers the distinctions of the kind described above.
Faculty are also analyzing student answers more carefully, to see which
of the various distractors were most popular and hence what items to
focus on in this lecture.
- CSE 655:
[Related SOs/PEOs: (c, e, k); I; assessment instruments: POCAT]
A common problem
that programs, especially large ones, exhibit has to do with
uninitialized variables. Different programming languages help
address the problem in different ways.
First, the language could be so designed that when a variable is
defined, it is automatically assigned some appropriate default value.
Second, the syntax of the language could prevent the programmer from
introducing a new variable without specifying an initial value for it.
Third, the compiler could analyze each program to check that each
variable has been initialized before it is used. Fourth, the compiler
could insert, into the compiled code, additional checks that ensure, at
runtime, that each variable that is used has a value that was actually
assigned to it. The fifth approach would be to do nothing and expect
the programmer not to make the mistake of using a variable without
initializing it; in this case, if the program does have an
uninitialized variable, the program will probably crash when the
compiled code is actually executed.
Each of these approaches has advantages and disadvantages. For
example, the first approach may mask a bug in the program because the
programmer may have meant to but forgot to assign a specific initial
value to a variable; since the system provides a default initial
value, the program will run but may compute a wrong result and the
programmer may not realize it. The third approach (having the compiler
detect the problem at compile time) is attractive, but it cannot be
implemented exactly in general because of conditional and loop
structures whose behavior depends on runtime values; in other words,
the compiler cannot tell exactly which parts of the program will be
executed before which other parts. But it can do an approximate
analysis and arrive at a conservative evaluation that flags some uses
of certain variables as questionable because it is not able to
conclusively establish that, in all cases during program execution, the
variable in question will be initialized before being used. Java uses
this approach. C++ uses the fifth approach (leave it to the
programmer); RESOLVE/C++, a local dialect of C++ that is used in the
221-sequence, uses the first approach.
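A minimal sketch, in Python, of the idea behind the conservative compile-time analysis described above (this is an illustration only, not Java's actual definite-assignment algorithm; the program representation is invented for the example):

```python
# Programs are modeled as lists of tuples: ("assign", v), ("use", v),
# or ("if", then_stmts, else_stmts). The analysis tracks the set of
# variables that are *definitely* assigned on every path; after a
# conditional, only the intersection of the two branches' sets is kept,
# so some uses that are actually safe at run time may still be flagged.

def check(stmts, assigned=frozenset()):
    """Return (definitely-assigned set, list of flagged variable uses)."""
    flagged = []
    assigned = set(assigned)
    for s in stmts:
        if s[0] == "assign":
            assigned.add(s[1])
        elif s[0] == "use":
            if s[1] not in assigned:
                flagged.append(s[1])
        elif s[0] == "if":
            a1, f1 = check(s[1], assigned)
            a2, f2 = check(s[2], assigned)
            flagged += f1 + f2
            assigned = a1 & a2  # conservative: only what BOTH branches assign
    return assigned, flagged

# x is assigned in only one branch, so its later use is flagged even
# though, at run time, the path that skips the assignment might never
# be taken.
prog = [
    ("if", [("assign", "x")], [("assign", "y")]),
    ("use", "x"),
]
print(check(prog)[1])  # ['x']
```

The intersection step is what makes the analysis conservative: a use is accepted only if every path to it performs an assignment, which is exactly why correct programs can still be flagged.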
The topic is discussed in some depth in CSE 655 but for some
students the essential nature of the problem and its
possible solutions tend to remain unclear. Here is a related POCAT question:
One common problem in programs is that of uninitialized variables,
i.e., using a variable without having initialized it. This is
commonly a run-time error but Java flags this error at compile time.
How does it do this?
- Java uses a special technology that converts
run-time errors into compile-time errors;
- Java uses a "conservative"
approach, sometimes flagging situations which are not actually
erroneous;
- Java does automatic initialization of all variables so the
problem of uninitialized variables cannot arise in Java programs;
- Java is an interpreted language, so this question is meaningless;
- I have no idea.
When the question was tried a couple of years ago, faculty
expected 70% or so of the students to get the correct
answer. In fact, the number of students who picked the right answer
was substantially less. While some of the students seem to have chosen
an answer (such as (3)) that would indicate not having knowledge of
some Java details, many others chose answers (such as (1)) that
indicated failure to have a sufficiently good grasp of this important
concept. Indeed, someone with a good understanding of the concept
should, even if she had not heard of Java before, be able to choose
(2) as the most likely answer. Based on this, the faculty revised the
discussion in CSE 655 to include a more detailed discussion of the
topic. The performance of students on this (and similar) questions in
recent offerings of the POCAT has been substantially better.
- CSE 670*:
[Related SOs/PEOs: (a, e, m); I; assessment instruments: POCAT]
Relational schemas and, in particular, the notion of a primary key for
a given schema are conceptually important topics discussed in CSE 670,
the required course on databases. In order to see how well students are
able to work with these notions at the time of their graduation,
faculty designed the following POCAT question:
Consider the relational schema R(A,B,C,D,E,F) with the functional
dependencies {BC -> ADEF, B -> DF, D -> EB}. Which of the
following could be the primary key for R:
- {A,B}
- {B}
- {C}
- {C, D}
- {B,C,D}
- None of the above
- I have no idea
(1) is incorrect because although using B we can get D and F and then
get E, there is no way to get to C. (2) is incorrect because we can't
get to either C or A. (3) is incorrect because using C we can't reach
any of the others. (4) is the right answer: using D, we can get E and B,
and then using B and C, we can get to the rest. (5) does let us get to
all the rest but is not minimal (since we can omit B).
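The reasoning above can be checked mechanically. The sketch below (an illustration; the FD representation as pairs of attribute strings is invented for the example) computes the closure of an attribute set under a set of functional dependencies and uses it to test the candidate keys:

```python
# Compute the closure of an attribute set under functional dependencies,
# each FD written as a (lhs, rhs) pair of attribute strings: repeatedly
# add the right-hand side of any FD whose left-hand side is already
# contained in the closure, until nothing changes.

def closure(attrs, fds):
    """Closure of the attribute set `attrs` under the FDs `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

R = set("ABCDEF")
fds = [("BC", "ADEF"), ("B", "DF"), ("D", "EB")]

print(closure("CD", fds) == R)   # True: {C,D} determines all of R...
print(closure("C", fds) == R)    # False: ...and is minimal, since
print(closure("D", fds) == R)    # False: neither {C} nor {D} alone does.
print(closure("BCD", fds) == R)  # True: a superkey, but not minimal.
```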
Surprisingly few students (less than 10%) got the correct answer on the
POCAT. After some discussion in UGSC, it was decided that we should try
a simpler version of the question.
Consider the relational schema R(A,B,C,D) with the functional
dependencies {A -> BCD, BC -> AD, B -> D, D -> B}.
Which of the following could be the primary key for R:
- {A,B}
- {B}
- {C}
- {C, D}
- {B,C,D}
- None of the above
- I have no idea
Again the right answer is (4); but again very few students got it.
Following further discussion in UGSC, it was decided to request one of the
670 instructors to try the question in his final exam. The Sp '11
instructor for the course did so, assigning the first version of the
question to half the students in the class and the second version to
the other half. To his surprise, the performance of the students was
as poor as in POCAT. Overall, less than 10% of the students (about the
same figure as in POCAT) got the correct answer for either form of the
question.
The preliminary conjecture is that students do not understand the
minimality requirement of primary keys, since several students
chose answer (5). But we plan to look into this further and see how to
revise the course to address the problem.
- CSE 680*:
[Related SOs/PEOs: (a, b, e, l, m); II; assessment instruments: POCAT]
One of the topics in CSE 680, the required course on algorithms, is
solving recurrence relations. Such relations can be used to express the
running time of certain algorithms: in effect, the running time of the
algorithm for input of a certain size is related to its running time
for input of a smaller size, which, in turn, is related to its running
time for input of a still smaller size, and so on.
But getting a good feel for the actual running time of such an
algorithm for large inputs requires us to "solve" the
relation to obtain the asymptotic behavior of the algorithm. In
general, this is a difficult task but if
certain conditions are satisfied, the Master
theorem can simplify it considerably. This can be important in
certain situations such as when dealing with algorithms
designed to search through very large volumes of data since the
difference in running time between different algorithms for the task
can be very substantial.
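For reference (the actual POCAT question is not reproduced here), the standard textbook statement of the theorem, together with a typical instance, is:

```latex
% Master theorem, standard form, for recurrences
% T(n) = a T(n/b) + f(n) with a >= 1, b > 1:
T(n) =
\begin{cases}
\Theta\bigl(n^{\log_b a}\bigr)
  & \text{if } f(n) = O\bigl(n^{\log_b a - \epsilon}\bigr)
    \text{ for some } \epsilon > 0,\\[2pt]
\Theta\bigl(n^{\log_b a} \log n\bigr)
  & \text{if } f(n) = \Theta\bigl(n^{\log_b a}\bigr),\\[2pt]
\Theta\bigl(f(n)\bigr)
  & \text{if } f(n) = \Omega\bigl(n^{\log_b a + \epsilon}\bigr)
    \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
% Example: T(n) = 2T(n/2) + \Theta(n) (e.g., merge sort): here
% a = b = 2, so n^{\log_2 2} = n = \Theta(f(n)), and the second
% case gives T(n) = \Theta(n \log n).
```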
Hence the faculty involved with 680 designed a POCAT question
intended to see if students are able to solve (reasonably simple)
recurrence relations. The performance of the students who took the
test was unexpectedly poor. In the evaluation discussion analyzing the
test results, one explanation offered was that students were,
in fact, capable of using the Master theorem to solve the relation
--the relation in the POCAT question being one that satisfied the
conditions that allow the Master theorem to be applied-- but that,
because of the complex nature of those conditions, students could not
be expected to remember them when taking the POCAT. Indeed,
this seemed to be confirmed when the CSE 680 instructor asked a
similar question as part of his final examination for the course. A
large majority of the students in the course answered the final exam
question correctly.
But we decided to test this further. In the next offering
of POCAT, the question was revised to include a statement of the
theorem. With this, we felt students
should be able to check that the needed conditions were satisfied in
the given scenario and solve the specified recurrence relation. But,
in fact, student performance was no better. This is a puzzle
and one that has not yet been resolved. Why did the students in the
course final examination do so well when students taking the POCAT did
so poorly even when they were provided an explanation of the
Master theorem? Further fine-tuning of the question in future offerings
of the POCAT will, we hope, help address the question and tell us whether and
what changes in the course are needed.
It may also be worth noting that, in practice, one
would expect a CSE professional to look up, perhaps on-line, the
details of the theorem rather than necessarily remember them. Thus if
the faculty's original explanation (that students taking the POCAT
simply did not remember the theorem) had turned out to be correct, we
would have concluded that no change in the course was called for. But,
as described above, it did not.
- CSE 682:
[Related SOs/PEOs: (d); I; assessment instruments: teamwork rubric]
Based on the results of the peer evaluations, using the teamwork
rubric, in a recent offering of CSE 682, it is clear that some students
have
problems being good team members. The evaluations provided 3 example sets of
comments by others in the group about individuals who did not function
well on the team. In one
case, comments made by the student about himself show how out of
touch he was with the rest of the group. The instructor is planning to
put these on
slides and make a mini-lecture out of them for use in future offerings of
the course. Previously, he had talked in class about good team
participation, but having such specific, real examples (with the names
of the individuals elided, of course) should make the point much
clearer.
- CSE 786:
[Related SOs/PEOs: (d, i, k); I; assessment instruments: teamwork rubric, life-long learning paper rubric]
Each capstone design course has required
students to explore a new tool, technology, or process and write a
three or four page paper on it. However, the coordinator for CSE 786 (the
capstone course on game design) felt
that this requirement was too distracting for students
since it took too much of their focus away from the capstone design
project. Hence he came up with an alternative approach to
achieving the outcome via an activity related to the design project
and, as an added bonus, also engaging students in another team
activity. The approach is to require groups of several students each
to form "technology teams". These teams are "orthogonal" to the
project teams; i.e., typically each technology team of, say, 5
students, will consist of one student from each of five different
project teams. Each technology team focuses on a topic relevant to the
area of the particular course, for example, "sound production" or
"physics of games" in the case of CSE 786; each student in the team is
responsible for one aspect of the particular topic and is expected
to research that aspect. The team then puts together appropriate
resources and/or documentation that summarizes its findings and makes a
presentation to the entire class, with each student taking the lead
for the particular aspect that he/she was responsible for. The
document becomes a resource that is available to the entire class and
helps address (or provides pointers to resources that may address)
questions related to the particular topic that different project teams
may encounter in their projects.
Thus the approach helps students improve their lifelong
learning skills; and
since the student teams are required to document and present their
findings, it helps sharpen their communication
skills as well. Further, since the research and, especially, creating the
document and making the presentation are both team activities, and
since the team is distinct from the student's project team thus
requiring the student to interact closely with another group of
students, the approach contributes to improving students' team
skills. Moreover, since the topic is directly relevant to the design
projects that the various project teams in the course are engaged in,
the interaction between the technology team and the class during the
presentation is likely to be more engaging and of greater depth.
Other capstone design courses (such as CSE 682) are adopting the
approach in their own offerings.
- CSE 2221, 2231 (the semester replacements for CSE 221-222-321):
[Related SOs/PEOs: (m, k); I; assessment instruments: Alumni and Exit Surveys]
One comment that we have often seen in the
Exit and Alumni Surveys is that while students and graduates appreciated
the software
engineering discipline that the CSE 221-sequence teaches, many are
unsatisfied with the use of RESOLVE/C++ in the sequence. Many
would prefer to learn the concepts we teach in this sequence
with a different programming language as the delivery mechanism, a
language used more commonly in industry positions of the sort they are
most likely to secure for internships and after graduation. Hence, in
the semester version of the sequence, the conceptual focus of the
introductory software courses will remain, but we will use Java
as the programming language and focus more attention on current
industry best practices, much as we do now in CSE 421.
- CSE 3901, 3902 (semester replacements for CSE 560):
[Related SOs/PEOs: (c, e, k, l, n); I; assessment instruments: Alumni and Exit Surveys]
One of the comments we consistently see
in the Exit and Alumni Surveys is that CSE 560, the current junior level project
course, is extremely valuable since it helps students develop teamwork
as well as communication skills while, at the same time, engaging them
in a challenging design and development task. The one
negative comment that we see has been with respect to the domain
of the task, i.e., system software (assembler, linker, loader,
simulator). While understanding of such software is important,
especially for those who want to work, for example, in embedded
systems development and the like, most students feel that it should
not be the topic of the project in 560.
In designing the semester program, we have, therefore, developed two
courses, CSE 3901 and 3902, each of which can be used to meet the
junior-level project course requirement. The project in CSE 3901 is
concerned with development of a web application, with the focus on both
client-side and server-side scripting. The project in CSE 3902 is
concerned with development of a 2-D interactive game, with the focus on
both 2-D graphics and rendering and on event-based programming.
Each course will, as does the current 560, engage students in
intensive team work as well as documentation/communication.
E.2 Program-level Improvements
- NEWPATH:
[Related SOs/PEOs: (d, g, k); I; assessment instruments: Alumni and Exit Surveys]
Many BS-CSE majors have a strong interest in entrepreneurship and
dream of being successful entrepreneurs; indeed, that interest is what
drew many of them to CSE in the first place.
The university does offer, in the Fisher College of
Business, a minor program in the topic and a number of students do
complete that program. However, several comments over the years in
alumni surveys and exit surveys have suggested that a more focused
program, one concerned not with entrepreneurship in general but with
IT entrepreneurship, would be of value. Partly in response to this, and
partly because NSF funding made it possible, the NEWPATH program was
created four years ago with the main goal of educating,
training and nurturing highly motivated students to become IT
entrepreneurs. The most ambitious students in the program are
expected, by the time of graduation, to be running their own IT
startups. The program has a number of key components: internships in
local IT-startups, arranged in collaboration with TechColumbus, a
state supported non-profit that serves as an incubator for high-tech
startups; an on-going weekly seminar session that provides a forum in
which students can learn from each other and from NEWPATH faculty
members, can brainstorm ideas that may serve as the basis for
startups, can hear from other students about their internship and
e-practicum experiences, can engage in case-studies of successful and
unsuccessful IT startups to identify best (and not so good) practices,
and can hear from CEOs and other senior people from local IT startups;
and a two-quarter long entrepreneurship practicum. This component is
designed to provide an in-depth practical experience in IT
entrepreneurship to NEWPATH students where they will have the
opportunity to take an idea from concept to the brink of
commercialization. The program has proven popular with a select group
of motivated BS-CSE majors as well as majors from ECE, Business,
etc. The availability of this activity contributes to several of the
student outcomes including (d), ability to function on
multidisciplinary teams; (g), communicate effectively with a range of
audiences; and (k), ability to use techniques and skills (including,
especially, entrepreneurial skills) to succeed as a CSE professional
and entrepreneur.
- GET:
[Related SOs/PEOs: I; assessment instruments: Alumni and Exit Surveys, IAB meetings]
Another common refrain in the Alumni Survey over the years,
has been the importance of providing opportunities for students to
gain, via suitable internships, relevant work experience while they
are still in the program. Comments at some IAB
meetings have echoed this. Many of our students, out of
necessity, work at least part-time but that work is often unrelated to
CSE and does not address this need. The Global Enterprise Technology
(GET) Immersion Experience is designed to do so. It is an "immersive
internship program" for BS-CSE (and BS-CIS) majors, designed in
cooperation with JP Morgan Chase and Syracuse University. Students
admitted to the program engage in a 2-quarter+summer long on-site
internship at JP Morgan Chase or other companies, while being fully
enrolled in classes that are tightly woven into this internship.
Students who have gone through the program will gain valuable,
relevant work experience, and can also expect to receive permanent
job offers from JP Morgan Chase and associated companies when they
complete their degrees. The program is in its pilot stage and has
already attracted considerable student interest. This improvement
directly contributes to preparing the student to achieve the first
part of program objective (I), i.e., graduates of the program will be
employed in the computing profession, ...
- Ethics Course:
[Related SOs/PEOs: (f); III; assessment instruments: Alumni Surveys]
The Alumni Survey, as well as informal interactions with alumni and
employers, has made clear the need for students (engineering students
in general, not just BS-CSE majors) to
receive training in topics related to engineering ethics in a
full-fledged course. In response to this data, the College of
Engineering's Core Committee worked to achieve two
things: replace one of the general education courses by a course
devoted to this topic; and have various departments around the
university develop suitable courses that engineering students could
take. The committee defined the set of learning outcomes that any
course in this category should achieve:
- An ability to explain the ways in which society regulates the use of
technology;
- An ability to identify stakeholders in an engineering solution;
- An ability to identify moral problems and dilemmas;
- An ability to analyze moral problems from different ethical perspectives;
- An ability to identify the personal values that an individual
holds and uses to resolve moral problems and dilemmas;
- An ability to describe the relation between personal values,
societal values, and professional values.
Four different courses have been developed and approved for this
category with the majority of students taking Philosophy 131.01,
Introduction to Engineering Ethics. This course is an introduction to
engineering ethics, and stresses the application of professional
ethical codes to specific cases. The focus is on the National
Society of Professional Engineer's (NSPE) ethical code but the course
also looks at the professional codes of other professional engineering
organizations. In addition, the course briefly surveys some of the major
ethical theories that have been proposed and discusses the general
relationship between advancing technology and society's ethical
standards.
The addition of the engineering ethics course to the curricula of
BS-CSE majors who entered the university in the last few years is an
important improvement that contributes to outcome (f), in particular
students' understanding of professional, ethical, legal, and social
issues and responsibilities.
- Change in PEO II, III*:
[Related SOs/PEOs: II, III; assessment instruments: IAB input]
At its May 2011 meeting, the Industrial Advisory Board, during its
periodic discussion of the BS-CSE program, suggested a couple of
(modest) revisions to the current PEOs. The first suggestion was to
revise PEO (III) which currently reads, "Graduates will be informed
and involved members of their communities, and responsible engineering
and computing professionals", to refer to professional
societies (such as ACM) rather than, or possibly in addition to, "communities". The second
suggestion was to consider dropping the word "computing" from PEO
(II) which currently reads, "Graduates with an interest in, and
aptitude for, advanced studies in computing will have completed, or be
actively pursuing, graduate studies in computing", to account for
graduates who may pursue, for example, an MBA. We will discuss these
suggestions in early Autumn 2011 in the UGSC
for possible action.
E.3 Improvements in Assessment/Evaluation Processes
- Quality of POCAT questions:
In the earlier discussion of CSE
601, we noted that the original POCAT question
was too simple and that was the reason for the
relatively high scores that students received. But this raises a
general question: is there a way to evaluate the quality of
POCAT questions in general? One obvious measure is
the difficulty of the question, i.e., the percentage of students
who choose the right answer.
But there is a more nuanced measure that we have been considering, one
that is based on a similar measure used elsewhere.
The idea is that since wrong answers in a POCAT question are not simply
arbitrary wrong answers but are supposed to represent common
misconceptions that students harbor, for each answer x of a
question, one can define its discrimination, disc, as
the fraction of students in the top quartile who
selected the answer x minus
the fraction of students in the bottom quartile who
selected the same answer. Thus if x is the correct answer and the question is of high
quality (measured by its difficulty), we would expect most of the
students in the top quartile to pick that answer and few of the students in
the bottom quartile to pick it; thus the disc value should be
positive and relatively high. Similarly, if x is an incorrect
answer, the question is high quality, and x represents a common
misconception, many students in the bottom quartile are likely to pick
it and few in the top quartile; thus its disc should have
a high negative value. On the other hand, if the question is of poor
quality, the disc value for the correct answer might be
close to
0 because both top and bottom students are likely to pick it.
Perhaps most interestingly, if the question is of high quality in terms
of difficulty but x is a wrong answer that does not represent a common
misconception, few students, even in the bottom quartile, will
choose it; thus its disc is unlikely to have a high negative value.
Thus disc provides a good way to evaluate the quality of a POCAT
question not just in terms of its difficulty but also with respect to
the quality of the distractors in it. This is important since good
distractors allow us to identify common misconceptions that students
harbor, which in turn allows us to come up with improvements in the
courses to help overcome those misconceptions. Hence we are revising
the automated tool that generates the summary results so that it
computes all the disc values and highlights those that are out of line.
Having these values (computed by hand) available during the UGSC
discussion of the POCAT results has already been extremely useful in
helping to guide the evaluation.
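The disc computation described above can be sketched in Python. This is a hypothetical illustration only; the actual POCAT tooling, its data format, and its quartile conventions are not documented here, so the function name and inputs below are assumptions.

```python
from typing import Dict, List

def disc_values(scores: List[int], answers: List[str],
                options: List[str]) -> Dict[str, float]:
    """For each answer option x, compute its discrimination (disc):
    the fraction of top-quartile students who selected x minus the
    fraction of bottom-quartile students who selected it.

    scores[i] is student i's overall test score; answers[i] is the
    option that student i selected on this question. (Hypothetical
    data layout, not the actual POCAT format.)
    """
    n = len(scores)
    q = max(1, n // 4)  # quartile size; ties broken by sort order
    # Rank students by overall score, highest first
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    top, bottom = order[:q], order[-q:]
    disc = {}
    for x in options:
        top_frac = sum(1 for i in top if answers[i] == x) / len(top)
        bot_frac = sum(1 for i in bottom if answers[i] == x) / len(bottom)
        disc[x] = top_frac - bot_frac
    return disc
```

On this reading, a well-functioning correct answer has a disc near +1, a distractor capturing a common misconception has a strongly negative disc, and a distractor that nobody picks has a disc near 0.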
- Evaluation of POCAT results:
Each quarter, one key activity for UGSC is arranging the POCAT and
creating the test that will be used. In the first or second week of
the quarter, committee members discuss this briefly, decide a date for
the test, and discuss the questions that should be used on the test,
based on the results from previous tests. The summary results of the
POCAT are compiled within a week of the test administration. At the
following UGSC meeting, the results are evaluated and the conclusions
summarized in the minutes of the meeting. While this approach has
worked,
the evaluation is buried in the UGSC minutes. Hence, when subsequent
POCAT results are discussed, faculty's recollection of the previous
evaluations tends to be somewhat hazy.
In order to address this, we have now adopted the following approach.
For each question for which the results were unexpected or otherwise
led to discussions in the Undergraduate Committee (and beyond),
summaries of the discussion are written up. The summaries are similar
to the ones above (for CSE 321, 601, 655, 670, etc.). The summaries are
maintained in a single web page in reverse chronological order with
the summaries corresponding to the each POCAT
being organized in its own section. Each of these sections also
contains a link to the actual question used in that particular POCAT. In effect,
over time, the page provides a historical view of the changes that
were made to the program and the rationale, in terms of the assessment
results that triggered them and the summary evaluations of the
results, behind the changes. Thus, for example, a new faculty member
in the department can read through this page and get an excellent view
of the evolution of the program and the reasons behind important
changes in the program. (This page is protected since it contains links
to banks of POCAT questions as well as the tests.)
Please send comments, questions, and suggestions to Neelam at cse.ohio-state.edu