
“Education is not a preparation for life; education is life itself.” — John Dewey

Assessing Program-Level Student Learning

Learning is a complex process with cognitive, affective, and social dimensions that involves not only knowledge and understanding, but also values, attitudes, and habits of mind that affect students’ success in and beyond the classroom. Program-level assessment of student learning is conducted by faculty who wish to improve the quality of student learning in their programs and provide justification for the resources needed to maintain them. These assessments involve the systematic collection of information about student learning across courses in an academic program to determine how well students are meeting program learning goals and inform decisions about program content, delivery and pedagogy.

Departmental Learning Assessment Projects

Academic departments conduct two types of assessment projects: (1) an “annual activity” and (2) a comprehensive self-study, conducted once every five to seven years. The latter, known as Academic Program Review, aims to identify a program’s challenges and opportunities in areas of quality, service, efficiency, and resources. Assessment of student learning is just one component of that process. Annual activities, on the other hand, focus much more closely on student learning. One aim of the annual activity is to gather information that enables a program to identify, in a timely fashion, issues that may (or may not) require attention. A second aim is to establish the record of continuous self-assessment needed for comprehensive self-studies. Both projects offer departments the opportunity to monitor their areas of strength and identify opportunities for improvement, and both should involve the assessment of student learning.

Annual Program Assessment Activity

When designing a learning assessment activity, it is important to consider the following: (1) assessment is an opportunity to address faculty concerns related to student learning and should start with genuine questions; (2) it is important to articulate the learning goals of the program and the ways in which progress on these goals can be meaningfully measured; and (3) good assessment is manageable and sustainable. It is neither prudent nor productive to measure everything, and it is unlikely that you will be able to assess progress on all of a program’s learning goals in a single year. Remember that learning assessment is an ongoing process, and create a long-term strategy for assessing your program’s learning goals.

“Assessment must not be treated as an end in itself, but rather as one (albeit important) means for educational improvement. Educational values should guide not only what instructors choose to assess but also how they do so.” (APA on Outcomes Assessment, 2008)

There are two fundamentally different types of program-level assessment activities: (1) Activities that explore ways in which the program is meeting its mission and goals, identifying both strengths and opportunities for improvement, and (2) Activities that determine the effects of an intervention that has been carried out in order to strengthen the program in some way, such as changes in curriculum or instructional methodology. Naturally, the second type can be evaluated meaningfully only if it is preceded by an activity of the first type.

Annual Activity Guidelines

1. Define the mission/goals of the department or program.

If a department has not done so recently, it should review and possibly update its mission statement. Does the mission identify the student population the program intends to serve? Does it say what students should achieve by taking part in the program? Mission statements should also be consistent with the Queens College mission statement and the University’s. If the department has not yet identified its program learning goals, define the goals students are expected to achieve as a result of participating in the program, along with the corresponding desired and observable outcomes that indicate how progress on these goals can be meaningfully measured, also known as student learning objectives or outcomes (SLOs).

[SLOs: Statements of observable or measurable behaviors that indicate the level of mastery students are expected to demonstrate as a result of a learning experience.]

2. Establish how well the curriculum accomplishes its mission/goals.

Develop a Curriculum Map (also known as a Degree Map): A Curriculum Map identifies how the courses in a degree program contribute to the program’s learning goals and addresses questions such as:

  • Is the curriculum well-sequenced for student learning?
  • What does the curriculum say about our student learning priorities?
  • How do the required learning experiences of our curriculum shape a graduate of our program?

For faculty, the process of curriculum mapping facilitates understanding of how courses are sequenced and “fit” together, and shifts the focus from “my course” to “our degree program.” Mapping also supports curriculum revision, as the activity often reveals curricular strengths, gaps, necessary and unnecessary overlaps, and needs (e.g., Do students have enough practice to achieve a goal?). For students, curriculum maps facilitate understanding of how (1) each course is intended to contribute to specific knowledge or skills of the program, and (2) elective courses could be strategically selected to further strengthen knowledge or skills or to explore new areas of interest.

A common method for developing a basic curriculum map is to:

  • Identify the courses you will be mapping to your program learning goals
  • List the courses in a row across the top of an Excel Sheet/chalkboard/sheet of paper
  • Place program learning goals in a column down the right side
  • Convene the faculty teaching within your degree program for 1 to 2 hours
  • Have faculty identify the program learning goals they are covering in their courses
  • Transfer the finished map into Excel
  • ALWAYS follow up the creation of a map with an examination of the map

Carnegie Mellon’s Eberly Center offers a great Blank Mapping Tool (xls).
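
For departments that want to examine a map programmatically, the same grid can be built in a few lines of code. The Python sketch below uses hypothetical course and goal names (illustrative placeholders, not an actual curriculum) to print a course-by-goal matrix and flag goals with little or no coverage, the kind of follow-up examination the last step above calls for.

```python
# A minimal curriculum-map sketch. Course and goal names are hypothetical
# placeholders, not an actual degree program.
courses = ["101 Intro", "201 Methods", "301 Seminar", "400 Capstone"]
goals = ["Disciplinary knowledge", "Written communication", "Research skills"]

# What faculty report covering in each course (the meeting's raw output).
coverage = {
    "101 Intro": {"Disciplinary knowledge"},
    "201 Methods": {"Disciplinary knowledge", "Research skills"},
    "301 Seminar": {"Written communication"},
    "400 Capstone": {"Written communication", "Research skills"},
}

# Print the map as a grid: rows are goals, columns are courses.
print("Goal".ljust(25) + "".join(c.ljust(14) for c in courses))
for goal in goals:
    marks = "".join(("X" if goal in coverage[c] else "-").ljust(14) for c in courses)
    print(goal.ljust(25) + marks)

# Examine the map: flag goals with no coverage or only one supporting course.
for goal in goals:
    n = sum(goal in coverage[c] for c in courses)
    if n == 0:
        print(f"GAP: no course addresses {goal!r}")
    elif n == 1:
        print(f"THIN: only one course addresses {goal!r}")
```

The same grid transfers directly into Excel; the value lies in the examination step, not the spreadsheet itself.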

Design an Assessment Activity: When designing an assessment activity, consider both the logistics of the activity (How many classes are you assessing? What will be necessary to ensure that the students complete the assessment?) and its validity (How seriously will students take the assessment? How will students and instructors perceive it?). Remember that how easy an outcome is to measure has little to do with how worthy it is of measurement; avoid easy measures that do not sufficiently mirror the complexity of student learning.

Explain the procedures used to determine whether the curriculum is effective in achieving the learning goals sought by the department. Depending on the mission of the department, these methods may include in-course evaluations of student achievement, capstone courses, and measurement of student attitudes and learning through surveys. The Office of Institutional Effectiveness can provide assistance with web-based surveys. Copies of surveys or other instruments used should be included in appendices. Examples include:

  • Curriculum coverage (but be mindful of Barr and Tagg’s caution that delivering instruction is not the same as producing learning)
  • Retention and grades
  • Attainment of learning goals (see the sketch following this list)
  • Comparison to other departments, institutions, or national norms
  • Placement success: jobs and salaries; graduate school admission
  • Alignment with industry/professional needs

See the section on Program-Level Assessment Methods below for more example measures of student learning.
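
As a concrete illustration of the “attainment of learning goals” measure listed above, the following Python sketch aggregates de-identified scores against a benchmark. The goal names, scores, and the 70-point benchmark are all hypothetical; a department would substitute its own data and its own locally agreed criterion.

```python
# Hypothetical de-identified scores (0-100) on assessments aligned to two
# program learning goals. The 70-point benchmark is an illustrative local choice.
scores = {
    "Quantitative reasoning": [82, 55, 71, 90, 64, 77],
    "Written communication": [68, 74, 88, 59, 80, 91],
}
BENCHMARK = 70

for goal, vals in scores.items():
    attained = sum(v >= BENCHMARK for v in vals)
    pct = 100 * attained / len(vals)
    print(f"{goal}: {attained} of {len(vals)} students at or above benchmark ({pct:.0f}%)")
```

Reporting the share of students at or above a benchmark, rather than an average alone, keeps the focus on how many students actually attain the goal.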

3. Identify where the department or program can improve and make suggestions for the future (“Close the Loop”)

An essential part of assessment is to use the information obtained either to publicize the program’s strengths or to identify opportunities for improvement and to act on them. If the activity identified a noteworthy strength, how will this result be disseminated? If the activity reveals an area of concern, what concrete actions will the department take to address it? Use assessment results to spark meaningful conversation among faculty as to what steps should be taken to enhance student learning, and how implemented changes might be assessed in the future.

It is also important to remember that no individual faculty member has sole responsibility for ensuring that students achieve one or more of the program’s learning goals. Student attainment of learning goals should result from the collective learning experiences students engage in during their time at the college. Therefore, learning assessment must not be used to evaluate individual faculty members; it measures the performance of an instructional program as a whole.

Learning Assessment Feedback

Often, assessing progress on program learning goals involves a series of decisions dependent on your sample size, resources, needs, and other project variables. The Office of Institutional Effectiveness can provide feedback to help faculty conduct meaningful and worthwhile assessment activities.

Feedback on annual activities:

  • Is the activity clearly related to the mission and goals?
  • Is the program prepared to carry out the activity?
  • Is the scope of the activity suitable for completion in the timeframe planned?
  • For assessments that span multiple years, how well does the activity integrate with the overall project?
  • Will it provide meaningful, actionable results?
  • Can the usefulness of the activity be improved?

Feedback on annual reports:

  • In hindsight, was this a useful activity? 
  • Does the report indicate clearly what actions the academic unit will be taking, based on the results of this activity?
  • Are there suggestions for future annual activities based on the results of this one?

Academic Program Review (APR Self-Study)

Academic Program Review (APR Self-Study), which falls under the purview of the Provost’s Office, is a comprehensive self-study, conducted once every five to seven years, aimed at identifying a program’s challenges and opportunities in areas of quality, service, efficiency, and resources. The process is one of the primary means by which Queens College maintains the quality of our academic programs.

Assessment of student learning is an important part of this process. For every course and every academic program, faculty set learning goals for their students. Progress toward these goals is evaluated when grades are assigned. However, grading is the analysis and use of data to make judgments about an individual student’s performance. If we want to consider how the quality of student learning might be improved in a course or academic program, we need a way to measure, and think about, learning overall. The process involves the systematic collection of information about student learning across courses in an academic program to determine how well students are meeting program learning goals and to inform decisions about program content, design, pedagogy, and resource allocation. Findings support reflection on how learning can be improved within a particular program and provide justification for the resources needed to maintain it.

While the process can be burdensome and demanding for departments, as well as expensive and time-consuming for the administration, the knowledge gained is vital to the improvement of programs and the allocation of scarce college resources. A good program review should yield an assessment of all major features of an academic department, including its faculty, students, curriculum, research, and support services, and provide the department with a plan for its future development.

Academic Program Review Schedule

Academic Program Review Guidelines

Program-Level Assessment Methods

When properly aligned to learning goals, assessment methods help us determine whether our students are learning what we think they’re learning, as well as how well program content, delivery, and pedagogy support students’ opportunities to achieve program learning goals. Methods will vary depending on the learning goal(s), and progress should be measured using several methods. Aim to measure both directly and indirectly: direct measures capture how well students demonstrate that they have achieved a learning goal, while indirect measures capture students’ (or others’) perceptions of how well they have achieved it.

Examples of Direct Measures
  • Capstone Courses: Senior seminars or designated assessment courses where program learning goals are integrated into assignments.
  • Collective Portfolios: Faculty assemble samples of de-identified student work from various courses and use this “collective portfolio” to assess specific program learning goals. Portfolios should be assessed using normed scoring rubrics and expectations should be clarified before portfolios are examined.
  • Content Analysis: A procedure that analyzes the content of written documents. The analysis begins with identifying the unit of observation, such as a word, phrase, or concept, and then creating meaningful categories to which each item can be assigned. For example, a student’s statement that “I learned that I could be comfortable with someone from another culture” could be assigned to the category “Positive Statements about Diversity.” The number of times this type of response occurs can then be quantified and compared with neutral or negative responses addressing the same category.
  • Course Assessment: Data collected from de-identified course assessments can be analyzed to assess program learning goals if the assessments are aligned to those goals.
  • Embedded Questions in Assignments: Questions related to program learning goals are embedded within course exams or assignments. For example, all sections of “research methods” could include a question or set of questions relating to your program learning outcomes. Faculty score and grade the exams as usual and then copy the questions linked to the program learning outcomes for analysis. The findings are reported in the aggregate.
  • Locally developed essay questions: Faculty develop essay questions that align with program learning goals. Performance expectations should be made explicit prior to obtaining results.
  • Locally developed exams: Faculty create an exam that is aligned with program learning goals. Performance expectations should be made explicit prior to obtaining results.
  • Normed Scoring Rubrics: When developed and normed by faculty, these rubrics can be used to holistically score any product or performance, such as essays, portfolios, recitals, oral exams, or research reports. A detailed scoring rubric that delineates the criteria used to discriminate among levels is developed and used for scoring. Generally, two raters review each product, and a third rater is employed to resolve discrepancies (a minimal sketch of this procedure follows this list).
  • Observations: Observations can be made of any social phenomenon, such as student presentations, students working in the library, or interactions at student help desks. They can be recorded as a narrative or in a highly structured format, such as a checklist, and should be focused on specific program objectives.
  • Primary Trait Analysis: A process of scoring student assignments by defining the primary traits to be assessed and then applying a scoring rubric for each trait.
  • Standardized Achievement and Self-Report Tests: Select standardized tests that are aligned to your specific program learning goals. Score, compile, and analyze data. Develop local norms to track achievement across time and use national norms to see how your students compare to those on other campuses.

Source: Allen, Mary; Noel, Richard C.; Rienzi, Beth M.; and McMillin, Daniel J. (2002). Outcomes Assessment Handbook. California State University, Institute for Teaching and Learning, Long Beach, CA.
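
The two-rater procedure described under Normed Scoring Rubrics above can be written down as an explicit decision rule. The Python sketch below is a minimal, hypothetical version: it assumes ratings on a small integer rubric scale, treats ratings more than one point apart as discrepant, and averages the third rating with the closer of the two originals. Both the discrepancy threshold and the resolution rule are local choices that a department should fix during norming, before scoring begins.

```python
# Minimal two-rater adjudication sketch for normed rubric scoring.
# Assumes a small integer rubric scale (e.g., 1-4); ratings more than one
# point apart count as discrepant and go to a third rater. Both the
# threshold and the resolution rule are illustrative local choices.
def final_score(rater1, rater2, third=None):
    if abs(rater1 - rater2) <= 1:
        return (rater1 + rater2) / 2  # raters agree closely: average them
    if third is None:
        raise ValueError("discrepant ratings require a third rater")
    # Adjudicate: average the third rating with whichever original is closer.
    closer = min((rater1, rater2), key=lambda r: abs(r - third))
    return (closer + third) / 2

print(final_score(3, 4))           # 3.5 (within one point: no adjudication)
print(final_score(1, 4, third=3))  # 3.5 (third rater resolves the discrepancy)
```

Departments differ on the resolution rule (some average all three ratings, others let the third rating stand alone); the point is to agree on the rule in advance, as the norming step requires.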

Examples of Indirect Measures
  • Curriculum Maps: Curriculum Maps (or Degree Maps) are used to summarize the relationship between program goals and courses, course assignments, or course syllabus objectives to examine congruence and to ensure that all objectives have been sufficiently structured into the curriculum.
  • Focus Groups: A series of planned discussions among homogeneous groups of 6-10 students who are asked a carefully constructed series of open-ended questions about their beliefs, attitudes, and experiences. The session is typically recorded and later transcribed for analysis. The data is studied for major issues and recurring themes, along with representative comments.
  • Interviews: Conversations or direct questioning with a student or a group of students. Interviews can be conducted in person or by telephone. Interviewers should be trained to follow agreed-upon procedures (protocols).
  • Surveys: Surveys can be used to (a) assess learning, (b) assess student needs, (c) obtain student feedback, and (d) find out what happens to students after graduation. A program assessment survey should cover a number of topics, including program goals, courses, instructors, educational supports, DEI, and career prep/professional development. Please request assistance with survey design and/or administration from OIE. If designing your own instrument, please consult Best Practices in Survey Research, as good data only comes from good methodology. See our Program Assessment Survey Template or our Alumni Outcomes Survey Template for examples of appropriate question items and answer options (a minimal sketch of summarizing survey responses follows this list).
  • Transcript Analysis: Transcripts are examined to see whether students followed expected enrollment patterns, or to examine specific research questions, such as differences between transfer students and students who entered as freshmen.

Source: Allen, Mary; Noel, Richard C.; Rienzi, Beth M.; and McMillin, Daniel J. (2002). Outcomes Assessment Handbook. California State University, Institute for Teaching and Learning, Long Beach, CA.
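
For survey results, a useful first pass is a per-item response distribution and mean. The Python sketch below tallies hypothetical 5-point Likert responses to two illustrative items; in practice, instruments and data would come from the OIE templates and assistance mentioned above.

```python
from collections import Counter

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly
# agree) to two illustrative survey items; real instruments would come from
# the OIE templates mentioned above.
responses = {
    "Courses built toward the program's goals": [5, 4, 4, 3, 5, 2, 4],
    "Advising supported my progress": [3, 2, 4, 3, 5, 3, 2],
}

for item, vals in responses.items():
    dist = Counter(vals)
    mean = sum(vals) / len(vals)
    spread = "  ".join(f"{k}:{dist.get(k, 0)}" for k in range(1, 6))
    print(f"{item}\n  mean {mean:.2f} | responses {spread}")
```

Reporting the full distribution alongside the mean helps distinguish broad agreement from a polarized split that averages to the same number.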

To learn more about program-level learning assessment methods, browse our Assessment Resources or peruse guides to program-level assessment at other colleges, such as Clark College, Carnegie Mellon, Cornell, or Rochester Institute of Technology. The Office of Institutional Effectiveness can support your assessment projects with institutional data, surveys, and applied research expertise.

Assessing Student Learning in Our General Education Program

General education at Queens College is an education in the liberal arts and sciences: courses that introduce students to the perspectives and knowledge of many disciplines. Our goals for providing students an education in the liberal arts and sciences have endured since the college was founded in 1937, even as the courses and requirements have changed over the years. Since 2013, entering freshmen and transfer students have followed a liberal arts curriculum that fits the framework of the CUNY Pathways Initiative. Pathways course proposals are reviewed by several committees to ensure that they achieve the required set of Pathways Student Learning Outcomes, which represent the broad skills and knowledge that all undergraduate students are expected to attain as a result of their educational experiences at the College.

Because the general education program is the core of the undergraduate student experience, its assessment is of utmost importance. A recent Pathways Syllabi Project and Assessment of English 110 (First Year Writing at QC) indicate that we need to rethink the general education program at QC. To this end, the Curriculum Strategic Planning Committee, a faculty- and staff-led working group, is tasked with determining how our General Education curriculum might be improved.