
“The fact that an opinion is widely held is no evidence whatsoever that it is not utterly absurd.” – Bertrand Russell

Glossary of Terms

Alignment: The process of analyzing how explicit criteria line up or build upon one another within a particular learning pathway.

Assessment: The process of gathering qualitative and/or quantitative data to systematically gauge the effectiveness of the institution and its progress towards its academic and operational goals. Assessment includes:

  • General Education learning assessment
  • Program-Level learning assessment
  • Course-Level learning assessment
  • Administrative assessment

Assessment Cycle: A process that supports evidence-based decision-making, which includes identifying goals and objectives, mapping intended outcomes to assessment methods, analyzing the data, and using assessment findings to inform decisions.

Assessment Liaison: A faculty member or administrator who serves as the coordinator of assessment activities in their academic department or administrative unit.

Assessment Method: A strategy for providing evidence of progress towards a goal. This may involve either:

  • Direct Measures: Products that provide direct evidence of an outcome, such as pre- and post-evaluations.
  • Indirect Measures: Perceptions of an outcome rather than actual demonstrations of outcome achievement, such as surveys and interviews.

Assessment Plan: A document that outlines the goals and assessment methods used to collect evidence related to measuring progress towards a goal, and the timeline at which the evidence is collected and reviewed.

Bloom’s Taxonomy: A framework for categorizing levels of knowledge. The framework consists of six categories:

  • Knowledge: Recalling or remembering information.
  • Comprehension: Explaining ideas or concepts.
  • Application: The ability to put ideas and concepts to work in solving problems.
  • Analysis: Breaking down information into its component parts, determining how the parts relate to one another or to the overall structure or purpose.
  • Synthesis: The ability to put parts together to form something original.
  • Evaluation: Judging the value of evidence based on definite criteria.

Closing the Loop: The process of using assessment results to inform and document evidence-based decision-making.

Course Assessment: The collection of data aimed at measuring successful learning across students in an individual course with the ultimate goal of improving overall learning and pedagogical practice.

Course Map: A matrix that connects the goals of a particular course to the learning activities and assessment methods within the course.

Curriculum Map: A matrix that aligns courses to the goals of an academic program in order to show which learning goals are taught and when. Also known as Degree Maps, curriculum maps reveal where there may be gaps or over-concentration in the curriculum and provide a road map for program-level assessment by documenting which program outcomes have been assessed.

Exit Interviews: A survey or interview of graduating students to obtain feedback. This data is usually used to address strengths and weaknesses of a program or major.

Focus Groups: A facilitated discussion built around a carefully constructed series of open-ended questions that ask participants about their attitudes, beliefs, and experiences.

Formative Assessment: The process of monitoring student learning and providing ongoing feedback and adjustments throughout the course.

Institutional Learning Outcomes / Objectives (ILOs): Statements of observable or measurable behaviors that indicate the knowledge, skills, and habits of mind all students are expected to demonstrate as a result of their overall educational experience. ILOs describe an institution’s college-wide learning goals in concrete terms. At QC, our Mission and Values inform our Cross-Disciplinary Learning Goals, which represent our ILOs.  

Learning Assessment: A systematic process of collecting and analyzing data about student performance on tasks aligned to learning objectives, used to inform pedagogy, curriculum, and other decisions intended to improve the quality of student learning in a course or academic program; the process includes documenting the decisions made.

Learning Goals: Broad statements of what we expect students to know and be able to do as a result of completing a course or program.

Learning Objectives: The operational definition of a learning goal; the outcome of which can be observed and measured.

Logic Model: An explicit, often visual, statement that describes the theory of change underlying actions taken to meet a goal and the implicit assumptions about the linkages between what an academic program or administrative unit does and its expected outcomes.

Mission Statement: A broad statement of the aims and values of an academic program or administrative office. A mission statement should 1) provide a clear description of purpose, and 2) identify who the program/office serves.

Norming: A process that brings a group of faculty raters together to decide how to assess student work in a consistent way, so that regardless of which rater assesses the work, the rating falls within a close range. The process is generally overseen by a facilitator, someone familiar with norming processes, using a rubric or similar scoring tool. The aim is to come to agreement on how standards of performance are applied to student work.

Objective: The operational definition of a goal or intended outcome; the outcome of which can be observed and measured via evidence.

Portfolio: A systematic and organized collection of a student’s work that exhibits direct evidence of the student’s efforts, achievements, and progress over a period of time. Electronic versions are referred to as E-portfolios.

Program Assessment: The systematic collection of data aimed at measuring successful learning within a major or other learning program with the ultimate goal of improving student learning and educational practice.

Qualitative Data: Descriptive information rather than numeric data. Examples of qualitative data include responses to open-ended survey or interview questions; evaluations of writing samples, portfolios, or formal recitals; participant observations; ethnographic studies.

Quantitative Data: Numeric information that may be analyzed using statistical techniques. Examples of quantitative data include GPA, grades, exam scores; forced-choice survey responses; demographic information; standardized teaching evaluations.

Reliability: As applied to an assessment tool, it refers to the extent to which the tool can be counted on to produce consistent results over time.

Rubric: A type of scoring guide that articulates specific components and expectations for an assignment. A rubric can improve the consistency and accuracy of assessments conducted across multiple settings.

Self-Assessment: A metacognitive process in which a student engages in a systematic review of what they are learning and how well they are learning it.

Student Learning Outcomes / Objectives (SLOs): Statements of observable or measurable behaviors that indicate the level of mastery students are expected to demonstrate as a result of a learning experience; i.e., the operational definition of a learning goal.

Summative Assessment: Evaluations of student learning at the conclusion of an instructional period, such as a course or program. Summative assessments are almost always formally graded and often heavily weighted.

Surveys: A method of collecting self-reported information regarding attitudes, beliefs, experiences, values, etc.

Triangulation: A method of analysis commonly recommended to overcome issues of validity, whereby evidence from at least two measures is collected. If the measures lead to the same conclusion, then the level of uncertainty is reduced. One or more of the triangulated measures should be direct.

Validity: As applied to an assessment tool, it refers to a judgment concerning the extent to which the assessment tool measures what it purports to measure. The validity of a tool can never be proved absolutely; it can only be supported by an accumulation of evidence.

 

Sources:

Allen, M. J. (2004). Assessing Academic Programs in Higher Education. Bolton: Anker.

Frechtling, J. A. (2007). Logic Modeling Methods in Program Evaluation (Vol. 5). Jossey-Bass.

The Academic Senate for California Community Colleges. SLO Terminology Glossary: A Resource for Local Senates.

Stassen, M. L. A., Doherty, K., & Poe, M. (2001). Program-Based Review and Assessment. http://www.umass.edu/oapa/oapa/publications/online_handbooks/program_based.pdf

Walvoord, B. E. (2004). Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. Jossey-Bass.