Assessment Glossary of Terms

 

A

Academic Program: A degree- or certificate-bearing educational experience.

Accreditation: Approval granted by an official regional or national review board after the program or institution meets specific requirements or standards. See Accreditor.

Accreditor: An official regional or national review board recognized by the Council for Higher Education Accreditation. Accreditors establish standards for accreditation that a program or institution must meet to be granted accreditation.

Alignment: A connection between the curriculum and an intended outcome. This can be a connection between program-level objectives and course-level objectives, or between assessment measures and program- or course-level objectives. Alignment ensures that what is taught in a course or program is designed to achieve the intended outcomes and that direct and indirect assessments measure student progress on a learning objective. A curriculum map can visualize alignments by showing when, where, and how students develop a learning objective in the curriculum and when, where, and how students' learning development is measured.

Assessment: 1. An iterative process that involves collection of and reflection on students’ learning data and using that information to advance student success. The process produces evidence of student achievement of program or institution learning objectives; 2. An activity designed to measure student achievement in an educational experience. Examples include, but are not limited to, assignments, exams, portfolios, performances, projects, lab reports, and papers.

Assessment Plan: A document that outlines the following:

  • Program-level student learning objectives
  • Courses and other educational experiences that contribute to students' ability to reach the desired program outcomes (this information is also included in the curriculum map)
  • The assessments that will be used to measure student success
  • The target performance goal
  • The individual(s) responsible for gathering data
  • A timeline for measuring student success for each objective (programs assess at least one learning objective per year)

Authentic Assessment: Determining the level of student learning by evaluating their ability to perform a “real world” task in the way professionals in the field would perform it. Authentic assessments typically have multiple acceptable solutions. Examples include, but are not limited to, developing a business plan, creating a patient treatment plan, auditioning for an arts performance, teaching in the classroom, and creating a solution to a discipline-related problem.

B

Benchmark: A point of reference for measurement; a standard of achievement against which to evaluate or judge performance.

C

Capstone Course/Experience: A class or experience designed to help students demonstrate comprehensive learning.

Closing the loop: Using assessment results for improvement and determining the success of improvement measures. After assessing student success and identifying areas for potential improvement, results are used to make changes; student learning is then reassessed once students have experienced the changed program to determine whether the change contributed to improved student performance.

Course learning objectives (CLOs): Specific and measurable statements about the knowledge, skills, and attitudes students are expected to have after successfully completing the course. These statements should align with the program-level learning objectives.

Criterion-referenced: Assessment in which student performance is compared to a pre-established performance standard rather than against the performance of other students. See Norm-referenced for comparison.

Culture of assessment: A pervasive view that assessment activities are a means for continuous self-improvement through recognition of our strengths and weaknesses. The American Association of University Professors identifies 15 elements that contribute to a culture of assessment, including general education goals, common use of assessment terms, faculty ownership, ongoing professional development, administrative support and understanding, a practical and sustainable assessment plan, systematic assessment, student learning outcomes, comprehensive program review, assessment of co-curricular activities, institutional effectiveness, information sharing, planning and budgeting, and celebration of success.

Curriculum Map: A matrix that shows where each program-level student learning objective is addressed and measured in the curriculum or educational experience, within or outside of courses. A curriculum map can visualize alignments by showing when, where, and how students develop a learning objective in the curriculum and when, where, and how students' learning development is measured. The map also helps ensure a course or program is designed to give students ample opportunities to achieve the learning objectives.

D

Direct Assessment: Collecting evidence on students’ actual knowledge, skills, and behaviors. Direct data-collection methods provide evidence in the form of student products or performances. Such evidence demonstrates the actual learning that has occurred relating to a specific content or skill. Examples include, but are not limited to, exams, assignments, projects, papers, and presentations. See Indirect Assessment for comparison.

Distractors: Incorrect answer options on a multiple-choice question. The best distractors are those that could be reasonable answer options. Distractors help students and instructors learn what is misunderstood or where thinking may have gone wrong.

E

Evaluation: A value judgment; a statement about quality; a statement about merit and worth. In contrast, assessment is a reflective process to advance student success.

Evidence (of learning): Documentation of student learning. Evidence is typically divided into direct and indirect measures.

F

Focus Group: A qualitative data-collection method using facilitated discussions in which participants are asked a series of open-ended questions about their experiences, perceptions, and beliefs. Focus groups are typically considered an indirect data-collection method.

Formative Assessment: Assessment occurring during the learning process intended to improve student performance. Formative assessments are typically no or low stakes. Formative assessment is used internally, primarily by those responsible for teaching a course, to identify ways to advance learning. See Summative Assessment for comparison.

G

Goals: General expectations for students. Effective goals are broadly stated, meaningful, and achievable.

H

High Stakes Assessment: Any assessment whose results have important consequences for students, teachers, programs, etc. Results of the assessment will be used to determine whether a student should pass a course, graduate, receive certification, etc. These measures may be externally developed and based on set standards. Examples include, but are not limited to, capstone projects, exit exams, and licensure exams.

I

Indirect Assessment: Evidence or data collected through perceptions of student mastery of learning outcomes; such evidence does not imply or ensure that learning has occurred. Indirect measures do not include artifacts of student learning, such as assignments and exams. Examples include, but are not limited to, surveys, focus groups, and interviews. See Direct Assessment for comparison.

Institution-level learning objectives: Specific statements of learning goals that every student at the university, regardless of program, is expected to achieve by graduation.

K

Key performance indicator (KPI): A quantifiable measure for assessing achievement of a pre-determined goal. In academic program assessment, a KPI is a measure of student success aligned with the program's goals and priorities. KPIs typically reflect longer-term goals, often 3-5 years, rather than immediate outcomes. KPIs provide a framework for determining outcomes but are not specifically tied to the curriculum or a single course. Examples include, but are not limited to, student satisfaction, sense of belonging, retention rate, graduation rate, job placement rates, diversity within the major, licensure pass rate, and number of internship opportunities.

L

Learning objectives: Statements that identify the intended knowledge, skills, or attitudes that students will be able to demonstrate, represent, or produce because of a given educational experience. Objectives should be specific, measurable, achievable, relevant, and time-bound. Institution, program, and course learning objectives are common examples.

N

Norm-referenced: Assessment in which student performance is compared to a larger group, such as a national sample. A norm-referenced assessment ranks students rather than measuring achievement against a pre-established standard. See Criterion-referenced for comparison.

Norming: The process of educating raters to evaluate student performance and produce consistent and dependable scores. Often referred to as rater training. Typically, this process uses criterion-referenced standards and rubrics. Raters need to participate in norming sessions before scoring student performance.

O

Objective: Clear, concise statements that identify the intended knowledge, skills, or attitudes that students will be able to demonstrate, represent, or produce because of a given educational experience. Objectives should be specific, measurable, achievable, relevant, and time-bound. Institution, program, and course learning objectives are common examples.

Objective assessment: An activity measuring student achievement through items with only one correct answer. Examples include, but are not limited to, multiple choice or true/false questions. See Subjective Assessment for comparison.

P

Performance assessment: A type of assessment in which students are asked to demonstrate their skills rather than describe or explain them. Performance assessments typically have two components: assignment guidelines that tell students what they are expected to do or produce, and the criteria (in the form of a rubric) used to evaluate student work.

Performance target: The achievement goal for a specific assessment measure; the minimum score or level of achievement a program or instructor considers a demonstration of success. Example: 75% of students will score ≥4 out of 5 on the portion of the rubric pertaining to critical thinking.

Portfolio: A systematically curated and reviewed assessment of student learning containing multiple artifacts of student learning. Portfolios often include samples of student work in combination with a reflective statement about the work, its impact on learning, how it demonstrates growth, and how it integrates with other activities, disciplines, or experiences.

Program: A degree- or certificate-bearing educational experience. Examples include Associate degree, Bachelor's degree, Master's degree, doctoral degree, and graduate certificate.

Program Assessment: A continuous process designed to monitor and advance student success. Program assessment starts with defining the knowledge, skills, and attitudes students will gain through an educational experience; aligning educational experiences and measures of learning to those outcomes; collecting and reviewing data measuring student learning; reflecting on the data; and using results to improve student learning.

Program Description: A short narrative that describes the unique purpose of the program. The description may include wording from program goal/purpose and/or certifications/career paths that graduates may be eligible for after successfully completing the program.

Program-level learning objectives (PLOs): A specific and measurable statement of what a student will know, be able to do, or the attitudes they will possess at the end of a degree- or certificate-bearing educational experience. PLOs should align with the program's mission and goals.

Program mission: Overarching aspirational statement about the program. The statement often includes information about the reason for the program. A program’s mission connects to the school/college’s and university’s mission.

Program plan of study: A clearly outlined path to obtaining a degree, certificate, or diploma. The plan typically includes course requirements, course sequencing, and course descriptions.

R

Reliability: A mathematical calculation of consistency, stability, and dependability for a set of measurements.

Rubric: A tool for measuring student learning across one or more categories. Many rubrics are shaped like a matrix with categories listed in the first column and levels of achievement across the top row. For each category and each level of achievement within the category, a concise and observable description is provided. Rubrics provide students with performance expectations, particularly when the rubric is shared with students in advance.

S

Student learning objectives (SLOs): A specific and measurable statement of what a student will know, be able to do, or the attitudes they will possess at the end of an educational experience.

Subjective assessment: An assessment with a variety of correct answers. Examples include, but are not limited to, an essay, presentation, performance, or project. See Objective Assessment for comparison.

Summative Assessment: Measurements of student learning gathered at the conclusion of a module/unit, course, or academic program to determine whether or not learning objectives have been achieved. Summative assessments provide information on the performance of an individual student or all students within the course or program. See Formative Assessment for comparison.

T

Test blueprint: A list of learning goals addressed on an exam or quiz aligned to the individual questions or items. The blueprint is often in table format. The purpose of the blueprint is to ensure the exam or quiz tests the intended learning objectives and distributes the test items to give appropriate emphasis to various knowledge and skills being assessed.

V

Validity: The extent to which a tool accurately measures what it is intended to measure, and whether the interpretation and intended use of assessment results are logical and supported by theory and evidence.

 

 

Helpful link: Internet Resources for Higher Education Outcomes Assessment

Sources:

  • Allen, M. (2008). Assessment Workshop at UH Manoa on May 13-14, 2008
  • American Association of University Professors. Establishing a culture of assessment. https://www.aaup.org/article/establishing-culture-assessment
  • American Psychological Association, National Council on Measurement in Education, & the American Educational Research Association. (1999). Standards for educational and psychological testing. Washington DC: American Educational Research Association.
  • CRESST Glossary. http://cresst.org/publications/cresst-publication-3137/
  • Gantt, P.A. Portfolio Assessment: Implications for Human Resource Development. University of Tennessee.
  • James Madison Dictionary of Student Outcomes Assessment. https://www.jmu.edu/curriculum/self-help/glossary.shtml/
  • Leskes, A. (Winter/Spring 2002). Beyond confusion: An assessment glossary. Peer Review, AAC&U.org
  • Middle States Commission on Higher Education. (2007). Student learning assessment: Options and resources (2nd Ed.). Philadelphia: Middle States Commission on Higher Education.
  • Mount San Antonio College Institution Level Outcomes. http://www.mtsac.edu/instruction/outcomes/ILOs_Defined.pdf
  • Northern Illinois University Assessment Terms Glossary. http://www.niu.edu/assessment/resources/terms.shtml#A
  • Palomba, C.A. & Banta, T.W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
  • Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). John Wiley & Sons.