
Assessment at Chandler-Gilbert Community College: Overview



What Does Assessment At Chandler-Gilbert Look Like?

  • What is college-wide general education student learning outcome assessment?
  • General assessment methods as they relate to general education student learning outcome assessment.
  • The assessment process for general education student learning outcomes at Chandler-Gilbert Community College.
  • Closing the loop: using assessment results to improve curriculum, instruction, or assessment methods.

At Chandler-Gilbert Community College, we are committed to a culture of assessment in which we seek continuous improvement to support student success.

Assessment is focused on a concept called "Continuous Improvement": assessments are conducted continuously and revised based on the data collected. The changes are usually minor from year to year, with more substantial changes occurring over multiple assessment cycles. Expect some revision during each assessment cycle, but the change should be incremental and manageable.

Steps Toward Creating a “Culture of Assessment” at an Institution*

Where is your unit in developing a culture of assessment?

Denial: It's a fad. If I ignore it, it will go away.
Acceptance: OK. I guess we have to do it.
Resistance: I feel threatened. My department feels threatened. My campus feels threatened. Can I subvert it by not participating in the process or in some other way?
Understanding: Maybe we can learn something useful. Can we use what we've already been doing?
Campaign: We have a plan. Maybe it's not perfect, but let's get moving!
Collaboration: We have a plan with long-range objectives that are clearly defined, and based on our experiences with assessment, we believe it works.
Institutionalization: We can't imagine working without assessment. It's a permanent part of our institutional culture.

*Adapted from Allen, M. J. (2004). Assessing academic programs in higher education (p. 7). Bolton, MA: Anker Publishing Company.


Institutional Commitment

Chandler-Gilbert Community College will not use assessment results punitively or as a means of determining faculty or staff standing, position, or hiring preference. The purpose of assessment is to measure student learning, not to evaluate faculty or staff.

Chandler-Gilbert Community College will not use assessment in a way that will impinge upon the academic freedom or professional rights of faculty. Individual faculty members must continue to exercise their best professional judgment in matters of pedagogy, curriculum, and assessment.

Chandler-Gilbert Community College will not assume that assessment can answer all questions about all students.

Chandler-Gilbert Community College will not use assessment merely to be accountable to external entities. Assessment must involve ongoing observation of what we believe is important for learning.


Program Review

The major purpose of Program Review is to ensure high quality educational programs. Program Review is a process that evaluates the status, effectiveness, and progress of programs and helps identify the future direction, needs, and priorities of those programs. It should integrate assessment of student learning into plans to promote positive and forward-looking change in programs.

Program Review is closely connected to strategic planning, resource allocation, and other decision-making at the program, division, and college levels. The Program Review process should focus on improvements that can be made using resources currently available to the program. Where proposed improvements or expansions would require additional resources, the need for and priority of those resources should be clearly specified.

Program Reviews should be completed every 3 years, unless a different schedule is dictated by an outside accreditation body. Approximately one-third of the Career and Technical Education (CTE) programs will be reviewed annually.


Assessment Tools at CGCC

Canvas
Rubrics

Rubrics are the preferred way to collect assessment data at CGCC. A rubric that clearly aligns to the chosen outcome and indicators enables a consistent set of parameters to be applied across the entire student population. These rubrics can produce objective student data that will be used during assessment. You may create your own or use one of the SLO templates available from the SLOAC Canvas Course.
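
As a rough illustration of how rubric scores become assessment data, the following Python sketch tabulates criterion-level scores across a set of student artifacts. The criteria, scale, and scores are invented for illustration and are not drawn from an actual CGCC rubric.

    # Tabulate hypothetical rubric scores (1-4 scale) for a set of artifacts.
    # Criteria and scores below are invented for illustration only.
    from collections import Counter

    scores = [
        {"thesis": 3, "evidence": 4, "organization": 2},
        {"thesis": 4, "evidence": 3, "organization": 3},
        {"thesis": 2, "evidence": 2, "organization": 3},
    ]

    # The distribution of scores on each criterion is the raw assessment data.
    for criterion in scores[0]:
        dist = Counter(artifact[criterion] for artifact in scores)
        print(criterion, dict(sorted(dist.items())))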


Key Terms

Alignment

The degree to which student experiences, assignment(s), and instrumentation (rubrics) align with the Student Learning Outcome(s).

Analytic Scoring

Scoring that evaluates each dimension or criterion of a product or performance separately, producing a profile of scores rather than a single overall score. See also: Holistic Scoring.

Example: analytic scoring of a history essay might include separate scores for the following dimensions: use of prior knowledge, application of principles, use of original source material to support a point of view, and composition.

Anchors

Samples of student work that exemplify each level of performance on a rubric; they are used to illustrate how the rubric is applied during scoring. See also: Rubrics.

Application

The minimum level at which an assignment must be designed for SLO assessment. Application requires students to take what they know and can do (knowledge and skills) and apply it to a circumstance set up in the assignment. This creates the condition of transfer, a necessary condition for drawing valid conclusions about student performance. See also: Transfer.

Artifacts

Documents or assignments that provide the raw data for assessment.

Assessment

“Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.” (Palomba & Banta, 1999)

Assessment for Accountability

The assessment of some unit (could be a program, department, or entire institution) conducted to satisfy external stakeholders. Results are summative and often compared across units. (Leskes, A., 2002)

Example: to retain state approval, graduates of the school of education must achieve a 90 percent pass rate or better on teacher certification tests.

Assessment for Improvement

Assessment that feeds directly back into revising the course, program, or institution to improve student learning results. (Leskes, A., 2002)

Assessment Plan

A document that outlines the:

  • program mission/goals
  • desired student learning outcomes (or objectives)
  • learning processes (e.g., courses, activities, assignments) that contribute to students' ability to reach the program's outcomes (this may be shown in the form of a curriculum map)
  • long-range timeline
  • location of the mission/goals and student learning outcomes (e.g., web site, brochure, advising session)

A plan for a specific assessment activity/project will include the following:

  • the purpose or goal of particular assessment activities
  • how the results will be used and who will use them
  • brief explanation of data-collection methods and the analysis methods
  • an indication of which outcome(s)/objective(s) is/are addressed by each method
  • the intervals at which evidence is collected and reviewed
  • the individual(s) responsible for the collection/review of evidence and dissemination of assessment results

(adapted from the Northern Illinois University Assessment Glossary)
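
As a concrete (and purely hypothetical) illustration, the elements listed above could be captured in a simple structured record; every field name and value below is invented, not an actual CGCC plan.

    # A hypothetical assessment-activity plan captured as a simple record.
    # All values are invented examples.
    assessment_plan = {
        "outcome": "Critical Thinking",
        "purpose": "Determine whether students can evaluate competing claims",
        "method": "Rubric scoring of an embedded essay assignment",
        "use_of_results": "Reviewed by program faculty to revise instruction",
        "collection_interval": "Every fall semester",
        "responsible": "Program assessment coordinator",
    }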

Assignment

The tasks assigned to students for the purpose of assessing student performance and generating artifacts.

Authentic Assessment

Determining the level of student knowledge/skill in a particular area by evaluating his/her ability to perform a “real world” task in the way professionals in the field would perform it. Authentic assessment asks for a demonstration of the behavior the learning is intended to produce.

Example: asking students to create a marketing campaign and evaluating that campaign instead of asking students to answer test questions about characteristics of a good marketing campaign.

Benchmark

A point of reference for measurement; a standard of achievement against which to evaluate or judge performance.

Capstone Course/Experience

An upper-division class designed to help students demonstrate comprehensive learning in the major through some type of product or experience. In addition to emphasizing work related to the major, capstone experiences can require students to demonstrate how well they have mastered important learning objectives from the institution’s general studies programs. (Palomba & Banta, 1999)

Closing the loop

Using assessment results for improvement and/or evolution.

Competency

The demonstration of the ability to perform a specific task or achieve specified criteria. (James Madison University Dictionary of Student Outcomes Assessment)

Continuous Improvement

Consistently implementing changes, based on assessment results, that lead to improvement in meeting chosen outcomes.

Course Assessment

Assessment to determine the extent to which a specific course is achieving its learning outcomes.

Criteria for Success

The minimum requirements for a program to declare itself successful.

Example: 70% of students score 3 or higher on a lab skills assessment.
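
To make the arithmetic concrete, here is a minimal Python sketch that checks the example criterion above against a set of invented scores:

    # Check the example criterion: 70% of students score 3 or higher.
    # The scores below are invented for illustration.
    scores = [4, 3, 2, 3, 4, 1, 3, 3, 2, 4]

    rate = sum(1 for s in scores if s >= 3) / len(scores)
    print(f"{rate:.0%} of students scored 3 or higher")
    print("Criterion met" if rate >= 0.70 else "Criterion not met")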

Criterion-referenced

Assessment where student performance is compared to a pre-established performance standard (and not to the performance of other students). (CRESST Glossary) See also: Norm-referenced.

Curriculum Map

A matrix showing the coverage of each program learning outcome in each course.
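
As a small illustration, a curriculum map can be represented as a matrix of courses by outcomes. In the Python sketch below, the course numbers and coverage codes are invented (the outcome names are CGCC's general education SLOs):

    # A hypothetical curriculum map: which courses address which outcomes.
    # Course numbers are invented; "I"/"D"/"M" (introduced/developed/mastered)
    # is one common coverage convention, not necessarily CGCC's.
    curriculum_map = {
        "ENG101": {"Critical Thinking": "I", "Information Literacy": "I"},
        "COM225": {"Oral Communication": "D"},
        "HUM292": {"Critical Thinking": "M", "Oral Communication": "M"},
    }

    # List the courses that cover a given program learning outcome.
    outcome = "Critical Thinking"
    covering = [course for course, cells in curriculum_map.items() if outcome in cells]
    print(outcome, "->", covering)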

Direct Assessment

Collecting data/evidence on students’ actual behaviors or products. Direct data-collection methods provide evidence in the form of student products or performances. Such evidence demonstrates the actual learning that has occurred relating to a specific content or skill. (Middle States Commission on Higher Education, 2007). See also: Indirect Assessment
Examples: exams, course work, essays, oral performance.

Embedded Assessment

Collecting data/evidence on program learning outcomes by extracting course assignments. It is a means of gathering information about student learning that is built into and a natural part of the teaching-learning process. The instructor evaluates the assignment for individual student grading purposes; the program evaluates the assignment for program assessment. When used for program assessment, typically someone other than the course instructor uses a rubric to evaluate the assignment. (Leskes, A., 2002) See also: Embedded Exams and Quizzes.

Embedded Exams and Quizzes

Collecting data/evidence on program learning outcomes by extracting a course exam or quiz. Typically, the instructor evaluates the exam/quiz for individual student grading purposes; the program evaluates the exam/quiz for program assessment. Often only a section of the exam or quiz is analyzed and used for program assessment purposes. See also: Embedded Assessment.

Evaluation

A value judgment. A statement about quality.

Focus Group

A qualitative data-collection method that relies on facilitated discussions, with 3-10 participants who are asked a series of carefully constructed open-ended questions about their attitudes, beliefs, and experiences. Focus groups are typically considered an indirect data-collection method.

Formative Assessment

Ongoing assessment that takes place during the learning process. It is intended to improve an individual student’s performance, program performance, or overall institutional effectiveness. Formative assessment is used internally, primarily by those responsible for teaching a course or developing and running a program. (Middle States Commission on Higher Education, 2007) See also: Summative Assessment

Goals

General expectations for students. Effective goals are broadly stated, meaningful, achievable, and assessable.

Grading

The process of evaluating student work, ranking students, and assigning each student a value on a scale. Typically, grading is done at the course level.

High Stakes Assessment

Any assessment whose results have important consequences for students, teachers, programs, etc. For example, using results of assessment to determine whether a student should receive certification, graduate, or move on to the next level. Most often the instrument is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. (Leskes, A., 2002)

Examples: exit exams required for graduation, the bar exam, nursing licensure.

Holistic Scoring

Scoring that emphasizes the importance of the whole and the interdependence of parts. Scorers give a single score based on an overall appraisal of a student’s entire product or performance. Used in situations where the demonstration of learning is considered to be more than the sum of its parts and so the complete final product or performance is evaluated as a whole. (P. A. Gantt) See also: Analytic Scoring

Indicators

Specific criteria adopted by CGCC that expand the SLOs into measurable performance statements through which students demonstrate proficiency in a given Student Learning Outcome. The indicators serve as a structure for SLO measurement.

Indirect Assessment

Collecting evidence/data through reported perceptions about student mastery of learning outcomes. Indirect methods reveal characteristics associated with learning, but they only imply that learning has occurred. (Middle States Commission on Higher Education) See also: Direct Assessment

Examples: surveys, interviews, focus groups.

Learning outcomes

Statements that identify the knowledge, skills, or attitudes that students will be able to demonstrate, represent, or produce as a result of a given educational experience. There are three levels of learning outcomes: course, program, and institution.

Levels of Quality

The range of student performance quality described in a rubric. Each cell of the rubric includes a descriptor of quality. The descriptors must consistently use the language of student performance, a requirement for a valid and reliable instrument.

Norm-referenced

Assessment where student performances are compared to a larger group. In large-scale testing, the larger group, or “norm group” is usually a national sample representing a wide and diverse cross-section of students. The purpose of a norm-referenced assessment is usually to sort or rank students and not to measure achievement against a pre-established standard. (CRESST Glossary) See also: Criterion-referenced.

Norming

Also called “rater training.” The process of educating raters to evaluate student performance and produce dependable scores. Typically, this process uses criterion-referenced standards and analytic or holistic rubrics. Raters need to participate in norming sessions before scoring student performance. (Mount San Antonio College Assessment Glossary)

Objective

Clear, concise statements that describe how students can demonstrate their mastery of program goals. (Allen, M., 2008) Note: on the CTLA Assessment web site, “objective” and “outcome” are used interchangeably.

Outcomes

Clear, concise statements that describe how students can demonstrate their mastery of program goals. (Allen, M., 2008) Note: on the CTLA Assessment web site, "objective" and "outcome" are used interchangeably.

Performance Assessment

The process of using student activities or products, as opposed to tests or surveys, to evaluate students’ knowledge, skills, and development. As part of this process, the performances generated by students are usually rated or scored by faculty or other qualified observers who also provide feedback to students. Performance assessment is described as “authentic” if it is based on examining genuine or real examples of students’ work that closely reflects how professionals in the field go about the task. (Palomba & Banta, 1999)

Portfolio

A type of performance assessment in which students’ work is systematically collected and carefully reviewed for evidence of learning. In addition to examples of their work, most portfolios include reflective statements prepared by students. Portfolios are assessed for evidence of student achievement with respect to established student learning outcomes and standards. (Palomba & Banta, 1999)

Program Assessment

An on-going process designed to monitor and improve student learning. Faculty: a) develop explicit statements of what students should learn (i.e., student learning outcomes); b) verify that the program is designed to foster this learning (alignment); c) collect data/evidence that indicate student attainment (assessment results); d) use these data to improve student learning (close the loop). (Allen, M., 2008)

Reliability

In the broadest sense, reliability speaks to the quality of the data collection and analysis. It may refer to the level of consistency with which observers/judges assign scores or categorize observations. In psychometrics and testing, it is a mathematical calculation of consistency, stability, and dependability for a set of measurements.
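
One simple consistency measure is percent exact agreement between two raters scoring the same artifacts. The Python sketch below uses invented scores; formal reliability studies often use more robust statistics (e.g., Cohen's kappa).

    # Percent exact agreement between two raters on the same artifacts.
    # Scores are invented for illustration.
    rater_a = [3, 4, 2, 3, 4, 3, 1, 2]
    rater_b = [3, 4, 3, 3, 4, 2, 1, 2]

    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    print(f"Exact agreement: {agreement:.0%}")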

Rubrics

A tool, often shaped like a matrix with criteria on one side and levels of achievement across the top, used to score products or performances. Rubrics describe the characteristics of different levels of performance, often from exemplary to not evident. The criteria are ideally explicit, objective, and consistent with expectations for student performance.

Rubrics are meaningful and useful when shared with students before their work is judged so they better understand the expectations for their performance. Rubrics are most effective when coupled with benchmark student work or anchors to illustrate how the rubric is applied.

Rubrics are instruments that can be used to score student performance artifacts. These scores become the data for assessment.

Standard

In K-12 education and other fields, "standard" is synonymous with "outcome."

Student Learning Outcomes (SLO)

The general education outcomes adopted by CGCC are: Critical Thinking, Oral Communication, Information Literacy, and Personal Development. In general, they are statements of what students will be able to think, know, do, or feel because of a given educational experience.

Student Products

The specific artifacts, such as papers, performances, or constructions, through which students display their levels of performance against the SLOs.

Summative Assessment

The gathering of information at the conclusion of a course, program, or undergraduate/graduate career to improve learning or to meet accountability demands. The purposes are to determine whether or not overall goals have been achieved and to provide information on performance for an individual student or statistics about a course or program for internal or external accountability purposes. Grades are the most common form of summative assessment. (Middle States Commission on Higher Education, 2007) See also: Formative Assessment

Transfer

A condition included in the design of courses and assignments in which students are required to apply knowledge and skills in a manner different from how they acquired that knowledge and those skills. The degree of difference between acquisition and application determines the rigor students face in producing the products.

Triangulation

The use of a combination of methods in a study. The collection of data from multiple sources to support a central finding or theme or to overcome the weaknesses associated with a single method.

Validity

Refers to whether the interpretation and intended use of assessment results are logical and supported by theory and evidence. In addition, it refers to whether the anticipated and unanticipated consequences of the interpretation and intended use of assessment results have been taken into consideration. (Standards for Educational and Psychological Testing, 1999)

Value-Added Assessment

Determining the impact or increase in learning that participating in higher education had on students during their programs of study. The focus can be on the individual student or a cohort of students. (Leskes, A., 2002).

A value-added assessment plan is designed so it can reveal "value": at a minimum, students need to be assessed at the beginning and the end of the course/program/degree.
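
A minimal Python sketch of the pre/post arithmetic, with invented student IDs and scores:

    # Value added as pre/post gain per student; all scores are invented.
    pre = {"S1": 2, "S2": 1, "S3": 3}
    post = {"S1": 3, "S2": 3, "S3": 4}

    gains = {student: post[student] - pre[student] for student in pre}
    average_gain = sum(gains.values()) / len(gains)
    print("Per-student gains:", gains)
    print(f"Average gain: {average_gain:.2f}")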

Sources Consulted:

  • Allen, M. (2008). Assessment workshop at UH Mānoa, May 13-14, 2008.
  • American Psychological Association, National Council on Measurement in Education, & American Educational Research Association. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  • CRESST Glossary. http://cresst.org/publications/cresst-publication-3137/
  • Gantt, P. A. Portfolio assessment: Implications for human resource development. University of Tennessee.
  • James Madison University Dictionary of Student Outcomes Assessment. https://www.jmu.edu/curriculum/self-help/glossary.shtml/
  • Leskes, A. (2002, Winter/Spring). Beyond confusion: An assessment glossary. Peer Review. AAC&U.
  • Middle States Commission on Higher Education. (2007). Student learning assessment: Options and resources (2nd ed.). Philadelphia: Middle States Commission on Higher Education.
  • Mount San Antonio College Institution Level Outcomes. http://www.mtsac.edu/instruction/outcomes/ILOs_Defined.pdf
  • Northern Illinois University Assessment Terms Glossary. http://www.niu.edu/assessment/resources/terms.shtml#A
  • Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
  • University of Hawai'i Mānoa. (2017). Assessment definitions/glossary. Retrieved from manoa.hawaii.edu/assessment/resources/definitions