Measuring Standards of Excellence

How ARTS uses its Standards and Criteria for Excellence

ARTS-COA created the ARTS Standards and Criteria for Excellence, which integrates the COA standards with a listing of typical documentation and suggested evaluative questions and criteria. In developing this document, ARTS-COA reviewed similar documents published by TRACS, ABHE, PEQAB (Ontario, Canada), and ATS. This is a living document; it was reviewed at the 2023 annual meeting and will be reviewed annually.

These standards of excellence are rubrics used by site visit teams and the COA as the basis for evaluating self-studies, accreditation site visits, and ARTS annual reports. Our criteria use the following scale values:

  • Standard is met
  • Standard is partially met
  • Standard is not met

However, the COA has discussed (at the recent annual meeting and during some COA meetings) adding gradations to these three scale values. Would it be better to expand our criteria from 3 values to a greater number (such as 5-7)? I thought it would be worth summarizing our discussions and the pros and cons of expanding the scale.

Pros of Expanding the Scale:

  1. Enhanced Precision: A larger scale provides more granularity, allowing for a more precise assessment of performance. This can help in distinguishing between various degrees of achievement, especially in cases where performance falls between clearly meeting and clearly not meeting the standard.
  2. Improved Feedback: A larger scale can offer more detailed feedback to institutions, enabling them to identify specific areas where they excel or need improvement. This can lead to more targeted and actionable recommendations.
  3. Fairness and Flexibility: A larger scale allows for a more nuanced evaluation, accommodating situations where an institution might partially meet a standard but excel in some aspects. This added flexibility can be fairer to institutions that don’t neatly fit into a binary “met” or “not met” category.
  4. Encouraging Continuous Improvement: With a larger scale, institutions may be more motivated to strive for higher levels of performance, as there are more gradations to aim for. This can foster a culture of continuous improvement.
  5. Reflecting Complexity: In situations where standards are complex and multifaceted, a larger scale can better capture the complexity of the evaluation process, providing a more accurate picture of an institution’s performance.

Cons of Expanding the Scale:

  1. Increased Subjectivity: A larger scale may introduce more subjectivity in the evaluation process. It can be challenging to reach a consensus when there are numerous gradations, and different evaluators may interpret the criteria differently.
  2. Longer Evaluation Processes: Expanding the scale may require more time and resources for evaluations, as each additional gradation necessitates more detailed analysis and discussion.
  3. Difficulty in Maintaining Consistency: With more gradations, it can be more challenging to maintain consistency in evaluations across different evaluators, leading to potential discrepancies in assessments.
  4. Complexity in Reporting: A larger scale can make reporting and communication of evaluation results more complex, potentially making it harder for stakeholders to understand and act upon the feedback.
  5. Potential for Ambiguity: As the scale becomes more granular, there’s a risk of introducing ambiguity about what each gradation means, making it harder for evaluators to apply the scale consistently.

The current thinking of the COA is that ultimately a school is either meeting or not meeting the standard. During a site visit, or in discussions of an Annual Report submission, participants have the opportunity to explore the nuances of what has been presented, ask questions, and seek clarification. It’s an inductive process that moves from qualitative evidence to a quantitative conclusion. Our current thinking is that we will not change this scale but will continue to think about our rubric and how best to evaluate. We invite further discussion on this topic. Please contact the executive director of ARTS if you would like to make your opinion known.

ARTS Annual Assessment Plan

ARTS requires schools to provide an annual assessment plan highlighting student learning outcomes.  This can sound like a daunting task, but it’s actually common sense. Let me break this down into 7 simple steps.

  1. Who is responsible?  The first thing your assessment plan should have is a statement of the person or persons at your institution who meet to discuss how well your students are achieving your learning goals.  Your plan should list those people, how they are organized, how often they meet, and where meeting minutes are kept.
  2. Do you have clearly stated learning outcomes?  All schools have program goals and learning outcomes, but when was the last time you reviewed and updated them?  Understand the difference between general goals and specific learning outcomes.  Goals are big vision statements, e.g., “Our goal is to equip men to be pastors.”  But there are many dimensions to “equipping.”  If I were to ask you to show me an “equipped man,” you should be able to list specific things that this person would know, value, or do – those specific items are your learning outcomes.
  3. What are the tools you use to measure your learning outcomes?  The one tool that always comes to mind is the student course evaluation form.  But there are other evaluation methods.  Measuring student completion and dropout rates is a form of assessment.  Peer reviews can be helpful.  Asking other faculty members to review completed assignments is another tool.  Reviewing the rubrics used to evaluate student sermons, papers, or portfolios is yet another.  There are several types of tools you can use to assess student learning.
  4. How do you analyze your assessment data?  While all seminaries collect course evaluations, not everyone takes the time to thoughtfully analyze the results.  A quick glance at what students say is not sufficient for an assessment plan.  Take the time to analyze, summarize, distribute and discuss the data.
  5. Do you conduct an annual review of your assessment tools?  When was the last time you reviewed your assessment processes and tools?  Are you still using the same assignments, rubrics, surveys, and course evaluation forms that you were using 5 years ago?  Things change.  Have you changed to keep up?
  6. Are you improving based on your assessment results?  In assessment lingo, this is called “closing the loop”.  Your plan should explain how the data you’ve collected in your assessments is helping you to improve your teaching and learning results.  Each year you should list specific things you are doing better because of your assessment results.
  7. Are you telling the world about your learning results?  Where on your school’s website is a link that says something like “Educational quality” or “Measuring student learning”?  This page should present the data you collect annually regarding graduation rates, results of student surveys, testimonials, etc.

When you answer the above seven questions, you will have created an ARTS annual learning assessment plan.  The first time you create a plan, it will be relatively simple.  As assessment becomes part of your regular academic cycle, you will find that the process becomes much easier and the results more valuable.

In a future posting I’ll explore details of how to create meaningful learning outcomes.
