Assessment

CREATING A SCORING RUBRIC

Module Lesson Plan and Supporting Materials
Lesson Plan – Creating a Scoring Rubric (DOCX)
Using Rubrics in Program Assessment (PPT)
Rubric Testing and Modification Activity (DOCX)
Student Work Sample (excerpted from Suskie, 2009) (PDF)
Sample Score Sheet (DOCX)
Feedback Survey
Work Session Evaluation Report Template


What is a Scoring Rubric?

A scoring rubric is a tool used to assess students’ level of achievement in a particular area of performance, understanding, or behavior.

A rubric is composed of four basic parts. In its simplest form, a rubric includes the following (a brief illustration appears after the list):

  1. A description of the task students are asked to perform.
  2. The criteria by which a student’s performance on the task is scored (rows).
  3. A scale describing how well or poorly any given task has been performed (columns). Scales with 3-5 levels are typically used. Examples of labels used:
    • Exemplary, proficient, marginal, unacceptable
    • Advanced, intermediate high, intermediate, novice
  4. A description of each level of achievement for each criterion (cells).
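
To make these parts concrete, here is a small, purely hypothetical fragment (not drawn from the module materials) that encodes one criterion of a rubric as a Python data structure: the criterion is a row, the scale labels from the example above are the columns, and each cell holds the description of performance at that level.

    # Hypothetical rubric fragment: one criterion (row), a four-level scale
    # (columns), and a description of performance in each cell.
    scale = ["exemplary", "proficient", "marginal", "unacceptable"]

    rubric = {
        "organization": {
            "exemplary":    "Ideas follow a clear, logical order; transitions guide the reader.",
            "proficient":   "Ideas are mostly in a logical order; most transitions are present.",
            "marginal":     "The order of ideas is often hard to follow; transitions are sparse.",
            "unacceptable": "No discernible organization.",
        },
        # Additional criteria (e.g., "content development") would be further rows.
    }

A complete rubric simply repeats this pattern for each criterion identified in Step 4 below.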

There are two main types of rubrics:

Analytic Rubric: An analytic rubric specifies at least two criteria by which a student’s performance on a task is to be evaluated and provides a separate score for each criterion (e.g., a score on “formatting” and a score on “content development”).

Advantages: provides more detailed feedback on student performance
Disadvantages: more time-consuming to apply than a holistic rubric

Holistic Rubric: A holistic rubric provides a global score for a student’s performance on a task.

Advantages: quick scoring; provides an overview of student achievement
Disadvantages: does not provide detailed information
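
The difference between the two types is easiest to see in the scores they produce. The sketch below is a hypothetical illustration (the criteria and numbers are invented for this example, not taken from the module materials): an analytic rubric yields one score per criterion, while a holistic rubric yields a single global score for the same piece of work.

    # Hypothetical illustration: the same student essay scored two ways,
    # on a 1-4 scale (1 = unacceptable ... 4 = exemplary).
    analytic_scores = {            # analytic rubric: one score per criterion
        "content development": 3,
        "organization": 4,
        "formatting": 2,
    }
    holistic_score = 3             # holistic rubric: one global score

    # Analytic scoring gives criterion-level feedback...
    for criterion, score in analytic_scores.items():
        print(f"{criterion}: {score}/4")

    # ...while holistic scoring summarizes overall achievement in one number.
    print(f"overall: {holistic_score}/4")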

Why use a Scoring Rubric?
  • Using rubrics can lead to substantive conversations among faculty.
  • When faculty members collaborate to develop a rubric and/or score students’ work, it promotes shared expectations and grading practices.
  • Complex products or behaviors can be examined efficiently.
  • Well-trained raters apply the same criteria and standards.
  • Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”
  • A rubric creates a common framework and language for assessment.

How do we create a Scoring Rubric?

Step 1: Identify what you want to assess

Step 2: Search the internet for an existing rubric and adapt it to your needs! It is rare to find a rubric that is exactly right for your situation, but adapting one that has worked well for others can save a great deal of time.

Step 3: Evaluate the existing rubric. Ask yourself how the rubric relates to the outcome(s) being assessed. Keep the parts that are applicable and delete anything extraneous.

Step 4: Using the information gathered, identify the component skills (criteria) a student must demonstrate to perform the task successfully; these criteria become the rows by which the student’s work will be scored. Criteria are also called “dimensions.”

  • Specify the skills, knowledge, and/or behaviors that you will be looking for.
  • Limit the characteristics to those that are most important to the assessment.

Step 5: Identify the range of mastery/scale (columns).

Tip: Aim for an even number (4 or 6) because when an odd number is used, the middle tends to become the “catch-all” category.

Step 6: Describe each level of mastery for each criterion/dimension (cells).

  • Describe the best work you could expect using these criteria. This describes the top category.
  • Describe an unacceptable product. This describes the lowest category.
  • Develop descriptions of intermediate-level products for intermediate categories.

Important: Criteria should not overlap with one another, and the descriptions for adjacent levels of a criterion should be mutually exclusive.

Step 7: Test rubric and revise.

  • Apply the rubric to samples of student work.
  • Seek feedback from colleagues.
    • Questions to initiate discussions:
      • How clearly is each criterion described at each level of the achievement scale?
      • Is there overlap among criteria or performance descriptors that blurs the distinction between levels of achievement?
      • How clearly is each criterion differentiated along the continuum of achievement?
      • Is any language unclear or ambiguous?
  • Use feedback to revise rubric.

Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices, which benefit both faculty members and students in the program.
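
When colleagues score the same work samples during Step 7, one simple way to check whether the rubric is being applied consistently is to compare their scores. The sketch below is a hypothetical illustration (the scores are invented) of two common-sense agreement checks; see Stemler (2004) in the resources below for a fuller treatment of interrater reliability.

    # Hypothetical illustration: two raters score the same five work samples
    # on a 1-4 scale using the same rubric.
    rater_a = [3, 4, 2, 3, 1]
    rater_b = [3, 3, 2, 4, 1]
    pairs = list(zip(rater_a, rater_b))

    # Exact agreement: both raters assigned the identical score.
    exact = sum(a == b for a, b in pairs) / len(pairs)

    # Adjacent agreement: the scores differ by no more than one level.
    adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

    print(f"exact agreement:    {exact:.0%}")    # 60%
    print(f"adjacent agreement: {adjacent:.0%}") # 100%

Low agreement usually points to overlapping criteria or ambiguous level descriptions, which is exactly the kind of feedback Step 7 is meant to surface.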

Step 8: When you have a good rubric, SHARE IT!

Tips for Choosing a Starting Rubric
Look for a rubric that is:
  • aligned with your curriculum
  • similar in level
    • Course
    • Program
    • General education
    • Institution
  • from the same or similar discipline/curriculum
  • from peer institutions or nationally recognized groups

Additional Resources:

Maki, P. L. (2004). Reaching consensus about criteria and standards of judgment. In Assessing for learning: Building a sustainable commitment across the institution (pp. 129-131). Sterling, VA: Stylus.

Mertler, C. A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=7&n=25

Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=7&n=3

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=7&n=10

Simon, M., & Forgette-Giroux, R. (2001). A rubric for scoring postsecondary academic skills. Practical Assessment, Research & Evaluation, 7(18). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=7&n=18

Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9(4). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=9&n=4

Suskie, L. (2009). Using a scoring guide or rubric. In Assessing student learning: A common sense guide (pp. 137-154). San Francisco, CA: Jossey-Bass.

Tierney, R., & Simon, M. (2004). What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2). Retrieved July 23, 2014, from http://PAREonline.net/getvn.asp?v=9&n=2