Developing Course-Embedded Assessment Instruments
Types of Assessment Instruments
These two sources can help faculty determine what type of instrument to use:
1. “Evaluating Assessment Strategies,” for classroom and course data, on the website of the American Psychological Association’s assessment division: http://www.apa.org/ed/eval_strategies.html. The site evaluates both qualitative and quantitative assessment strategies.
- The site includes both traditional methods of evaluating students (i.e., objective or essay tests) and newer methods designed specifically for assessment.
- The introduction explains the difference between evaluating students (grading) and assessment, and explains how assessment can be embedded into graded exams and assignments.
- Each entry identifies the advantages and disadvantages of each method.
2. Thomas A. Angelo and K. Patricia Cross, Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed. (San Francisco: Jossey-Bass, 1993). At least 30 copies are in circulation at UW-L: newer faculty may own a copy, as do the organizers of the CoTL conference; faculty who have been a Wisconsin Teaching Scholar or Fellow may also own one; and a copy is on 3-day reserve in Murphy Library in the Faculty Development Collection (LB2822.75 .A54 1993).
- This book provides generalized designs for assessment instruments and illustrates several adaptations of each design to particular courses and disciplines. Each entry begins with an estimate of the time and energy required for faculty to design the instrument, for students to complete it, and for faculty to evaluate the resulting data; it then explains the instrument’s pros and cons and gives a step-by-step process for adapting the design to a specific course. The book includes fifty such designs.
- Note that some of these techniques merely provide instructors with a quick check on student understanding in the midst of a class session or with a check-in on how students think the course is going, and are thus not adequate for assessing program-level outcomes.
- To locate the instrument designs most likely to serve well for GE or department assessment, use the Teaching Goals Inventory in the book, or online with automated scoring at http://www.uiowa.edu/~centeach/tgi/index.html. You can conduct an extensive inventory of a course or focus simply on its main priorities (i.e., the GE or program outcomes reflected in the work you assign students to demonstrate their understanding). The inventory will identify the types of learning (“Teaching Goal Inventory clusters”) you are emphasizing in that course; then use the chart on p. 113 of the book to identify which CATs might work for you.
1. Don’t reinvent the wheel. See Angelo and Cross as described above for general designs for instruments.
2. Graded work you already assign can function for assessment purposes.
3. Determine which outcome each item on your exam or assignment addresses. Extract the items that address the GE or program outcome you are assessing.
4. A single instrument can measure multiple outcomes; this is particularly true of qualitative instruments. Score each outcome with a rubric designed for that outcome.
Evaluating Qualitative Assignments
1. Evaluate qualitative instruments with rubrics. The Academy of Art University provides a clear explanation of the value of rubrics and some guidelines for creating and using them at http://faculty.academyart.edu/resources/rubrics.asp
2. A 5-point rubric allows you to use the same evaluation for grading as well as assessment. It also provides enough variation to be useful for guiding program improvement.
3. A rubric is well-designed when it is clear, specific, and simple enough that different raters score the same student work similarly.
4. Define levels clearly and specifically (i.e., not just “unsatisfactory” to “superlative” or “F” to “A”). Think of it as explaining the difference between an A, a B, and a C . . .
5. Break out the components of the outcome in your rubric.
6. Rubrics programmed as quizzes on D2L or SurveySelect provide electronic data, so analysis can be automated. They can also be made available to multiple instructors of the same course or to assessment committees.
7. Don’t reinvent the wheel, but choose or adapt a rubric that fits the outcome, not just the content. See the chart below for a variety of online sources organized by GE outcome. If what you need is not included below, try a meta-site like NC State’s “Internet Resources for Higher Education Outcomes Assessment”: http://www2.acs.ncsu.edu/UPA/assmt/resource.htm#area This site includes resources by discipline as well as for General Education.
8. Many of the rubrics available online include only 3 or 4 levels. These are less useful for both assessment and grading. You can adapt them, often by adding a higher level that reflects more advanced thinking. Remember that the prompts for GE rubric levels are: