Examining Mathematics Knowledge for Teaching (MKT)
through Classroom Observations
Jon Hasenbank & Jennifer Kosiak
Presented Jan. 28, 2011
- AMTE 2011 Conference Program.pdf (housed on the AMTE 2011 conference website; last checked 1/30/2011)
- For more info, see Wisconsin's Grassroots Teacher Quality Assessment (TQA) Model website.
Abstract: We report on the development, validation, and pilot of two observation instruments for measuring MKT of mathematics teachers. The projects aim to improve upon existing assessment systems by increasing consistency across programs and enhancing the resolution of the data collected.
Overview: Across the nation, colleges and universities are collecting a wide range of assessment data about teacher candidates. We will present two projects that address shortcomings in these existing assessment systems by increasing consistency across programs and enhancing the resolution of the data collected.
The first project has led to the development of an instrument for evaluating mathematical content knowledge and pedagogical content knowledge through classroom observations during student teaching. This instrument was created as part of a state-wide initiative (Wisconsin's Grassroots Teacher Quality Assessment Model) aimed at capturing what teacher candidates know and can do prior to the completion of their math education program. Our discussion will trace the evolution of the instrument, from the various models employed during the design phase (including lesson study, focus group work, one-on-one interviews, and pilot observations) to the format and implementation of the training modules. We will reserve time for discussion of the strengths and limitations of using such a tool to measure “teacher quality,” in contrast with alternatives that infer teacher quality (and teacher education program quality) from student achievement.
The second project involved the creation of a digital observation instrument to measure fidelity of implementation in a quasi-experimental research project. We used the instrument to precisely measure the scope and sequence of different lessons in terms of the amount of time spent in each of eight ‘procedural understanding’ categories (e.g., carrying out a procedure, comparing alternate methods, discussing whether an answer makes sense). The instrument allows the observer to select among the categories as instruction unfolds, and the results are compiled to produce a quantitative measure of the time spent in each category. The frequency distribution and chronological progression of the lesson are displayed using a compact visual representation similar to the problem-solving graphs Schoenfeld (1992) has used in his research on students’ problem-solving strategies. The digital instrument allows multiple observations to be merged for reliability analyses, and the categories can be easily tailored to meet the needs of different projects. We will present the instrument in the context of the professional development project for which it was developed, but will reserve time for questions about the potential for adapting the instrument to other settings.
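The core tallying logic of such an instrument can be illustrated with a brief sketch. This is not the actual software described above; it is a minimal Python illustration, assuming the observer's selections are logged as (elapsed seconds, category) pairs, of how timestamped category selections could be compiled into a time-per-category measure:

```python
from collections import defaultdict

# Hypothetical sketch: 'events' is a chronological log of (elapsed_seconds,
# category) pairs, one entry each time the observer selects a new category.
# 'lesson_end' is the elapsed time (in seconds) at which the lesson ended.
def time_per_category(events, lesson_end):
    """Compile the total seconds spent in each observation category."""
    totals = defaultdict(float)
    # Each category's duration runs from its selection time to the next
    # selection (or to the end of the lesson for the final category).
    for (start, category), (next_start, _) in zip(
            events, events[1:] + [(lesson_end, None)]):
        totals[category] += next_start - start
    return dict(totals)

# Example: a 7-minute (420 s) lesson segment with three category selections.
events = [(0, "carrying out a procedure"),
          (120, "comparing alternate methods"),
          (300, "discussing whether an answer makes sense")]
print(time_per_category(events, lesson_end=420))
```

From the same event log one could also plot the chronological progression of categories over time, analogous to the compact visual representation mentioned above.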
Reference: Schoenfeld, A. H. (1992). Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 334-370). Macmillan.