Teacher Evaluation Default Model

Research Questions for Teacher Evaluation Pilots

The following questions represent focus areas for our pilot evaluation. They were developed by the teacher evaluation work group, the pilot leadership teams, and MDE. From these broad guiding questions, our pilot evaluator will generate specific questions and will report findings and recommendations in these areas.

Outcomes

The pilot and its evaluation will provide us with:

• Revisions or changes that need to be made to the Model
• Strategies and resources to implement the Model, including training and ongoing support
• Results of activities (including unintended consequences)

The initial findings and final report should focus on these areas, drawing on data from the research questions that follow.

Whole Model Pilot Research Questions

Note: These seven questions apply to both the whole model pilots and the various focused pilots below.

• What were the effects on the students, teachers, and school communities?
• What measures and activities most accurately and fairly identify effective teachers and teachers needing support?
• Does the summative scoring model (i.e., component weights, numerical process, performance levels, score bands for performance levels) accurately identify effective teachers and teachers needing support?
• What resources (time, money, personnel) were associated with the implementation of the Model? Did the effects match the resources?
• Was the Model understandable, usable, and effective? What recommendations do you have for revising the Model to increase efficiency and effectiveness?
• What selection, training, and ongoing support are needed for effective implementation? Were the training and ongoing support from MDE sufficient? From the district? Other resources?
• What external and internal systems are needed for implementation? How were documentation, analysis, data storage, and management handled in your district? In what ways did documentation help or hinder the work?

Teacher Practice Pilot

• How have the Model’s resources (rubrics, definitions, forms) contributed to teachers’ development and evaluation?
• How have the Model’s activities (peer review, points of contact, portfolio) contributed to teachers’ development and evaluation? What barriers and opportunities were discovered during implementation?
• Were the resources and activities sufficiently flexible to meet the needs of all teaching assignments (specialists and generalists) as well as all career stages (new/probationary, mid-career, and late-career)? Were there misunderstandings about how to use the resources and activities?
• Did the Model’s resources and activities generate sufficient, meaningful, accurate evidence? How were the resources and activities used to generate fair summative evaluations, make personnel decisions, and plan ongoing professional development?
• How will or could the Model’s resources and activities be used in the future?

Student Engagement Pilot

Note: Survey validation will be conducted separately from this evaluation.

• What value did teachers find in student engagement evidence? Will the evidence collected affect their practice and future professional learning? How did teachers use (or how do they plan to use) student engagement evidence?
• What logistical and technical support was needed to administer the surveys? How were students surveyed in different grade levels and content areas? To which teachers did the survey apply?
• How were teachers and evaluators supported in understanding and interpreting the survey results for use in development and evaluation conversations? Did they find the support valuable and sufficient?
• What methods were used to define, observe, and collect “other measures” of student engagement?
• What did teachers believe about student engagement measures before implementation? How had those beliefs changed by the end of the process?

Student Learning Goals (SLG) Pilot

• Did the SLG process align with district or building curricular, assessment, or staff development goals?
• How did the district and state (evaluators) ensure consistency (across evaluators and school sites) in expectations, rigor, and relevance in the SLG process?
• How were student starting points (levels of preparedness) established? How were learning goals written? Were the goals, as written, appropriate measures of student growth?
• How were end-of-term assessments selected and mastery scores established? If none existed previously, how were assessments developed and approved? What kinds of assessments were used?
• How did teachers, teams, and/or evaluators determine performance ratings using the goals teachers set at the beginning of the year and the results from end-of-course assessments? Were performance ratings consistently determined? How did teachers, teams, and/or evaluators interpret the results to have conversations about teacher development and evaluation? Did they find value in those conversations?

Value-Added Model Pilot

• Do individual districts have the capacity and resources to develop, interpret, and report value-added data?
• What supports are needed at the individual, school, and district levels to implement a value-added model for improvement planning and teacher evaluation?
• How will value-added data be collected, analyzed, and interpreted for individual teacher evaluation results? What strategies create meaningful, accurate results?