Visiting the posters has been an experience in and of itself. Feelings of pride, optimism, and constant struggle competed with one another within so many of the projects. One thing is clear: the effectiveness standard for judging success is alive and well. What defines "effectiveness"--meeting intended goals--and who is doing the defining is also clear from the posters and comments from participants. In most cases, teachers are the target of the effectiveness standard: how many teachers participated, how many completed professional development, what materials are being used, how often new materials are being used, how many hours teachers spent learning to use materials, etc. If these indicators of effectiveness met or exceeded goals, the project was considered a success. This is reasonable, I believe, and certainly far better than citing students' test scores.
But there is a catch to using indicators on teachers to determine the effectiveness of an LSC project. The theory is that if teachers participate in professional development and learn to use inquiry-based ("hands-on") materials, then their classroom practices will change and kids will learn more and better. The chain of assumptions in the theory begins with teacher participation and ends with kids learning (too often measured by standardized achievement tests). The first assumption--teacher participation--is the one that so many projects fasten onto and go no further. Participation leads to learning, and that's it. In the late 1960s, staff development--the earlier incarnation of today's "professional development"--was thought to be the solution to teacher improvement and student learning. Much money was invested, and charts of how many teachers attended one-day workshops, summer institutes, etc. were compiled and offered as evidence of effectiveness. It didn't wash then, and I fear that focusing on participation rates and hours spent in sessions will end up the same way.
The middle assumption (that teachers are using different methods and materials as intended--the fidelity measure--and that their classroom practices have changed) is the one that very few of the projects pursued in the late 1960s and that most neglect now. The few LSC projects that did look into classrooms, observe teaching practices, and make judgments about what occurred in those classrooms seldom revealed the instruments they used or what they found. Assessing teaching practices in classrooms to determine whether teachers have indeed incorporated the approaches advocated by designers and LSC folks is hard work: securing the right instruments, watching teachers teach, determining how much and exactly what has changed, and making sense of what it all means. But it is worthwhile work, because determining whether teachers have adopted (and adapted) new materials and methods into their classroom repertoires becomes a defensible substitute for using students' test scores.
I wonder what folks think of this view of the effectiveness standard, one that focuses appropriately on the goals of so many LSC projects in math and science by assessing what occurs in teachers' classrooms.