Experiencing continuous improvement in Singapore
Over the past week, I have had the chance to offer two history master classes to Singapore teachers on tech integration in history and social studies. I taught one on Friday and one on Monday.
Before the session, I was asked to convert the ministry’s standard workshop feedback form into Google Docs for digital distribution. I was asked not to change the questions, so that the ministry could track trend data and compare my results to those of previous international speakers. I was being benchmarked.
So on Friday, I taught the first master class. One challenge of teaching in an unfamiliar cultural context was that I couldn’t judge from people’s reactions how they were experiencing the workshop, which meant I couldn’t adjust my plans on the fly. At the end of the workshop, I distributed my survey, still without a clear sense of how things had been received. I did recognize one universal human truth: teachers are pretty tired on Friday afternoons.
Immediately after the workshop, at 5:30 on a Friday afternoon, the feedback-o-rama began. Two of my colleagues from the Ministry of Ed sat down with me for another 30 minutes to provide immediate feedback. They felt certain sections were too long, others not long enough. They observed that I needed to cold call teachers to elicit responses, rather than waiting for people to raise their hands. They highlighted certain things I had said that had generated a lot of interest, which I should emphasize next time. They also remembered things I had said in previous meetings, but hadn’t shared in the workshop, that they thought I should introduce.
Over the weekend, I sent the ministry the raw scores and qualitative comments from my feedback forms, and someone converted the responses onto a 4-point Likert scale. On Monday morning, I sat down with the assistant director for PD at the Academy of Singapore Teachers and the History subject chapter master teacher. We focused particularly on a question about whether participants learned things they could apply in their classrooms, a key question the Academy used to evaluate its PD workshops. My score on this question averaged 3.3, which apparently was pretty good for a first workshop in Singapore, but my colleagues believed I could raise it.
Now, there was a part of me that immediately reacted negatively. “You want me to manipulate my workshop so that people say they are learning things of immediate value.” Should I cut more challenging material? Should I cut some of the more complex thinking that shapes behavior without offering a direct application? Should I plant the message in people’s heads during the workshop that I want them to say they will find these things useful? How dare anyone hold me accountable to something as simple as a number!
But I softened my American, anti-numerical attitude, and decided that when in Rome, I should do as the Romans do. After all, I do want people to feel that at least some of the material in my workshop is of immediate value. The two Academy officers offered suggestions for how I could shape the workshop, and in particular how I could structure the session to give people more time to consider my suggestions within the Singapore context and elicit that sharing effectively (it turns out that in Singapore, you pretty much have no choice but to cold call, and people are perfectly happy to be called on). Some of their suggestions seemed immediately obvious, and some seemed misguided. When I pushed back, they were quite clear: they saw room for improvement and had specific suggestions, but ultimately they felt strongly that it was up to me to decide exactly how to implement their feedback.
Pause for a moment and consider this. Can you imagine a school district in the U.S. inviting an expert to fly in from halfway around the world, and after their first workshop sitting down with them for an hour and saying, “Good start, but you can do better”? Me neither.
Much of their feedback was spot on. Through their coaching, I did a much better job eliciting participant ideas, and I provided much more time for participants to reflect on how ideas from an American context could be applied to their own environments. Some of their specific suggestions didn’t make sense to me, but the principles behind those suggestions helped me think of new ideas. More importantly, their investment in my workshop, their sense that I could do better, inspired me not just to crank out the second workshop, but to really think about how I could serve these teachers even better. So I wanted to see my number climb above 3.3, not for the sake of moving the number, but because that would be one way of measuring progress toward the shared goals that the Academy officers and I had set in our feedback session.
So did I do any better? The social scientist in me wants to remember that the context of the second workshop was quite different: we had a new batch of participants, the group was a little smaller, and it was a Monday instead of a Friday. There are lots of reasons why my scores might have changed without any real improvement in my instruction. But all that said, participants rated the workshop a 3.67 out of 4 in terms of applying learning to the classroom, and no one said in the feedback that they needed more time to reflect on the Singapore context.
So maybe it’s only a number. But it certainly helped me experience more deeply the powerful role that numbers can play in a culture of continuous improvement.