Bud the Teacher becomes Bud the Researcher

We're unlikely to have much luck improving student learning outcomes with technology if we don't measure the impact of our technology investments and interventions.

In the last year or so, I've been thinking a lot about how I can be of service to school leaders who are trying to figure out how technology should fit in their schools. In general, I've been sharing two messages with school leaders: 1) technology should be in the service of defined learning goals and 2) schools and districts should assess whether technology is helping students make progress toward those goals.
I have a particular soft spot for assessment because for the last four years I have run a research project measuring the degree to which U.S. K-12 wikis provide opportunities for students to develop 21st century skills. But I also think assessment has several virtues that make it a useful entry point for getting school leaders to think more deeply about the challenges of tech integration in schools. First, as soon as you bring up assessment, people realize that they need to have some learning goals, or there is nothing to assess. Talking about assessment is one entry point to thinking about backwards design in schools. Second, the best outcome of No Child Left Behind (and I don't have much nice to say about NCLB) is that it has raised awareness of the importance of formative assessment for learning. As schools learn, as institutions, to use technology, they need to evaluate their progress. Finally, many school and district leaders realize that assessment data can play a role in garnering support from school boards, voters, and trustees for technology investment.
So while researchers like me should be studying technology at state and national levels, every school and district should be studying their own technology use and figuring out whether or not "it's working"--whether or not it's meeting learning goals that educators and students care about. I've been telling school leaders that this kind of research doesn't have to be ready for peer-reviewed scholarly journals--it just needs to be good enough to guide decision making and practice. Many educators hear "research" and "assessment" and think that it requires complex statistical models and sophisticated surveys and rubrics. But we can't let the perfect be the enemy of the good here. Better to do a few small things--some content analysis, some focus groups, a short survey--than to do nothing at all.
And I'm pleased to report that I have found an exemplar of the kind of educator action research that can play a powerful role in shaping technology practice. I've been a big fan of the thoughts and musings of Bud Hunt, AKA BudTheTeacher, since we were on an ISTE panel together a few years ago. He recently completed his master's degree with a research thesis, “Wait, Am I Blogging?”: An Examination of School-Sponsored Online Writing Spaces, that examines the use of blogs in his school district. It's a fantastic piece of work, and great reading for anyone interested in assessing technology projects in schools. Bud's thesis is posted on his blog here.
What did Bud do?

  1. He took a sample of all of his school's blog posts
  2. He read them
  3. He categorized them
  4. He drew some conclusions about the kinds of learning taking place based on that analysis

In simplifying Bud's thesis, I don't want to minimize the work that Bud did. But I do want to highlight that it wasn't rocket science; it was the kind of action research project that any team of educators in a school or district could tackle. His work is systematic, not complex, and his results are compelling and troubling. Basically, Bud found that most blogging activity was either teacher-centered content delivery or the kind of simple hub-and-spoke interactions typically found in classrooms, where a teacher asks a question, a kid responds, and the teacher evaluates. In general, Bud didn't find young people finding a voice, developing an identity as a blogger, and pursuing topics and conversations with passion and interest. He found them producing perfunctory answers to the same kinds of questions they got in class. His research raises some important questions for the teachers in his district about blogging practice specifically and technology-mediated collaborative learning more broadly. As Bud tweeted to me: "My professional emphasis for the last two years has been on creating a culture of teacher research within the context of technology use and adoption. Hard. But worth doing."
There is so much Bud does right in this research project that it can serve as a model for other educators. First, Bud doesn't try to read everything that has ever been produced on blogs in his district; he takes a sample. He then investigates that sample in a systematic, but not overly complicated, way. Basically, he looked at a pilot set of blog posts, developed a taxonomy of about 10 "purposes" of a blog post, and then measured the distribution of those purposes in the full sample. In his analysis, he's also fearless in confronting what his findings suggest about student learning opportunities. He found that blogging online looked too much like the kinds of writing that students did offline, and he challenges his colleagues to embrace the possibilities of the new medium.
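To make the mechanics concrete, here is a minimal sketch of what the counting step might look like once the sampled posts have been read and hand-coded. The category names and post identifiers are hypothetical placeholders, not Bud's actual taxonomy; the point is only that the analysis itself is a simple tally.

```python
from collections import Counter

# Hypothetical hand-coded sample: one (post, purpose) pair per post read.
coded_posts = [
    ("post_001", "teacher content delivery"),
    ("post_002", "student response to teacher prompt"),
    ("post_003", "student-initiated reflection"),
    ("post_004", "teacher content delivery"),
    # ... one entry per post in the sample
]

# Tally how often each purpose shows up and report the distribution.
counts = Counter(purpose for _, purpose in coded_posts)
total = len(coded_posts)

for purpose, n in counts.most_common():
    print(f"{purpose}: {n} posts ({n / total:.0%})")
```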
It's a tough message that Bud has to deliver to his fellow educators in his district. But it's very difficult to imagine that community of educators getting any better without facing up to the analysis that Bud has done. (And I should add here that I think Bud's findings square with my own assessment of the use of wikis in K-12 settings across the U.S.) This kind of assessment data gives teachers something concrete to use as they build a conversation about how they can get closer to their ideal uses of technology.
So if you haven't already, go read Bud's thesis. Then, if you are interested, come back and read the rest of this post. I have two suggestions of things that Bud could have done differently.
Leverage random sampling
Bud was very wise to draw a sample of posts to evaluate. Rather than try to look at a year's worth of posts, he chose to look at all 233 posts published from August to September, reasoning that there was a lot of blog activity during this time. My first thought was that students are also likely to be lousy bloggers during this period, if we assume that it takes some time during the year to get better. It might have been better to look at some posts from throughout the year.
Now, I don't think it would be necessary to draw an entire second or third sample and read all of them. Rather, I think it might be wise to randomly sample from a few different time periods throughout the year. Maybe read 1/3 of the posts produced in September, 1/3 of the posts produced in December, and 1/3 of the posts produced in May. Or, simply pick how many posts you would like to read, and randomly sample from throughout the year.
The beauty of random sampling is that most of the time, a small random sample will look very similar to the entire population. The process is not difficult: take the list of objects to be analyzed, assign each a number, and generate a series of random numbers at random.org. Evaluate those that win the lottery.
So random sampling could be used to get more of a sense of blogging throughout the year. It could also be used to simply reduce the scope of the project. Reading 1/3 of all of the posts from September, as long as they are randomly drawn, is probably as good as reading the whole set. And for the purposes of a quick evaluation, reading 20 posts randomly drawn is better than doing nothing at all. For educators intimidated by the prospect of sinking tons of time into evaluation, random sampling offers a way to make assessment seem more manageable.
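For those who would rather not click through random.org, here is a minimal sketch of the same lottery in Python, assuming you have a list of post identifiers (URLs, titles, whatever your blog export gives you). The list names and sizes below are stand-ins, not Bud's actual data.

```python
import random

# Stand-in for the real list of post URLs or titles exported from the blogs.
september_posts = [f"september_post_{i:03d}" for i in range(1, 234)]

# Draw a random third of September's posts to read and code.
sample = random.sample(september_posts, k=len(september_posts) // 3)

# Or stratify across the year: the same small draw from each month's list.
posts_by_month = {
    "September": september_posts,
    "December": [],  # fill with that month's exported posts
    "May": [],
}
year_sample = {
    month: random.sample(posts, k=min(20, len(posts)))
    for month, posts in posts_by_month.items()
}
```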
Evaluate impact on student learning goals
In his analysis, Bud takes a "grounded theory" approach to evaluating the blog posts. Basically, he approached the data without preconceived notions of what he should find and developed his set of purpose categories from the data itself. My sense is that Bud's district has no particular goals for classroom blogging, so this grounded theory approach made sense for his thesis.
But schools should have learning goals for their technology interventions. For instance, in Singapore, they use technology for two things: developing self-directed learning skills and collaboration skills. Technology is potentially good at lots of things, but they focus on those two. When schools and districts have goals, they can evaluate technology based on whether it meets those goals. They don't have to use grounded theory to figure out what is going on; they can say "we care about self-directed and collaborative learning, so those are the dimensions we are going to measure."
For instance, I was talking with an assistant principal at an international elementary school that had just mandated that teachers start blogging in order to build stronger relationships with families. Great goal, and one that leads to clear assessment criteria: are blogs building connections between families and the school? A research program for evaluating that initiative might look like this: every quarter for two years, call 10 parents at random. Ask them how their kid is doing. Ask them about communication with the school generally, without mentioning blogs specifically. Over the two years, parents should increasingly bring up the blogs in these conversations and refer to them in a positive light. If, after a year or two, parents in these conversations are still saying "wait, there's a blog?", then either improve the blogging practice or encourage teachers to spend their time in other ways.
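As a sketch of how lightweight that protocol can be, here is one way to draw the quarterly call list, assuming a parent contact roster exported from the school's information system. The roster, names, and helper below are hypothetical; the record-keeping is just a yes/no note of whether each parent brought up the blogs unprompted.

```python
import random

# Hypothetical roster exported from the school's information system.
parent_roster = [f"parent_{i:03d}" for i in range(1, 301)]

def quarterly_call_list(roster, k=10):
    """Pick k parents at random to call this quarter."""
    return random.sample(roster, k)

calls = quarterly_call_list(parent_roster)

# After each call, record whether the parent mentioned the blogs without
# being prompted; comparing that tally quarter over quarter is the whole
# assessment.
mentioned_blogs = {parent: None for parent in calls}  # fill in after the calls
```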
I'm not suggesting that there is something wrong with "letting the data speak for itself," but thinking about research becomes much simpler when you assess progress towards a particular goal rather than any kind of learning that might be happening. (The downside of this approach is that you might miss some important aspects of learning occurring with a technology intervention.)
 
Again, these two points are not to say I think Bud's decisions were wrong, just to suggest some alternatives. What these alternatives have in common is that they both suggest this kind of research can be made even simpler, even more manageable in the context of busy teachers and administrators trying to chart the direction of their ships while they are sailing them.
My sense is that Bud and I end up in very similar places at the end of our research journeys--we're discouraged by the bulk of the activity we see, but we remain optimistic about the possibilities. I think we also share the belief that ed tech advocates need to confront these realities--lots of what we are doing with technology is reproducing practices from inside our classrooms. If technology is to be transformative, we will need to be fearless in confronting the places where it's not working so that we can do better. Hats off to BudTheResearcher for doing just that.