Make Time for Peer Review
We're inundated with so much information these days that it's no surprise we often glaze over articles, latch onto their headlines, and call ourselves "informed". I think this is the tragedy of the information age: anyone can write a blog post with reasonably good conventions and come across as if they know what they're talking about.
This is why I love Diane Ravitch and TFA alum Gary Rubenstein. They take the time to look at the actual studies that people claim "support" their pedagogical or school-wide approaches. They have great blogs and push back against value-added measures and other statistical data that educators cite, figures that are often inflated or just plain wrong.
Taking Time to Read the Studies
I recently read an article by New York Times writer Matt Richtel pointing out that technology infusions in schools had shown negligible gains in student learning. Blogger Scott McLeod counters with his own rebuttal making a variety of points, most notably that the skills taught through new technological learning environments are not easily measured by our current standardized testing system, and that teachers often don't have sufficient training to use these tools effectively. I took it upon myself to read every hyperlink to every study that both authors use to support their points. It was exhausting.
The first interesting report was from 1997, when Clinton assembled a task force to strengthen K-12 education. The task force issued a report making recommendations on a number of fronts, such as teacher training, continuing support of pedagogical practices, budgeting, and universal access, all things that I believe in. The report signs off with this, though:
"To support its conclusion, the committee’s report cited the successes of individual schools that embraced computers and saw test scores rise or dropout rates fall. But while acknowledging that the research on technology’s impact was inadequate, the committee urged schools to adopt it anyhow."
Qualitative studies like this one are opinion, masquerading as "quantitative data".
The poorest evidence came from a Maine laptop initiative, whose results, as we see above, came not from student achievement but from teacher opinions, no doubt to make the case for continuing budget increases. Their "results" were as follows:
The first point is a stiff fart in the wind. For the second, there is ambiguity about what "finishing" means. For the third, they give no indication of how many economically disadvantaged kids outperformed their peers (not to mention that there are instances of that on traditional paper tests as well). Finally, how much did their writing actually improve? The math results were not much better:
"Educators would like to see major trials years in length that clearly demonstrate technology’s effect. But such trials are extraordinarily difficult to conduct when classes and schools can be so different, and technology is changing so quickly," Richtel says. An important note about "engagement", which the author and other educators make as well, is that it's a fluffy term and doesn't necessarily correlate with enhanced learning.
I'll close with Scott McLeod's point that our current testing system does not measure the skills of online learning environments. I'm a big fan of McLeod and his blog "Dangerously Irrelevant", but I do disagree that bubble tests only measure low-level, recall-type information. Take this elementary math problem below:
Taking time to review hyperlinks to research is tiresome in today's information age. However, these are the very skills we hope to engender in our students, so why shouldn't we practice them as well? We as a collective profession have the opportunity to collect data on our practices in order to better the education of our youth, but if everyone is fabricating their results to attract tax dollars for facilities and iPads, we have fallen prey to the same greed and manipulation as our politicians. We must be honest. We must stand with integrity and humility.