Your Tax Dollars at Work: How to Make Sense of the Research
Talbot Bielefeldt

To the school policy maker or classroom teacher seeking guidance on what technology to buy or implement, educational research
may seem like a rambling scripture that anyone can
quote to suit his own purposes. Any proponent of
any intervention can find a citation somewhere to
support a product or procedure, and it may take
longer to track down and read all the citations than
it would to just try out the practice yourself.
One problem is that the citations at the end of
any article often include a mix of evidence types.
One class of evidence is theory, which is a formal
statement about how the world works. Theory is
based on quantitative evidence that tries to specify how likely the world is to work in a particular
way, such as: “95% of the time you would not
find this big a difference between treatment and
control groups if our curriculum didn’t cause it.”
Policy makers, who have to make greatest-good-for-the-greatest-number judgments, are inclined
(and in the United States, essentially required) to
make decisions based on this type of evidence.
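To make that statistical claim concrete, here is a minimal sketch (in Python, with entirely hypothetical scores) of the kind of comparison behind such a statement: a two-sample t-test, where a p-value under 0.05 corresponds to the "95% of the time" claim.

    # Hypothetical illustration only: two-sample t-test on made-up scores.
    # A p-value below 0.05 means a difference this large would arise by
    # chance less than 5% of the time if the curriculum had no effect.
    from scipy import stats

    treatment_scores = [78, 85, 82, 90, 76, 88, 84, 79, 91, 83]
    control_scores = [72, 75, 80, 70, 77, 74, 78, 71, 76, 73]

    t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    if p_value < 0.05:
        print("Unlikely to see this gap by chance if the curriculum had no effect.")
    else:
        print("A gap this size could plausibly occur without any treatment effect.")

In practice, policy decisions rest on whether many such studies, not just one, clear that bar.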
Another class of evidence is an example, or a
documented case of what happened in a particular situation. This type of evidence is typical for
program evaluations, which—regardless of what
theory says should happen—focus on the details of
events in a particular setting. Teachers, who have to
deal with individuals (including students who don’t
respond to interventions according to theory), prefer examples: Sure, kids need to learn letter-sound
combinations, but tell me how this particular phonics program worked in a classroom like mine.
Schools run into problems when they apply
the wrong type of evidence to a decision. For instance, complaints often arise when someone purchases technology based on an anecdotal example
that, it turns out, doesn’t generalize.
Talbot Bielefeldt is a senior research associate with ISTE's Research and Evaluation Department (iste.org/research). You can contact him at talbot@iste.org.
The U.S. government has tried to help schools
find generalizable practices by establishing a
central review site for research studies, the What
Works Clearinghouse (WWC, ies.ed.gov/NCEE/
WWC). Products the WWC deems effective
have multiple rigorous studies that demonstrate
that students benefited and that those benefits
are clearly attributable to the intervention.
However, the WWC has two drawbacks. One is
logistical. If you instituted a 1:1 program this fall,
you probably are still getting up to speed on apps
and hardware. Next year (2014–15) is likely to be
the first opportunity students will have to experience fully implemented lessons using the new
apps. The results won’t come in until spring 2015,
they won’t be analyzed and written up until fall,
and they won’t be accepted for publication until
2016. That means the first time the WWC will be
able to review the materials will be three years after
your district made its purchasing decision. By the
time another district tries to use your experience
as evidence for its own decision, the hardware and
software likely will no longer be manufactured.
The other drawback is more fundamental. The results from a well-controlled study can be
replicated in a similar population and setting, but
if your setting is different from that in the study,
the results are less reliable. A thorough study will
not only have quantitative comparisons of similar
groups with and without the intervention, but will
also include interviews, observations, or other qualitative information that helps identify features of the school and lessons that affected the
outcome. Sometimes these are hard to anticipate.
For example, a web-based simulation works fine
until you run out of bandwidth, at which point the
kids are back to copying notes out of the textbook.
These issues are well known and debated among researchers and evaluators at professional meetings.