Know the NETS
How to Reach Agreement
Back in June, we started the Know the NETS (KtN) column with a scenario in which a middle school science teacher used tarnished pennies to teach
about pH. Students used websites and whole-class
discussion to learn how acids and salt interact with copper.
Then the students did a “wet” lab that measured the pH
of different solutions. They used their data to predict how
each solution would react with a tarnished penny. Finally,
they tested their predictions and wrote up the results.
ISTE’s Research and Evaluation (R&E) Department
evaluators rated the scenario as addressing 8 of 24 NETS•S
performance indicators, including making predictions
(1d), contributing to teams (2d), analyzing data and reporting results (3b, 3c, and 4c), exhibiting a positive attitude toward technology (5b), and understanding and
troubleshooting technology systems (6a and 6c).
Sixty-three L&L readers went online to rate the first
KtN scenario. Most respondents checked between 7 and
13 indicators, with a median of 11—three more than ISTE
evaluators identified. The specific indicators that readers
checked varied.
The table “L&L Readers’ Level of Agreement with ISTE
R&E” shows the percentage of respondents who checked
each indicator and the extent to which they agreed with
the evaluators. It reveals that, among the 63 respondents
(50 of whom were teachers), interpretations of the NETS
differed quite a bit. With only two options (addressed/not
addressed), a reliable rubric would have very high or low
percentages for each item. Instead, there were many split
decisions. (A statistic used to estimate inter-rater reliability
in situations like this is something called Fleiss’ kappa. For
the 24 ratings in two categories by 64 raters—63 readers
plus ISTE R&E—our kappa is about .32. A guideline is that
kappa should be around .70. That would require around
90% agreement on all items.)
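For readers who want to see the arithmetic behind that number, here is a minimal Python sketch of a Fleiss' kappa calculation for ratings like these: each indicator is one item, each rater either checks it or leaves it unchecked, and kappa compares the agreement actually observed with the agreement expected by chance. The per-indicator tallies in the example are hypothetical; the survey's real counts are not published here, only the overall result of about .32.

```python
# Minimal sketch of Fleiss' kappa for binary "addressed / not addressed" ratings.
# In the KtN case there were 24 items (indicators) and 64 raters
# (63 readers plus ISTE R&E); the tallies below are made up for illustration.

def fleiss_kappa(counts):
    """counts: one row per item, each row a list of per-category tallies.
    Every row must sum to the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])

    # p_j: share of all ratings that fall in each category
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p = [t / (n_items * n_raters) for t in totals]

    # P_i: observed agreement on each item
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]

    P_bar = sum(P_i) / n_items       # mean observed agreement
    P_e = sum(pj * pj for pj in p)   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical tallies for four indicators: [checked, not checked] out of 64 raters.
example = [[58, 6], [33, 31], [12, 52], [40, 24]]
print(round(fleiss_kappa(example), 2))
```

Indicators with lopsided tallies, where nearly everyone checks or skips them, pull kappa up; split decisions like the second row pull it down, which is exactly the pattern the table shows.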
This happens during actual program evaluations. Teachers, technology coordinators, and evaluators meet after a
round of evaluations and find that they do not agree on
what they saw. Here are the steps ISTE follows to improve
reliability within a project:
1. Read the fine print. Consider indicator 1a, “apply
existing knowledge.” ISTE R&E did not check this
indicator. Most respondents disagreed, we assume
because students in the scenario applied just-
acquired facts about copper and acids to predict the
results of the experiment. We felt prediction was
already explicitly covered by indicator 1d. Did students “generate new ideas, products, or processes”?
In our by-the-book lab scenario, they did not. In a
real observation, they certainly could, by inventing
better ways to control the experiment or improve
procedures.
An effective training and practice approach is to repeat the KtN process in real classrooms. Observers pair
up to watch different teachers (or one another) at work
and then resolve differences by discussing the fine print,
key attributes, and project-specific interpretations of the
NETS.
We will be back next issue with another scenario. To
submit your ratings for KtN scenarios, visit www.surveymonkey.com/s/knowthenets. ISTE’s Classroom Observation Tool (ICOT) is available for download at nets-assessment.iste.wikispaces.net.