through the cloud. This way, thousands of people can try out promising tools and curricula while simultaneously contributing data for product improvement. Digital learning systems can automatically collect data as end users generate it, and researchers and innovators can then analyze it to improve the tools.
Simulations and serious games are “digital learning systems” that embed formative assessment of learners’ skill development by tracking how learners use and respond to the application. One way this can work is that the machine “learns” and adapts to the user, similar to adaptive testing models in which users receive different information or questions based on their previous work. Scientists are developing pattern-recognition algorithms that can adapt to a learner who uses different strategies or modalities of learning.
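To make the adaptive-testing idea concrete, here is a minimal sketch of how such a system might pick each next question from a learner's previous answers. The item bank, the difficulty scale, and the simple up/down update rule are all illustrative assumptions, not a description of any real product.

```python
# Sketch of adaptive item selection: serve the item closest to the
# learner's estimated ability, then revise the estimate after each answer.
# All names and numbers here are hypothetical.

def next_item(item_bank, ability):
    """Pick the unanswered item whose difficulty is closest to the
    learner's current estimated ability."""
    return min(item_bank, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Nudge the estimate up after a correct answer, down after an
    incorrect one (a crude stand-in for real psychometric estimation)."""
    return ability + step if correct else ability - step

# Example: a learner answers three items in sequence.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([-1.0, 0.0, 1.0, 2.0])]
ability = 0.0
for was_correct in [True, True, False]:
    item = next_item(bank, ability)
    bank.remove(item)                 # don't repeat items
    ability = update_ability(ability, was_correct)
```

Real adaptive engines use far richer models (e.g., item response theory), but the feedback loop they build on is the one shown here: every response changes what the learner sees next.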
By embedding measurements such as time on task and diversity of problem-solving strategies, data on user interaction with a digital learning system can be used to assess digital age skills, such as persistence and creativity. Dashboards that make this real-time data available to teachers and learners can support formative assessment, learning feedback, and strategies for personalized learning. These systems mean that we can seamlessly integrate assessment into learning, unlike the costly testing methods we currently use.
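As a rough illustration of the embedded measurements described above, the sketch below derives time on task and the number of distinct strategies tried from a log of learner interaction events. The event fields ("ts", "task", "strategy") are assumptions invented for this example, not a real system's schema.

```python
# Illustrative metric extraction from an interaction log: time spent on
# each task and how many distinct strategies the learner tried on it.

from collections import defaultdict

def summarize(events):
    """Compute per-task time on task and distinct-strategy counts from
    a chronologically ordered list of timestamped events."""
    time_on_task = defaultdict(float)
    strategies = defaultdict(set)
    # Time attributed to a task = gap between an event on that task and
    # the next event of any kind.
    for prev, curr in zip(events, events[1:]):
        time_on_task[prev["task"]] += curr["ts"] - prev["ts"]
    for e in events:
        strategies[e["task"]].add(e["strategy"])
    return {
        task: {"seconds": time_on_task[task],
               "distinct_strategies": len(strategies[task])}
        for task in strategies
    }

log = [
    {"ts": 0.0,  "task": "fractions-1", "strategy": "guess"},
    {"ts": 30.0, "task": "fractions-1", "strategy": "draw-model"},
    {"ts": 75.0, "task": "fractions-2", "strategy": "draw-model"},
]
report = summarize(log)
```

A dashboard could surface exactly this kind of summary to a teacher in real time, flagging, say, a learner who persists on one task but tries only a single strategy.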
Researchers are also looking at how integrating school data with sources beyond the school can help steer learners in directions that inspire them, whether they are inside or outside of formal schooling. We can record and validate informal learning so that employers and educators will value it. Badging systems for participation in massive open online courses (MOOCs) are one example (see “Massive and Open” on page 12 for more about MOOCs). These same methods of integrating data sources can help identify students struggling with issues outside of school so that we can support them more holistically.
Crowdsourcing and Collaboration
One way to evaluate new tools and innovations is through the user ratings and feedback that Web 2.0 formats make possible. What L&L reader hasn’t tapped into repositories of course modules and online resources to support their teaching? The cumulative effect of practitioner feedback and expert ratings is a body of evidence representing the effectiveness of learning objects across diverse contexts. Chapter 5 of the report focuses on how best to manage these growing repositories and refers readers to many online sources of materials.
The report presents an Evidence Framework that “is intended to help education stakeholders implement a process of planning, creating, choosing, and combining appropriate evidence-gathering approaches that could be useful under different circumstances.” Both researchers and practitioners are critical to gathering evidence.