In a recent webinar hosted by the Society for Research on Educational Effectiveness (SREE), SEERNet members Drs. Jeremy Roschelle, Anthony Botelho, and Ben Motz explored how Digital Learning Platforms (DLPs) are evolving from simple instructional tools into a powerful, scalable research infrastructure that can provide more precise answers to "what works" in education.

The Shift from "Soup-to-Nuts" to Automated Infrastructure

Dr. Jeremy Roschelle, Principal Investigator of SEERNet, opened the session by contrasting modern methods with the "infrastructure-less" research of the 1990s. In those days, researchers had to physically wire cables, install computers, and bring video cameras into classrooms, a "soup-to-nuts" process that was prohibitively expensive.

Today, schools use DLPs as everyday instructional infrastructure. By leaning into the idea that these platforms are research infrastructure, the SEERNet community of scholars aims to:

  • Conduct research that is less disruptive to daily classroom life.

  • Reduce costs and increase the speed of findings.

  • Study learning in realistic contexts throughout the entire school year.

  • Shorten the time from research discovery to large-scale impact.

A prime example of this infrastructure is UpGrade, an open-source tool used by Carnegie Learning’s MATHia platform. UpGrade allows researchers to deploy student-level randomized A/B tests seamlessly within the platform. In one instance, researchers used UpGrade to compare a static math diagram to an interactive animated version; within just four months, they collected data from over 4,000 students, finding that the animated version led to fewer errors and faster mastery.
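
UpGrade's own API isn't shown in the webinar, but the underlying idea of student-level random assignment is easy to sketch. The snippet below is a minimal illustration in Python: `assign_condition`, the experiment name, and the simulated error counts are all hypothetical, not part of UpGrade.

```python
import hashlib
import random

def assign_condition(student_id: str, experiment: str, conditions: list) -> str:
    """Deterministically assign a student to a condition.

    Hashing (experiment, student_id) gives every student a stable,
    effectively random assignment without storing a lookup table.
    This mimics the general idea of student-level A/B assignment;
    it is NOT UpGrade's actual algorithm or API.
    """
    digest = hashlib.sha256(f"{experiment}:{student_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

# Simulate outcomes for a static vs. animated diagram comparison
# (made-up effect sizes, for illustration only).
random.seed(0)
errors = {"static": [], "animated": []}
for i in range(4000):
    cond = assign_condition(f"student-{i}", "diagram-study", ["static", "animated"])
    base = 3.0 if cond == "static" else 2.4  # hypothetical mean error counts
    errors[cond].append(max(0.0, random.gauss(base, 1.0)))

for cond, vals in errors.items():
    print(f"{cond}: n={len(vals)}, mean errors={sum(vals)/len(vals):.2f}")
```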

New Methodologies: Beyond the Average Effect

Dr. Anthony Botelho from the University of Florida delved into the specific methodologies that DLPs enable. While traditional research often focuses on a single "end result" (like a test score), DLPs collect fine-grained clickstream data. This allows researchers to see the learning process as it unfolds, including timestamps of every action, help-seeking behaviors, and multiple attempts at a problem.
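
To make "fine-grained clickstream data" concrete, here is one hypothetical shape such an event log could take; the field names are illustrative rather than any specific platform's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ClickstreamEvent:
    """One timestamped learner action, as a DLP might log it.

    Field names are illustrative, not any platform's real schema.
    """
    student_id: str
    problem_id: str
    action: str               # e.g. "attempt", "hint_request", "video_play"
    correct: Optional[bool]   # None for non-answer actions
    timestamp: datetime

# A few events from one student's session: two attempts with a hint
# request in between, capturing the process rather than just the end result.
session = [
    ClickstreamEvent("s1", "p42", "attempt", False, datetime(2024, 3, 1, 9, 0, 5)),
    ClickstreamEvent("s1", "p42", "hint_request", None, datetime(2024, 3, 1, 9, 0, 40)),
    ClickstreamEvent("s1", "p42", "attempt", True, datetime(2024, 3, 1, 9, 1, 10)),
]
```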

Botelho highlighted several key areas of study enabled by this data:

  • Knowledge Tracing: Using machine learning to track how knowledge is built over time (a minimal sketch follows this list).

  • Behavioral Detectors: Identifying "gaming the system," "wheel spinning" (unproductive persistence), or "stopout" (when students give up).

  • Affective Computing: Inferring emotional states like frustration, confusion, or anxiety from interaction patterns.
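
As a taste of the first item, here is one classic approach to knowledge tracing: a single Bayesian Knowledge Tracing (BKT) update step. The parameter values are arbitrary placeholders, not fitted estimates.

```python
def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.15) -> float:
    """One Bayesian Knowledge Tracing (BKT) step.

    Given the current probability that the student knows the skill,
    update on the observed response, then apply the chance of having
    learned from the attempt. Parameter values here are arbitrary.
    """
    if correct:
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

# Trace one student's estimated mastery over a sequence of attempts.
p = 0.3  # prior probability of knowing the skill
for obs in [False, True, True, True]:
    p = bkt_update(p, obs)
    print(f"answered {'right' if obs else 'wrong'} -> P(know) = {p:.2f}")
```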

Critically, Botelho argued that this scale allows us to move beyond "what works on average" to ask "what works for whom and under what conditions?" By using advanced statistical methods like causal forests, researchers can identify qualitative interactions: instances where an intervention might help one group of students while potentially hindering another. This nuance is vital for creating truly personalized and adaptive learning technologies.
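
The webinar names causal forests; as a lighter-weight stand-in for the same idea, the sketch below uses a "T-learner": fit separate outcome models for treated and control students, then read per-student differences as estimated individual effects. The data are simulated so that a qualitative interaction exists by construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Simulated data: the intervention helps high-prior-knowledge students
# and slightly hurts low-prior-knowledge ones (a qualitative interaction).
rng = np.random.default_rng(0)
n = 5000
prior_knowledge = rng.uniform(0, 1, n)
treated = rng.integers(0, 2, n)
effect = np.where(prior_knowledge > 0.5, 0.3, -0.1)
score = prior_knowledge + treated * effect + rng.normal(0, 0.1, n)

X = prior_knowledge.reshape(-1, 1)

# T-learner: one outcome model per arm. This is a simpler relative of
# the causal forests mentioned in the webinar, not the same estimator.
m_treat = RandomForestRegressor(random_state=0).fit(X[treated == 1], score[treated == 1])
m_ctrl = RandomForestRegressor(random_state=0).fit(X[treated == 0], score[treated == 0])

cate = m_treat.predict(X) - m_ctrl.predict(X)  # per-student effect estimates
print("estimated effect, low prior knowledge: ",
      round(float(cate[prior_knowledge <= 0.5].mean()), 2))
print("estimated effect, high prior knowledge:",
      round(float(cate[prior_knowledge > 0.5].mean()), 2))
```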

The Streetlight Effect and the Generalizability Crisis

Dr. Ben Motz of Indiana University introduced the concept of the Streetlight Effect. He argued that researchers often study learning "under the streetlight," running studies only where they already have buy-in and data, and then make universal claims. This leads to a generalizability crisis, in which sweeping recommendations lack a connection to the diverse reality of classrooms.

To solve this, Motz advocates for the "Many Classes" research model. Using the research plugin Terracotta (which integrates with the Canvas Learning Management System), researchers can run the same experiment across dozens of diverse courses simultaneously.
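
One natural way to analyze such a "Many Classes" dataset is a mixed-effects model, which estimates the average treatment effect while acknowledging that each course contributes its own variation. The sketch below runs on simulated data with hypothetical effect sizes; it illustrates the shape of that analysis and is not Terracotta's built-in pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated "Many Classes" data: one experiment replicated across
# 30 courses, with the treatment effect varying by course.
rng = np.random.default_rng(1)
rows = []
for course in range(30):
    course_effect = rng.normal(0.05, 0.05)  # hypothetical per-course effect
    for _ in range(60):
        treated = int(rng.integers(0, 2))
        score = 0.7 + treated * course_effect + rng.normal(0, 0.1)
        rows.append({"course": course, "treated": treated, "score": score})
df = pd.DataFrame(rows)

# Random intercept per course: the fixed "treated" coefficient estimates
# the average effect across classes rather than one class's idiosyncrasy.
model = smf.mixedlm("score ~ treated", df, groups=df["course"]).fit()
print(model.summary())
```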

Motz shared a study on pre-questions (questions posed to students before they receive instruction). While 100 years of laboratory research suggested pre-questions are universally beneficial, Motz's "Many Classes" study revealed a more complex picture (a quick significance check follows the list):

  • The Benefit: On average, there was a 5% improvement in performance.

  • The Catch: Pre-questions actually caused disengagement in some students; 21.6% of students who received pre-questions didn't even start the instructional video, compared to 16.5% in the control group.

  • Rich-Get-Richer Effect: Students who already had high prior knowledge benefited most, while those with low prior knowledge saw less benefit or disengaged entirely.
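
The webinar reports the 21.6% vs. 16.5% disengagement rates but not the group sizes, so a significance check can only be sketched under assumed sample sizes. The two-proportion z-test below uses a purely hypothetical n = 1,000 per group.

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 21.6% of pre-question students vs. 16.5% of control students never
# started the video. Group sizes were not reported in the webinar;
# n1 = n2 = 1000 is a purely hypothetical assumption.
z = two_proportion_z(0.216, 1000, 0.165, 1000)
print(f"difference = {0.216 - 0.165:.1%}, z = {z:.2f}")  # |z| > 1.96 => significant at the 5% level
```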

Without the broad illumination provided by DLPs, researchers might have continued recommending pre-questions universally, inadvertently widening achievement gaps.

Getting Involved and Finding Support

For those looking to enter this field, the presenters suggested starting with the data already being collected by their institutions or platforms. They also emphasized that this work is inherently collaborative, requiring the intersection of computer science, statistics, and learning theory.

The webinar concluded with several funding and collaborative opportunities:

  • AIMS EduData Initiative: Offers grants for math platforms and researchers, with a new call for proposals expected in January 2026.

  • SEERNet: An IES-funded network of platforms and research teams that publishes working papers and blogs for newcomers and experts alike.

  • SafeInsights: An NSF-funded initiative for large-scale, privacy-protective research, expected to be open for business in about a year.

  • Terracotta: A free plugin for Canvas that allows researchers to turn their own courses into experimental labs.

By embracing Digital Learning Platforms as infrastructure, the education community can move toward a more precise, equitable, and impactful era of research—one that looks beyond the single streetlight to illuminate the entire landscape of learning.