Blog Posts and News Stories

New Multi-State RCT with Imagine Learning

Empirical Education is excited to announce a new study on the effectiveness of Imagine Math, an online supplemental math program that helps students build conceptual understanding, problem-solving skills, and a resilient attitude toward math. The program provides adaptive instruction so that students can work at their own pace and offers live support from certified math teachers as students work through the content. Imagine Math also includes diagnostic benchmarks that allow educators to track progress at the student, class, school, and district level.

The research questions to be answered by this study are:

  1. What is the impact of Imagine Math on student achievement in mathematics in grades 6–8?
  2. Is the impact of Imagine Math different for students with diverse characteristics, such as those starting with weak or strong content-area skills?
  3. Are differences in the extent of use of Imagine Math, such as the number of lessons completed, associated with differences in student outcomes?

The new study will use a randomized controlled trial (RCT), or randomized experiment, in which two equivalent groups of students are formed through random assignment. The experiment will specifically use a within-teacher RCT design, with randomization taking place at the classroom level for eligible math classes in grades 6–8.

Eligible classes will be randomly assigned to either use or not use Imagine Math during the school year, with academic achievement compared at the end of the year to determine the impact of the program on grades 6–8 mathematics achievement. In addition, Empirical Education will make use of Imagine Math’s usage data for potential analysis of the program’s impact on different subgroups of users.
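As a rough illustration of what within-teacher, classroom-level random assignment involves (a minimal sketch with made-up class and teacher identifiers, not the study’s actual assignment procedure), each teacher’s eligible classes can be shuffled and split between the program and business-as-usual conditions:

```python
import random
from collections import defaultdict

# Hypothetical roster: class and teacher identifiers are made-up placeholders.
classes = [
    {"class_id": "C1", "teacher": "T1"},
    {"class_id": "C2", "teacher": "T1"},
    {"class_id": "C3", "teacher": "T2"},
    {"class_id": "C4", "teacher": "T2"},
]

random.seed(2018)  # fixed seed so the example assignment is reproducible

# Group eligible classes by teacher, then randomly assign half of each
# teacher's classes to Imagine Math and half to business as usual.
by_teacher = defaultdict(list)
for c in classes:
    by_teacher[c["teacher"]].append(c)

assignment = {}
for teacher, cls in by_teacher.items():
    random.shuffle(cls)
    half = len(cls) // 2
    for c in cls[:half]:
        assignment[c["class_id"]] = "Imagine Math"
    for c in cls[half:]:
        assignment[c["class_id"]] = "comparison"

print(assignment)
```

Because the split happens within each teacher’s set of classes, differences between teachers are balanced across the two conditions by design.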

This is Empirical Education’s first project with Imagine Learning, and it draws on our extensive experience conducting large-scale, rigorous experimental impact studies. The study is commissioned by Imagine Learning and will take place in multiple school districts and states across the country, including Hawaii, Alabama, Alaska, and Delaware.

2018-08-03

For Quasi-experiments on the Efficacy of Edtech Products, it is a Good Idea to Use Usage Data to Identify Who the Users Are

With edtech products, usage data allow for precise measures of exposure and of whether critical elements of the product were implemented. Providers often specify the amount of exposure or the kind of usage that is required to make a difference. Furthermore, educators often want to know whether the program has an effect when implemented as intended. Researchers can readily use data generated by the product (usage metrics) to identify compliant users, or to measure the kind and amount of implementation.

Since researchers generally track product implementation and statistical methods allow for adjustments for implementation differences, it is possible to estimate the impact on successful implementers, or, technically, on the subset of study participants who were compliant with treatment. It is, however, very important that the criteria researchers use in setting a threshold be grounded in a model of how the program works. This will, for example, point to critical components that can be referred to in specifying compliance. Without a clear rationale for the threshold set in advance, the researcher may appear to be “fishing” for the amount of usage that produces an effect.
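As a hypothetical sketch of this step (the column names and the 20-lesson threshold are assumptions for illustration, not criteria from any particular study), flagging compliant users from product usage data might look like this:

```python
import pandas as pd

# Hypothetical usage log: one row per student with the number of lessons
# completed. The 20-lesson threshold is an assumed example; in practice the
# criterion should be specified in advance and grounded in the program's
# logic model.
usage = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "lessons_completed": [32, 5, 21, 0],
})

THRESHOLD = 20  # pre-specified compliance criterion (illustrative)
usage["compliant"] = usage["lessons_completed"] >= THRESHOLD

print(usage[["student_id", "compliant"]])
```

The important point is that the threshold is fixed before looking at outcomes, so the definition of “user” cannot drift toward whatever level of usage happens to show an effect.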

Some researchers reject comparison studies in which identification of the treatment group occurs after the product implementation has begun. This is based in part on the concern that the subset of users who comply with the suggested amount of usage will get more exposure to the program, and that more exposure will result in a larger effect. This assumes, of course, that the product is effective; otherwise, the students and teachers will have been wasting their time and will likely perform worse than the comparison group.

There is also the concern that the “compliers” may differ from the non-compliers (and non-users) on some characteristic that isn’t measured, and that even after controlling for measurable variables (prior achievement, ethnicity, English proficiency, etc.), there could be a personal characteristic that results in an otherwise ineffective program becoming effective for them. We reject this concern and take the position that a product’s effectiveness can be strengthened or weakened by many factors. A researcher conducting any matched comparison study can never be certain that there isn’t an unmeasured variable that is biasing the result. (That’s why the What Works Clearinghouse accepts quasi-experiments only “with reservations.”) However, we believe that as long as the QE controls for the major factors that are known to affect outcomes, the study can meet the Every Student Succeeds Act requirement that the researcher “controls for selection bias.”
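One common way to adjust for measured covariates is a regression of the outcome on treatment status plus those covariates. The sketch below uses simulated data and assumed variable names; it is not the specific model from any Empirical Education study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; variable names and effect sizes are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # 1 = identified user of the product
    "prior_score": rng.normal(50, 10, n),      # prior achievement
    "english_learner": rng.integers(0, 2, n),  # English proficiency indicator
})
df["post_score"] = (
    0.8 * df["prior_score"] + 2.0 * df["treated"] + rng.normal(0, 5, n)
)

# Covariate-adjusted comparison: the coefficient on `treated` estimates the
# user vs. non-user difference after controlling for the measured covariates.
model = smf.ols(
    "post_score ~ treated + prior_score + english_learner", data=df
).fit()
print(model.params["treated"], model.bse["treated"])
```

Adjustment of this kind can only account for the variables that are actually measured, which is exactly the limitation discussed above.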

With those caveats, we believe that a QE that identifies users by their compliance with a pre-specified level of usage is a good design. Studies that look at the measurable variables that modify the effectiveness of a product are not only useful to schools in answering their question, “Is the product likely to work in my school?” but also point the developer and product marketer to ways the product can be improved.

2018-07-27

How Are Edtech Companies Thinking About Data and Research?

Forces of the rebellion were actively at work at SIIA’s Annual Conference last week in San Francisco. Snippets of conversation revealed a common theme of harnessing data to better understand and serve the needs of schools and districts.

This theme was explored in depth during one panel session, “Efficacy and Research: Why It Matters So Much in the Education Market”, where edtech executives discussed the phases and roles of research as it relates to product improvement and marketing. Moderated by Pearson’s Gary Mainor, session panelists included Andrew Coulson of the MIND Research Institute, Kelli Hill of Khan Academy, and Shawn Mahoney of McGraw-Hill Education.

Coulson, who was one of the contributing reviewers of our Research Guidelines, stated that all signs point to an “exponential increase” in school district customers asking for usage data. He advised fellow edtech entrepreneurs to start paying attention to fine-grained usage data, as it is becoming necessary to provide it to customers. Panelist Kelli Hill agreed with the importance of making data visible, adding that Khan Academy proactively provides users with monthly usage reports.

In addition to providing helpful advice for edtech sales and marketing teams, the session also addressed a pervasive misconception that all it takes is “one good study” to validate and prove the effectiveness of a program. A company could commission one rigorous randomized trial that reports positive results and earns an endorsement from the What Works Clearinghouse, but that study might be outdated and, more importantly, not relevant to what schools and districts are looking for. Panelist Shawn Mahoney, Chief Academic Officer of McGraw-Hill Education, affirmed that school districts are interested in “super contextualized research” and look for recent and multiple studies when evaluating a product. Q&A discussions with the panelists revealed that school decision makers are quick to claim “what works for someone else might not work for us”, supporting the notion that multiple research studies, reporting effects for various subgroups and populations of students, are much more useful and reflective of district needs.

SIIA’s gathering proved to be a fruitful event, allowing us to reconnect with old colleagues and meet new ones, and leaving us with a number of useful insights and optimistic possibilities for new directions in research.

2018-06-22