Blog Posts and News Stories

New RFP Calls for Building Regional Research Capacity

The US Department of Education (ED) has just released the eagerly anticipated RFP for the next round of the Regional Education Laboratories (RELs). This RFP contains some notable departures from how the RELs have been working, which may be of particular interest to state and local educators.

For those unfamiliar with federal government organizations, the RELs are part of the National Center for Education Evaluation and Regional Assistance (NCEE), which is within the Institute of Education Sciences (IES), part of ED. The country is divided into ten regions, each one served by a REL—so the RFP announced today is really a call for proposals in ten different competitions. The RELs have been in existence for decades, but their mission has evolved over time. For example, the previous RFP (about six years ago) put a strong emphasis on rigorous research, particularly randomized controlled trials (RCTs), leading the contractors in each of the ten regions to greatly expand their capacity, in part by bringing in subcontractors with the requisite technical skills. (Empirical conducted or assisted with RCTs in four of the ten regions.) The new RFP changes the focus in two essential ways.

First, one of the major tasks is building capacity for research among practitioners. Educators at the state and local levels told ED that they needed more capacity to make use of the longitudinal data systems that ED has invested in through grants to the states. It is one thing to build the data systems; it is another to use the data to generate evidence that can inform decisions about policies and programs. Last month at the conference of the Society for Research on Educational Effectiveness, Rebecca Maynard, Commissioner of NCEE, talked about building a “culture of experimentation” among practitioners and building their capacity for simpler experiments that take less time and cost less than those NCEE has typically contracted for. Her point was that the resulting evidence is more likely to be used if the practitioners are “up close and immediate.”

The second idea found in the RFP is that each regional lab should work through “alliances” of state and local agencies. These alliances would cross state boundaries (at least within the region) and would provide an important part of the REL’s research agenda. The idea goes beyond having an advisory panel that requests answers to questions: the alliances are also expected to build their own capacity to answer those questions using rigorous research methods, applying them cost-effectively and opportunistically. The capacity of the alliances should outlive the support provided by the RELs. If your organization is part of an existing alliance and would like to get better at using and conducting research, teams now forming to pursue the REL contracts would be happy to hear from you. (If you’re not sure whom to call, let us know and we’ll put you in touch with an appropriate team.)

2011-05-11

A Conversation About Building State and Local Research Capacity

John Q. Easton, director of the Institute of Education Sciences (IES), came to New Orleans recently to participate in the annual meeting of the American Educational Research Association. At one of his stops, he was the featured speaker at a meeting of the Directors of Research and Evaluation (DRE), an organization composed of school district research directors. (DRE is affiliated with AERA and was recently incorporated as a 501(c)(3).) John started his remarks by pointing out that for much of his career he was a school district research director and felt a great affinity with the group. He introduced the directions that IES was taking, especially how it was approaching working with school systems. He spent most of the hour fielding questions and engaging in discussion with the participants. Several interesting points came out of the conversation about roles for the researchers who work for education agencies.

Historically, most IES research grant programs have been aimed at university or other academic researchers. It is noteworthy that even in a program for “Evaluation of State and Local Education Programs and Policies,” grants have been awarded only to universities and large research firms. There is no expectation that researchers working for the state or local agency would be involved in the research beyond implementing the program. The RFP for the next generation of Regional Education Laboratory (REL) contracts may help to change that. The new RFP expects the RELs to work closely with education agencies to define their research questions and to assist alliances of state and local agencies in developing their own research capacity.

Members of the audience noted that, as district directors of research, they often spend more time reviewing research proposals from students and professors at local colleges who want to conduct research in their schools than answering questions initiated by the district. Funded researchers treat the districts as the “human subjects,” paying incentives to participants and sometimes paying for data services. But the districts seldom participate in defining the research topic, conducting the studies, or benefiting directly from the reported findings. The new mission of the RELs to build local capacity will be a major shift.

Some in the audience pointed out reasons to be skeptical that this REL agenda would be possible. How can we build capacity if research and evaluation departments across the country are being cut? In fact, very little is known about the number of state or district practitioners whose capacity for research and evaluation could be built by applying the REL resources. (Perhaps a good first research task for the RELs would be to conduct a national survey to measure the existing capacity.)

John made a good point in reply: IES and the RELs have to work with the district leadership—not just the R&E departments—to make this work. The leadership has to have a more analytic view. They need to see the value of having an R&E department that goes beyond test administration and is able to obtain evidence to support local decisions. Cultivating a research culture in the district would allow evaluation to be routinely built into new program implementations from the beginning. The value of the research would be demonstrated in the improvements resulting from informed decisions. Without a district leadership team that values research to find out what works for the district, internal R&E departments will not be seen as an important capacity.

Some in the audience pointed out that, in parallel with building a research culture in districts, it will be necessary to build a practitioner culture among researchers. It would be straightforward for IES to require that research grantees and contractors engage district R&E staff in the actual work, rather than just having them review the research plan and sign the FERPA agreement. Practitioners ultimately hold the expertise in how programs and research can be implemented successfully in the district, and involving them improves the overall quality and relevance of the research.

2011-04-20

Quasi-experimental Design Used to Build Evidence for Adolescent Reading Intervention

A study of Jamestown Reading Navigator (JRN) from McGraw-Hill (now posted on our reports page), conducted in Miami-Dade County Public Schools, found positive results on the Florida state reading test (FCAT) for high school students in their intensive reading classes. JRN is an online application whose internal record keeping makes it possible to identify the treatment group for a comparison design. The district provided the full student, teacher, and roster data for 9th and 10th grade intensive reading classes, while JRN’s computer logs identified the student and teacher users. The quasi-experimental design was strengthened by using schools with both JRN and non-JRN students. Of the 70 schools that had JRN logs, 23 had both JRN and non-JRN intensive reading classes and sufficient data for analysis.
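To illustrate the kind of within-school comparison sample described above, here is a minimal sketch of how usage logs might be merged with district rosters to flag treatment students and retain only schools that enroll both groups. This is our own illustration, not the study’s actual code; the file names and column names are hypothetical.

```python
# Minimal sketch of forming a within-school comparison sample from usage
# logs and district rosters. File and column names are hypothetical.
import pandas as pd

rosters = pd.read_csv("intensive_reading_rosters.csv")  # school_id, student_id, grade, ...
jrn_logs = pd.read_csv("jrn_usage_logs.csv")            # student_id for each JRN user

# Treatment flag: a student counts as a JRN user if they appear in the logs.
rosters["jrn_user"] = rosters["student_id"].isin(jrn_logs["student_id"])

# Keep only schools with both JRN and non-JRN intensive reading students,
# so treatment and comparison students share the same school context.
groups_per_school = rosters.groupby("school_id")["jrn_user"].nunique()
mixed_schools = groups_per_school[groups_per_school == 2].index
analysis_sample = rosters[rosters["school_id"].isin(mixed_schools)]

print(analysis_sample["school_id"].nunique(), "schools in the analysis sample")
```

Restricting the sample this way lets the comparison be made among students who share the same school conditions, which is what strengthens the design.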

Download the 2010 report here.

2011-04-15

Looking Back 35 Years to Learn about Local Experiments

With the growing interest among federal agencies in building local capacity for research, we took another look at an article by Lee Cronbach published in 1975. We found it has a lot to say about conducting local experiments and about the implications for generalizability. Cronbach worked for much of his career at Empirical’s neighbor, Stanford University, and his work has had a direct and indirect influence on our thinking. Some may interpret Cronbach’s work as stating that randomized trials of educational interventions have no value because of the complexity of interactions among subjects, contexts, and the experimental treatment. In any particular context, these interactions are infinitely complex, forming a “hall of mirrors” (as he famously put it, p. 119), making experimental results—which at most can address a small number of lower-order interactions—irrelevant. We don’t read it that way. Rather, we see powerful insights as well as cautions for conducting the kinds of field experiments that are beginning to show promise for providing educators with useful evidence.

We presented these ideas at the Society for Research on Educational Effectiveness conference in March, building the presentation around a set of memorable quotes from the 1975 article. Here we highlight some of the main ideas.

Quote #1: “When we give proper weight to local conditions, any generalization is a working hypothesis, not a conclusion…positive results obtained with a new procedure for early education in one community warrant another community trying it. But instead of trusting that those results generalize, the next community needs its own local evaluation” (p. 125).

Practitioners are making decisions for their local jurisdiction. An experiment conducted elsewhere (even one spanning many locales, with the results averaged) provides a useful starting point, but not “proof” that the program will or will not work in the same way locally. Experiments give us a working hypothesis concerning an effect, but it has to be tested against local conditions at the appropriate scale of implementation. This brings to mind California’s experience with class size reduction following the famous experiment in Tennessee, where the working hypothesis corroborated through the experiment did not transfer to a different context. We also see the applicability of Cronbach’s ideas in the Investing in Innovation (i3) program, where initial evidence is being taken as a warrant to scale up interventions, but where the grants included funding for research under new conditions, in which implementation may head in unanticipated directions and lead to new effects.

Quote #2: “Instead of making generalization the ruling consideration in our research, I suggest that we reverse our priorities. An observer collecting data in one particular situation…will give attention to whatever variables were controlled, but he will give equally careful attention to uncontrolled conditions…. As results accumulate, a person who seeks understanding will do his best to trace how the uncontrolled factors could have caused local departures from the modal effect. That is, generalization comes late, and the exception is taken as seriously as the rule” (pp. 124-125).

Finding, or even seeking out, conditions that lead to variation in the treatment effect facilitates external validity, as we build an account of the variation. The fact that an estimate of average impact is not robust across conditions should not be seen as a threat to generalizability. We should spend some time looking at the ways the intervention interacts with local characteristics, in order to determine which factors account for heterogeneity in the impact and which do not. Though this activity is exploratory and not necessarily anticipated in the design, it provides the basis for understanding how the treatment plays out and why its effect may not be constant across settings. Over time, generalizations can emerge as we compile an account of the different ways in which the treatment is realized and the conditions that suppress or accentuate its effects.
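One common way to examine such heterogeneity is to test treatment-by-moderator interactions. The sketch below is our own illustration, not anything from Cronbach or a specific study; the data file and the variable names (outcome, pretest, treatment, local_factor) are hypothetical stand-ins.

```python
# Minimal sketch of a moderator analysis: does the treatment effect vary
# with a local condition? File and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # outcome, pretest, treatment (0/1), local_factor

# "treatment * local_factor" expands to both main effects plus their
# interaction; the interaction coefficient estimates how the treatment
# effect changes across levels of the moderator.
model = smf.ols("outcome ~ treatment * local_factor + pretest", data=df).fit()
print(model.summary())
```

A nonzero interaction term is evidence that the average impact is not constant across conditions, which is exactly the kind of “local departure from the modal effect” that Cronbach urges us to take as seriously as the rule.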

Quote #3: “Generalizations decay” (p. 122).

In the social policy arena, and especially with the rapid development of technologies, we can’t expect interventions to stay constant. And we certainly can’t expect the contexts of implementation to be the same over many years. Quicker turnaround in our studies is therefore necessary, not just because decision-makers need to act, but because any finding may have a short shelf life.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30(2), 116-127.

2011-03-21

Conference Season 2011

Empirical researchers will again be on the road this conference season, and we’ve included a few new conference stops. Come meet our researchers as we discuss our work at the following events. If you will be present at any of these, please get in touch so we can schedule a time to speak with you, or come by to see us at our presentations.

NCES-MIS

This year, the NCES-MIS “Deep in the Heart of Data” Conference will offer more than 80 presentations, demonstrations, and workshops conducted by information system practitioners from federal, state, and local K-12 agencies.

Come by and say hello to one of our research managers, Joseph Townsend, who will be running Empirical Education’s table display at the Hilton Hotel in Austin, Texas, February 23-25. Joe will be presenting interactive demonstrations of MeasureResults, which allows school district staff to conduct complete program evaluations online.

SREE

Attendees of this spring’s Society for Research on Educational Effectiveness (SREE) Conference, held in Washington, DC, March 3-5, will have the opportunity to discuss questions of generalizability with Empirical Education’s Chief Scientist, Andrew Jaciw, and President, Denis Newman, at two poster sessions. The first poster, entitled External Validity in the Context of RCTs: Lessons from the Causal Explanatory Tradition, applies insights from Lee Cronbach to current RCT practices. In the second poster, The Use of Moderator Effects for Drawing Generalized Causal Inferences, Jaciw addresses issues in multi-site experiments. They look forward to discussing these posters both online at the conference website and in person.

AEFP

We are pleased to announce that we will have our first showing this year at the Association for Education Finance and Policy (AEFP) Annual Conference. Join us on the afternoon of Friday, March 25th, at the Grand Hyatt in Seattle, WA, as Empirical’s research scientist, Valeriy Lazarev, presents a poster on Cost-benefit analysis of educational innovation using growth measures of student achievement.

AERA

We will again have a strong showing at the 2011 American Educational Research Association (AERA) Conference. Join us in festive New Orleans, April 8-12, for the final results on the efficacy of the PCI Reading Program, our qualitative findings from the first year of formative research on our MeasureResults online program evaluation tool, and more.

View our AERA presentation schedule for more details and a complete list of our participants.

SIIA

This year’s SIIA Ed Tech Industry Summit will take place in gorgeous San Francisco, just 45 minutes north of Empirical Education’s headquarters in the Silicon Valley. We invite you to schedule a meeting with us at the Palace Hotel from May 22-24.

2011-02-18