Research

Empirical’s research scientists bring an extraordinary level of expertise in the methods of education science.

We are experts in the standards of IES’s What Works Clearinghouse and have served as a subcontractor on its review process. At the same time, we have moved well beyond the narrow focus on correct design and analysis (“internal validity”) to develop ways of understanding how, for whom, and under what conditions a program or policy works. This question is the key to the relevance of research and the basis for the coherence of our company’s work.

Reducing the Cost of Experiments

We have pioneered work in conducting low-cost experiments that help decision-makers answer local questions. These studies take advantage of planned implementations or pilots, such as when a program is scaled up within a jurisdiction; our experiment in Alabama is cited as the exemplar of such “scale-up” experiments in the IES/NSF common guidelines for research. Our engineers and analysts have also prototyped tools for running quasi-experimental analyses against existing state and district data.
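
As a rough illustration of what such a tool might do, the sketch below runs a nearest-neighbor propensity-score matching analysis on simulated data. The column names (prior_score, frl_pct, treated, post_score) are hypothetical placeholders, not an actual state or district schema.

```python
# A minimal sketch of nearest-neighbor propensity-score matching, the kind of
# quasi-experimental analysis such tools might run against district data.
# All column names and the data itself are simulated and illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "prior_score": rng.normal(50, 10, n),   # e.g., prior-year test score
    "frl_pct": rng.uniform(0, 100, n),      # e.g., a school poverty proxy
})
# Simulated selection: program uptake loosely related to the covariates.
logit = 0.03 * (df["prior_score"] - 50) - 0.01 * (df["frl_pct"] - 50)
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))
df["post_score"] = df["prior_score"] + 2.0 * df["treated"] + rng.normal(0, 5, n)

# 1. Estimate propensity scores from observed covariates.
X = df[["prior_score", "frl_pct"]]
df["pscore"] = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest untreated neighbor on the score.
treated = df[df["treated"]]
control = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3. The mean outcome difference in the matched sample estimates the effect.
effect = treated["post_score"].mean() - matched["post_score"].mean()
print(f"Matched estimate of program effect: {effect:.2f}")
```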

Latest Federal Requirements

We have stayed abreast of developments and trends in federal requirements through our evaluations of i3 programs and our collaboration with the contractor conducting the national evaluation of i3 and providing technical assistance to its evaluators. We understand that technical requirements for research are evolving in response to the needs of decision-makers, including a clearer sense of a range of methods matched to the importance of the decision. Decision-makers need to recognize the conditions and incentives in a study as similar to those they face. We therefore pay particular attention to organizational characteristics at the research sites, such as community collaboration, and work to ensure our designs do not disrupt the very processes the program under study needs in order to succeed.

Industry Guidelines

Based on Empirical’s understanding of how decision-makers use research, together with our extensive experience with commercially funded research, Dr. Newman authored a set of research guidelines published by the Software and Information Industry Association. Intended for publishers and developers of educational technologies, the guidelines provide a standard of best practice for conducting and reporting evaluation studies of their products.

Going Beyond “Average Impact”

We specialize in rigorous program evaluations that meet the research standards of the US Department of Education, including both experimental and quasi-experimental designs. Our work integrates quantitative analyses with relevant information on implementation, changes in school and classroom processes, and qualitative data collected from teachers and classrooms. Our studies can therefore go well beyond overall student achievement outcomes to examine finer-grained program results at the school and classroom levels and for subgroups based on student and teacher characteristics. Using mediation analyses, we can evaluate the contribution that educational processes make to our findings. Our data mining and value-added analyses help our clients with descriptive evaluations and needs assessments.
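
As an illustration, a product-of-coefficients mediation analysis can decompose a program’s impact into the portion carried by a measured classroom process and the direct remainder. The sketch below uses simulated data and illustrative variable names, not results from any of our studies.

```python
# A minimal sketch of a product-of-coefficients mediation analysis on
# simulated data; variable names (treatment, mediator, outcome) are
# illustrative placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treatment = rng.integers(0, 2, n)                  # randomized assignment
mediator = 0.5 * treatment + rng.normal(size=n)    # e.g., a classroom-process measure
outcome = 0.4 * mediator + 0.2 * treatment + rng.normal(size=n)

# Path a: effect of treatment on the mediator.
a = sm.OLS(mediator, sm.add_constant(treatment)).fit().params[1]

# Path b: effect of the mediator on the outcome, controlling for treatment.
Xb = sm.add_constant(np.column_stack([treatment, mediator]))
fit_b = sm.OLS(outcome, Xb).fit()
b = fit_b.params[2]        # coefficient on the mediator
direct = fit_b.params[1]   # direct effect of treatment

print(f"Indirect (mediated) effect a*b: {a * b:.3f}")
print(f"Direct effect:                  {direct:.3f}")
```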

Experimental Design

Empirical has proven design capabilities for a wide range of experimental and evaluation studies. We have planned and executed 27 randomized experiments, as well as numerous non-randomized or quasi-experiments, encompassing hundreds of school districts. Our research designs include group randomized trials and longitudinal experiments tracking both teachers and students over periods as long as five years. In quasi-experimental designs, Empirical uses a variety of matching-based approaches, including propensity score analyses and difference-in-differences estimation, as well as simpler methods involving multilevel regression. We also use data-mining techniques, including value-added analyses, to examine data patterns and build hypotheses when comparison groups are not available. Beyond designs based on quantitative comparisons, we conduct formative research on programs in the early stages of development. In some cases, our reviews use meta-analytic methods.
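
To make the difference-in-differences approach concrete, the following sketch estimates a program effect from a two-period school panel via OLS with a group-by-period interaction. The data are simulated and the column names (score, treated, post, school_id) are hypothetical placeholders; a real analysis would also attend to parallel-trends assumptions.

```python
# A minimal sketch of a difference-in-differences estimate on a simulated
# two-period school panel. Column names are illustrative; standard errors
# are clustered by the hypothetical school_id.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(60):
    adopts = school < 30                 # first 30 schools adopt the program
    for year in (0, 1):                  # pre and post period
        effect = 3.0 if (adopts and year == 1) else 0.0
        rows.append({
            "school_id": school,
            "treated": int(adopts),
            "post": year,
            "score": 50 + 2 * year + 1.5 * int(adopts) + effect + rng.normal(0, 2),
        })
df = pd.DataFrame(rows)

# The coefficient on treated:post is the DiD estimate of the program effect.
model = smf.ols("score ~ treated + post + treated:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(f"DiD estimate: {model.params['treated:post']:.2f}")
```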

Descriptive, Observational, and Policy Oriented Studies

Through our extensive work with the federally funded Regional Education Labs, Empirical conducts policy-oriented research, primarily for state departments of education. These studies include analyses of state educator-effectiveness pilots, predictive analytics on minority students’ success in entering STEM careers, and correlational studies of program scale-up based on interviews and surveys of teachers and administrators. Our work on educator effectiveness draws on the Gates Foundation’s MET project dataset and includes a factor analysis demonstrating that “multiple measures” combine to measure several distinct facets of teaching. This work serves as the basis of our analysis of state systems.
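
The kind of factor analysis referred to here asks whether several observed measures of teaching load on a smaller number of latent facets. The sketch below runs an exploratory factor analysis on simulated data; the five measure names are illustrative stand-ins, not the actual MET variables.

```python
# A minimal sketch of an exploratory factor analysis of "multiple measures"
# of teaching, on simulated data with two latent facets. Measure names in
# the comments are illustrative, not actual MET variables.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_teachers = 300

# Two simulated latent facets of teaching and five observed measures.
facet1 = rng.normal(size=n_teachers)   # e.g., instructional quality
facet2 = rng.normal(size=n_teachers)   # e.g., classroom management
measures = np.column_stack([
    facet1 + 0.3 * rng.normal(size=n_teachers),  # observation rubric A
    facet1 + 0.3 * rng.normal(size=n_teachers),  # observation rubric B
    facet2 + 0.3 * rng.normal(size=n_teachers),  # student survey
    facet2 + 0.3 * rng.normal(size=n_teachers),  # principal rating
    0.5 * facet1 + 0.5 * facet2
    + 0.3 * rng.normal(size=n_teachers),         # value-added score
])

# Fit a two-factor model and inspect the loadings.
fa = FactorAnalysis(n_components=2, random_state=0).fit(measures)
print("Loadings (measures x factors):")
print(np.round(fa.components_.T, 2))
```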

Outcomes

We focus on the outcomes of school-based interventions across content areas (academic, health promotion) and for various kinds of students (mainstreamed or specifically designated).

Student Achievement

We specialize in rigorous program evaluations that meet NCLB requirements for scientifically based research. We assess impacts on performance on state tests as well as other standardized tests (e.g., NWEA). We focus on programs targeted at improving the performance of different kinds of students, from the mainstreamed to those with special needs in self-contained classes. We address program effects on a range of outcomes, from performance on tests in different subject areas to classroom processes and student attitudes toward learning, as gauged through surveys and questionnaires of teachers and students. Our data mining and value-added analyses help our clients with descriptive evaluations and needs assessments.

Student Health and Welfare

Schools are ideal settings for implementing programs to promote healthy habits. Our goal is to help clinicians, researchers, and program developers move promising programs that address student health and welfare from the “bench” to successful implementation at the school level. Our methods are suited to assessing impacts on a variety of health-related behaviors, including eating habits, alcohol use, and risk-taking.

Youth Development and Juvenile Justice

Schools are integral parts of their communities, and we focus on evaluating programs that promote youth development and serve juvenile justice goals. We assess the catalysts and barriers to implementation of interventions designed to promote safe schools by preventing delinquency and facilitating healthy, productive community involvement.

We are interested in working with program developers and administrators to evaluate and support the implementation and scale-up of school-based programs that prevent school drop-out, absenteeism, substance abuse, bullying, and teenage pregnancy, and that promote self-esteem, perseverance, interest in school, positive peer relationships, academic success, and successful transitions to higher education and employment.

A Holistic Approach to School-Based Research Involving Youth Outcomes

We take a holistic approach to research and evaluation because youth outcomes are connected: self-esteem and positive peer relationships affect school performance, and healthy habits, in and out of school, affect academic success. We therefore see our three main areas of concentration (academic achievement, student health and welfare, and youth development and justice) as fundamentally connected.

By considering this broader set of outcomes, we aim to bring relevant theory to bear on the outputs that matter for student success, especially in the context of schools. Consistent with this commitment, we design evaluations that support the articulation and assessment of program models addressing a wide set of contexts, activities, and outcomes in schools, as well as the relations among them.