Blog Posts and News Stories

Empirical Education Focuses on Local Characteristics at the 14th Annual CREATE Conference

Empirical Education staff presented at the National Evaluation Institute’s (NEI) 14th annual CREATE conference in Wilmington, North Carolina. Both presentations focused on the local characteristics of the evaluations. Dr. Denis Newman, president of Empirical Education, and Jenna Zacamy, research manager, presented a randomized experiment that evaluated the impact of a pre-algebra curriculum (Carnegie Learning’s Cognitive Tutor Bridge to Algebra) introduced as a pilot program in the Maui School District. The district adopted the program based in part on previous research showing substantial positive results in Oklahoma (Morgan & Ritter, 2002). Given Maui’s unique locale and ethnic makeup, a local evaluation was warranted; district educators were particularly concerned with their less experienced teachers and with ethnic groups considered at risk. Unlike prior research, we found no overall impact. On the algebraic operations subscale, however, low-scoring students benefited from being in the Cognitive Tutor classes, indicating that the new program could help reduce the achievement gaps of concern. We also found, for the overall math scale, that uncertified teachers were more successful with their Cognitive Tutor classes than with their conventional classes.

Dr. Newman also presented work co-authored with Marco Muñoz and Andrew Jaciw on a quasi-experimental comparison, conducted by Empirical Education and Jefferson County (KY) schools, of an activity-based middle-school science program (Premier Science) with more traditional textbook programs. All the data, including a rating of implementation quality, were supplied by the district. The primary pretest and outcome measures were tests of science and reading achievement. While there was no discernible difference overall, poor readers gained more from the non-textbook approach, helping to diminish an achievement gap of concern to the district.
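
Both findings above are moderator effects: the program’s impact depended on students’ incoming scores. As a rough illustration only, the sketch below shows one common way such an effect is tested, by adding a treatment-by-pretest interaction to a regression model; the data file and column names are hypothetical, not taken from either study.

```python
# A minimal sketch of a moderator analysis: regressing a posttest score on
# treatment assignment, a pretest score, and their interaction.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student: posttest, pretest, treatment (1 = program class)
data = pd.read_csv("students.csv")

# The treatment:pretest interaction tests whether the program's effect
# depends on a student's incoming score.
model = smf.ols("posttest ~ treatment * pretest", data=data).fit()
print(model.summary())

# A negative, statistically significant treatment:pretest coefficient would
# indicate that lower-scoring students benefit more from the program.
```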

2008-12-15

Reports Released on the Effect of Carnegie Learning’s Cognitive Tutor

The Maui School District has released results from a study of the effect of Carnegie Learning’s Cognitive Tutor (CT) on long-term course selections and grade performance. Building upon two previous randomized experiments on the impact of CT on student achievement in Algebra I and Pre-Algebra, the study followed the same groups of students in the year following their exposure to CT. The research did not find evidence of an impact of CT on either course selection or course grade performance for students in the following school year. The study also found no evidence that differences among ethnic groups, in either the difficulty of the courses taken or the grades received, depended on exposure to CT.

A concurrent study examined the successes and challenges of program implementation with the teachers involved in the previous CT studies. The study took into account teachers’ levels of use and length of exposure to CT; the descriptive data comprised surveys, classroom observations, and interviews. The major challenges to implementation included a lack of access to resources, limited support for technology, and other technological difficulties. After three years of implementation, teachers reported that these initial barriers had been resolved; however, teachers had yet to establish a fully collaborative classroom environment, as described in the Carnegie Learning implementation model.

The Maui School District is Empirical Education’s first MeasureResults subscriber. A similar research initiative is being conducted at the community college level with the Maui Educational Consortium. The report for this study will be announced later this year.

2008-12-10

Five Presentations Accepted for AERA 2009

Empirical Education will be heading to sunny San Diego next April! Once again, Empirical will have a strong showing at the 2009 American Educational Research Association conference, which will be held in downtown San Diego, April 13-17, 2009. Our presentations will span several divisions, including Learning & Instruction; Measurement & Research Methodology; and Research, Evaluation, & Assessment in Schools. Research topics will include:

As a follow-up to our successful 2008 AERA New York reception at Henri Bendel’s Chocolate Room, Empirical Education plans to host another “meet and greet” at this year’s conference. Details about the reception will be announced on our website soon.

2008-12-01

Report Released on The Efficacy of PCI’s Reading Program — Level One

Empirical Education and PCI Education have released the results of a one-year randomized controlled trial on the efficacy of PCI’s Reading Program — Level 1 for students with moderate to severe disabilities. Conducted in the Brevard and Miami-Dade County school districts, the study found that, after one year, students in the PCI program had substantial success in learning sight words in comparison to students in the control group — equivalent to a 21 percentile point difference. Though researchers found that students’ grade level had no effect on achievement with the program, they found a small moderating effect of the phonological pre-assessment: students starting with greater phonological skills benefited more from PCI than students starting with lower scores. This report will be presented at the 2009 AERA conference in San Diego, CA. A four-year follow-on study is being conducted with a larger group of students in Florida.
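
For readers unfamiliar with the percentile metric, a gain reported this way is typically derived from a standardized effect size: the average treated student’s percentile rank within the control distribution, minus the 50th percentile. The sketch below illustrates the arithmetic; the effect size value is hypothetical, chosen only to show how a figure of about 21 points can arise.

```python
# Sketch of the arithmetic behind a percentile-point difference: the average
# treated student's percentile rank in the control distribution, minus the
# 50th percentile. The effect size below is hypothetical.
from scipy.stats import norm

effect_size = 0.55  # hypothetical standardized mean difference

percentile = norm.cdf(effect_size) * 100   # ~71st percentile
print(f"Percentile rank of average treated student: {percentile:.0f}")
print(f"Percentile point difference: {percentile - 50:.0f}")  # ~21
```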

2008-11-01

Final Report on “Local Experiments” Project

Empirical Education has released the final report of a project that developed a unique perspective on how school systems can use scientific evidence. Representing more than three years of research and development effort, our report describes the startup of six randomized experiments and traces how local agencies decided to undertake the studies and how the resulting information was used. The project was funded by a grant from the Institute of Education Sciences under its program on Education Policy, Finance, and Systems. It started with a straightforward conjecture:

The combination of readily available student data and the greater pressure on school systems to improve productivity through the use of scientific evidence of program effectiveness could lead to a reduction in the cost of rigorous program evaluations and to a rapid increase in the number of such studies conducted internally by school districts.

The prevailing view of scientifically based research is that educators are consumers of research conducted by professionals. There is also a belief that rigorous research is extraordinarily expensive. The supposition behind our proposal was that the cost could be made low enough for experiments to be conducted routinely in support of district decisions, with local educators as the producers of evidence. The project contributed a number of methodological, analytic, and reporting approaches with the potential to lower costs and make rigorous program evaluation more accessible to district researchers.

An important result of the work was bringing to light the differences between conventional research design, aimed at broadly generalized conclusions, and design aimed at answering a local question, where sampling is restricted to the relevant “unit of decision making,” such as a school district with jurisdiction over decisions about instructional or professional development programs.

The final report concludes with an understanding of research use at the central office level, whether “data-driven” or “evidence-based” decision making, as a process of moving through stages. Looking for descriptive patterns in the data (i.e., data mining for questions of interest) precedes the statistical analysis of differences between, and associations among, variables of interest using appropriate methods such as HLM. These in turn precede the adoption of an experimental research design to isolate causal, moderator, and mediator effects. The report proposes that most districts are not yet prepared to produce and use experimental evidence, but could start with useful descriptive exploration of data leading to needs assessment, as a first step toward a more proactive use of evaluation to inform their decisions.
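
As a concrete illustration of the middle stage, the sketch below shows a minimal two-level hierarchical linear model of the kind the report refers to as HLM (students nested in schools), fit here with Python’s statsmodels. The data file and column names are hypothetical.

```python
# A minimal sketch of a two-level model ("HLM"): students nested in
# schools, with a random intercept for each school.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student: posttest, pretest, program (0/1), school_id
data = pd.read_csv("district_students.csv")

# Fixed effects for program participation and pretest; a random intercept
# per school accounts for students being clustered within schools.
model = smf.mixedlm("posttest ~ program + pretest",
                    data=data, groups=data["school_id"]).fit()
print(model.summary())
```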

For a copy of the report, please choose the paper “Toward School Districts Conducting Their Own Rigorous Program Evaluation” from our reports and papers webpage.

2008-10-01

Maui Community College Hires Empirical Education for an Evaluation of an NSF-Funded Project

In Hawaiian, Ho’okahua means “to lay a foundation.” Focusing on Hawaiian students over multiple years, the Ho’okahua Project aims to increase the number of Maui Community College (MCC) students entering, persisting, and succeeding in college-level science, mathematics, and other STEM (Science, Technology, Engineering, and Math) degree programs. Several strategies have already been implemented, including a bridge program with the high schools from which the MCC student community is largely drawn.

The Maui Educational Consortium provides leadership for this work and has been instrumental in a number of other initiatives that build the capacity needed to achieve these goals. For example, the implementation of Cognitive Tutor for Algebra I was the subject of a related Empirical Education randomized experiment. Another important capacity fostered by the Educational Consortium, working with the University of Hawai’i Office of the State Director for Career and Technical Education, is HI-PASS, an initiative that aggregates student data across high school and community college. In the initial phase of its evaluation, Empirical Education will use information on math courses developed through the HI-PASS project to follow the success of students from the earlier study.

2008-08-22

Blue Valley Schools and Empirical Education Explain Local Program Evaluations

Dr. Bo Yan, Program Evaluator for the Blue Valley Schools in Kansas, and Dr. Denis Newman, president of Empirical Education, co-presented at the Northwest Evaluation Association’s annual Members Seminar in Portland, OR. The topic was how school districts can use their own testing, such as that administered by NWEA member districts, to conduct their own local program evaluations. Dr. Yan, who expects to conduct seven such evaluations in his district this coming year, used an evaluation of READ 180 as an illustration of a comparison group design relying primarily on statistical controls. Dr. Newman presented the randomized controlled trials Empirical Education has conducted with the Maui school system to evaluate math software and curriculum from Carnegie Learning (Cognitive Tutor: Year 1 and Year 2).

Both presenters emphasized the importance of local evaluations for estimating the impact of programs given the specific populations and resources of the district. They also made clear the need for a comparison group from the local district, since the improvement a district can expect is anchored in its own students’ current level of achievement. While the presentation focused mainly on the use of NWEA’s Measures of Academic Progress (MAP) in the quantitative estimation of a program’s impact, the presenters also emphasized the importance of gathering information on implementation and of the conversations needed to integrate evaluation findings into the district’s decision-making. NWEA will be providing this presentation, along with the PowerPoint slides, as a podcast.
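
As a rough sketch of what a comparison group design with statistical controls looks like in practice, the example below adjusts the program-versus-comparison contrast for a pretest and demographic covariates. The data file and column names are hypothetical and are not taken from the Blue Valley evaluation.

```python
# A rough sketch of a comparison group design with statistical controls:
# the program-versus-comparison contrast is adjusted for a pretest and
# demographic covariates rather than established by random assignment.
# The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student: map_post, map_pre, in_program (0/1), frl (0/1), ell (0/1)
data = pd.read_csv("evaluation_students.csv")

# Covariate adjustment stands in for randomization here, so the estimate
# is only as credible as the controls included in the model.
model = smf.ols("map_post ~ in_program + map_pre + frl + ell", data=data).fit()
print(model.params["in_program"])  # adjusted estimate of the program effect
```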

2008-06-27

Development Grant Awarded to Empirical Education

The U.S. Department of Education awarded Empirical Education a research grant to develop web-based software tools that support school administrators in conducting their own program evaluations. The two-and-a-half-year project was awarded through the Small Business Innovation Research program, administered and funded by the Institute of Education Sciences in the U.S. Department of Education. The proposal received excellent reviews in this competitive program. One reviewer remarked: “This software system is in the spirit of NCLB and IES to make curriculum, professional development, and other policy decisions based on rigorous research. This would be an improvement over other systems that districts and schools use that mostly generate tables.”

While current data-driven decision making systems provide tabular information or comparisons in terms of bar graphs, the software to be developed—an enhancement of our current MeasureResults™ program—will help school personnel create appropriate research designs by following a decision process. It will then provide access to a web-based service that uses sophisticated statistical software to test whether there is a difference in the results for a new program compared with the school’s existing programs. The reviewer added that the web-based system instantiates a “very good idea to provide [a] user-friendly and cost-effective software system to districts and schools to insert data for evaluating their own programs.” Another reviewer agreed, noting that “the theory behind the tool is sound and would provide analyses appropriate to the questions being asked,” and remarked that “…this would be a highly valuable tool. It is likely that the tool would be widely disseminated and utilized.” The company will begin deploying early versions of the software in school systems this coming fall.

2008-05-22

IES Holds its First Workshop on How States and Districts Can Evaluate Their Interventions

Empirical Education staff members participated in a workshop sponsored by the U.S. Department of Education’s Institute of Education Sciences (IES), held in Washington, DC, on April 24. The goal of the event was to encourage “locally initiated impact evaluations.” The primary presenter, Mark Lipsey of Vanderbilt University, drew on his extensive experience with rigorous evaluations to illustrate and explain the how and why of local evaluations. David Holdzkom of Wake County (NC) provided the perspective of a district whose staff has been conducting rigorous research on its own programs. Empirical Education assisted IES in preparing for the workshop by helping to recruit school district participants and presenters. Of the approximately 150 participants, 40% represented state and local education agencies (the other 60% were from universities, colleges, private agencies, R&D groups, and research companies). An underlying rationale for the workshop was the release of an RFP on this topic. Empirical Education expects to work with a number of school districts on their responses to this solicitation; responses are due October 2, 2008.

2008-05-01

Two-Year Study on Effectiveness of Graphing Calculators Released

Results are in from a two-year randomized controlled trial on the effect of graphing calculators on Algebra and Geometry achievement. Two reports are now available for this project, which was sponsored by Texas Instruments. In year 1, we contrasted business as usual in the math classes of two California school districts with classes equipped with sets of graphing calculators and led by teachers who received training in their use. In year 2, we contrasted calculator-only classrooms with those also equipped with a calculator-based wireless networking system.

The project tracked achievement through state and other standardized test scores, and implementation through surveys and observations. For the most part, the experiment could not discern an impact of providing the equipment and training for the teachers. Data from surveys and observations make clear that the technology was not used extensively (and by some teachers, not at all), suggesting that training, usability, and alignment issues must be addressed when adopting this kind of program. There were modest effects, especially for Geometry, but these were often not found consistently across the two measurement scales. In one case, contradictory results for the two school districts suggest that researchers should use caution in combining data from different settings.

2008-03-01