
Poway District Using MeasureResults

With the new school year approaching, we are excited to announce a partnership with MeasureResults’ newest user, Poway Unified School District (PUSD). With the help of MeasureResults, PUSD educators will design and conduct their own program evaluations while outsourcing the analytics and reporting functions to MeasureResults’ automated analysis engine. Planned study designs include an “interrupted time series,” which compares current achievement levels to levels from the years prior to the introduction of the program under evaluation. Plans also include a comparison group study which, by matching classrooms that are using the program with similar classrooms that are not, can estimate the difference that the new program has made. Special analyses will determine whether the program benefits certain subgroups of students (e.g., English language learners) more than others. We anticipate that PUSD’s valuable product feedback and input will enable us to make ongoing improvements to MeasureResults’ functionality and usability.
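The interrupted time series logic described above can be sketched in a few lines: fit a trend to the pre-adoption years, project it forward, and treat the gap between observed and projected post-adoption achievement as the program's estimated effect. The data and variable names below are invented for illustration; a real MeasureResults analysis would of course involve far more careful modeling.

```python
# Hypothetical yearly district mean scores; the program is introduced before
# year index 4. All numbers are made up for illustration.
mean_scores = [650.0, 652.0, 651.0, 653.0, 660.0, 662.0]
years = list(range(len(mean_scores)))
cut = 4  # first post-adoption year

# Fit a simple least-squares linear trend to the pre-adoption years.
pre_x, pre_y = years[:cut], mean_scores[:cut]
n = len(pre_x)
mx = sum(pre_x) / n
my = sum(pre_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(pre_x, pre_y))
         / sum((x - mx) ** 2 for x in pre_x))
intercept = my - slope * mx

# Effect estimate: observed post-adoption means minus the projected pre-trend.
effects = [y - (intercept + slope * x)
           for x, y in zip(years[cut:], mean_scores[cut:])]
avg_effect = sum(effects) / len(effects)
print(round(avg_effect, 2))
```

The key design assumption is that, absent the program, the pre-adoption trend would have continued; the comparison group design described next relaxes that assumption.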

2009-08-17

Empirical Education Lends Expertise to the Software Industry

Empirical Education continued its active involvement as an associate member of the Software & Information Industry Association (SIIA), participating in its 9th annual Ed Tech Industry Summit and helping to draft a set of guidelines for provider-sponsored research on educational technology products and services.

At SIIA’s San Francisco conference held May 3-5, Dr. Denis Newman discussed Empirical Education’s experience in state and local experimental evaluations as a panelist in a session addressing technology-based assessment tools entitled “Harvesting Information from Assessment to Support the Learner and the Educator.” Other panelists, representing SRI and CTB/McGraw Hill, brought their expertise in testing to a discussion of the differences between formative and summative assessments and how data from such assessments are used to inform decision-making at various levels: teacher, school, district, and state. Dr. Newman’s remarks drew the discussion into technology trends such as longitudinal data systems, growth scales, and value-added analysis, and the need to apply statistical processes that are not yet commonly part of school data systems.

For the past year, SIIA has also supported a working group consisting of members with interest and expertise in research and evaluation. Dr. Newman was asked to participate as co-chair and has been working with the other group members to complete a set of guidelines that can provide guidance in designing scientific evaluations, given the particular characteristics of technology interventions. The company has worked with many SIIA members, assisting them to work with school systems to evaluate their products and services.

2009-06-10

New Directions for Research Discussed at Institute of Education Sciences Conference

The Fourth Annual Institute of Education Sciences (IES) Research Conference was convened June 7 - 9 with an air of anticipation about new directions, as John Q. Easton began his term as director. Formerly executive director of the Consortium on Chicago School Research, he brings a new perspective to IES. While Dr. Easton is quoted in Education Week as saying he will retain the rigor that IES has made a priority, the Consortium’s work points to the importance of building local capacity for research to support reform. In a paper published online, he and his colleagues provide a clear and detailed rationale for their approach that includes the need for combining high quality research with the ability to cut through technical details to communicate both good and bad news to local decision makers.

Three Empirical Education staff members furthered this agenda of building capacity for local school and district evaluations in poster presentations at the conference. Dr. Robert Smith, the company’s vice president of engineering, outlined the company’s progress on MeasureResults™, a web-based evaluation solution for schools and districts. (Funding for the development of MeasureResults is from an IES Small Business Innovation Research grant.)

Dr. Denis Newman, the company’s president, and Andrew P. Jaciw, director of experimental design and analysis, presented their findings on the process of developing low-cost, timely, and locally relevant experiments (funded by an IES research grant). Development of MeasureResults continues as the Empirical Education team applies knowledge gained from this project, entitled “Low Cost Experiments to Support Local School District Decisions,” which guides the team in deepening decision makers’ understanding of evaluation research and building local capacity to conduct it.

2009-06-09

Empirical Education to Design a Multi-Year Evaluation of New GreatSchools Initiative

GreatSchools, a nonprofit provider of web-based resources for parents, has contracted with Empirical Education to design a multi-year evaluation of its initiative to empower parents to participate in their children’s development and educational success.

The initiative is funded by the Gates Foundation, the Robertson Foundation, and the Walton Family Foundation. Bill Jackson, Founder and President of GreatSchools, wrote in his blog:

“More than a year ago, we began to consider: What more could we at GreatSchools do to improve education? How could we do more for our large audience of parents? And what could we do for low-income parents whose children face the steepest climb to college? Our answer: We should leverage the technology of our times to create a comprehensive parent-training program and support group that inspires and guides parents—especially low-income parents—to raise children who are ready for college.”

2009-05-13

Welcome On Board, John Easton!

The Obama administration has named John Q. Easton, executive director of the Consortium on Chicago School Research, as the new director of the Institute of Education Sciences. The choice will bring a new perspective to IES. The Consortium provides research support for local reforms and improvements in the Chicago Public Schools. While Dr. Easton is quoted in Education Week as saying he will retain the rigor that IES has made a priority, the Consortium’s work points to the importance of building local capacity for research to support reform. In a paper published online, Roderick, Easton, & Sebring (2009), he and his colleagues provide a clear and detailed rationale for their approach that includes the need for combining high quality research with the ability to cut through technical details to communicate both good and bad news to decision-makers. Empirical Education congratulates John Easton, and we look forward to working with him to replicate this model for research in school districts throughout the country.

Melissa Roderick, John Q. Easton, and Penny Bender Sebring (2009). Consortium on Chicago School Research: A New Model for the Role of Research in Supporting Urban School Reform.

2009-04-03

American Recovery and Reinvestment Act

In response to the recent announcements of education stimulus funding through the American Recovery and Reinvestment Act (ARRA), Empirical Education is now offering assistance to state and local education agencies in building local capacity for rigorous evaluations. Our MeasureResults system helps state and district planners answer questions about the effectiveness of schools and of locally implemented programs. The enhanced analytic capabilities built into the MeasureResults system enable school districts to conduct their own rigorous research, with findings relevant to their local setting, using their existing datasets. Read more about how Empirical Education can work with state and local education agencies to take full advantage of ARRA funding. Also see how we partner with publishers to provide research packages along with their product offerings.

2009-04-01

Empirical Education Partners with NWEA to Research Virtual Control Groups

Northwest Evaluation Association, the leading provider of computer adaptive testing for schools, is partnering with Empirical Education to analyze the properties of its virtual control group (VCG) technologies. Empirical has already conducted a large number of randomized experiments in which NWEA’s “Measures of Academic Progress” (MAP) served both as pretest and posttest. The characteristics of a randomly assigned control group provide a yardstick in evaluating the characteristics of the VCG. The proposed research builds on extensive theoretical work on approaches to forming comparison groups for obtaining unbiased impact estimates from quasi-experiments.

In parallel with this theoretical analysis, NWEA and Empirical Education are cooperating on a nationwide comparison group (quasi-) experiment to estimate the impact of a widely used basal reading program. Taking advantage of the fact that MAP is in use in thousands of schools, Empirical will identify a group of schools that currently use the reading program and test their students’ reading with MAP, and then select a well-matched comparison group from non-users who also test with MAP. School characteristics such as SES, percent English learner, urbanicity, ethnicity, and geographic region, as well as prior reading achievement, will be used in identifying the comparison group.
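The matching step described above can be sketched as nearest-neighbor selection on standardized school characteristics. This is a minimal illustration only: the school records, characteristic values, and Euclidean distance metric below are assumptions for the sketch, not NWEA's or Empirical Education's actual procedure.

```python
# Hypothetical program schools and candidate (non-user) comparison schools.
program_schools = [
    {"id": "P1", "ses": 0.40, "pct_el": 0.30, "prior": 205.0},
    {"id": "P2", "ses": 0.70, "pct_el": 0.10, "prior": 215.0},
]
candidate_schools = [
    {"id": "C1", "ses": 0.42, "pct_el": 0.28, "prior": 206.0},
    {"id": "C2", "ses": 0.90, "pct_el": 0.05, "prior": 220.0},
    {"id": "C3", "ses": 0.68, "pct_el": 0.12, "prior": 214.0},
]

features = ["ses", "pct_el", "prior"]
all_schools = program_schools + candidate_schools

# Standardize each characteristic so no single scale dominates the distance.
stats = {}
for f in features:
    vals = [s[f] for s in all_schools]
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    stats[f] = (mean, sd)

def z(school):
    return [(school[f] - stats[f][0]) / stats[f][1] for f in features]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(z(a), z(b))) ** 0.5

# Greedy 1:1 nearest-neighbor matching without replacement.
available = list(candidate_schools)
matches = {}
for p in program_schools:
    best = min(available, key=lambda c: dist(p, c))
    matches[p["id"]] = best["id"]
    available.remove(best)

print(matches)
```

Standardizing before computing distances matters because raw characteristics sit on very different scales (proportions vs. test-score points); without it, prior achievement would dominate the match.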

2009-03-09

Presentation at the Society for Research in Educational Effectiveness (SREE) Explores Methods for Studying Achievement Gaps

Frequently in Empirical Education’s experimental evaluations for school districts, the question of local concern is an achievement gap identified between two student groups. The analysis of these experiments also often finds significant differences between these subgroups in how effective the intervention was (that is, if it increased or decreased the gap) while not finding a significant overall difference. In his 2005 book, Howard Bloom suggested why there may be more statistical power to detect subgroup differences than to detect the average effect. The exploration presented at SREE, which was held in Washington March 1-3, examined the statistical characteristics of eight experiments conducted over the last three years to find out whether a critical assumption of Bloom’s approach held. His assumption is that the average performance gap does not vary across the units that are randomized. The work, led by Andrew P. Jaciw, Empirical Education’s Director of Experimental Design and Analysis, found that the assumption held. This finding is important because it suggests that local experiments focusing on achievement gaps may be less expensive than experiments addressing only the overall average effect of an intervention. (Click here for a copy of the poster and handout.)

Bloom, H. S. (2005). Randomizing groups to evaluate place-based programs. In H. S. Bloom (Ed.), Learning More From Social Experiments. New York, NY: Sage.

2009-03-01

Methods for Local Experimental Evaluation of STEM Initiatives Presented to State Legislators

The National Conference of State Legislatures (NCSL) presented a seminar for education committee chairs January 9-11 in Huntsville, Alabama, home of NASA’s Marshall Space Flight Center. The topic was “Linking Research and Policy to Improve Science, Technology, Engineering and Math Education.” Empirical Education’s president, Denis Newman, presented the company’s research on the Alabama Math Science and Technology Initiative, an 80-school randomized experiment being conducted as part of its contract with the Regional Education Laboratory for the Southeast. The presentation also drew on findings from experiments the company has conducted to evaluate STEM initiatives elsewhere in the country to illustrate the importance of local research goals and characteristics in evaluation design. The seminar was part of a series on research funded by the Institute of Education Sciences. (Click here for a copy of the presentation.)

2009-02-01

New MeasureResults® Webpage

Empirical Education clients and partners can now keep track of the latest MeasureResults developments on our new webpage. The webpage features video clips that explain how MeasureResults’ online interface can be used to set up and conduct school-level studies, as well as details about the design and development of program features. Updates and announcements will be posted on a regular basis. In addition to serving our existing MeasureResults subscribers, Empirical Education is currently developing a custom-designed MeasureResults tool for use in more than ten school districts for a high school math effectiveness study. For more information about MeasureResults, email Robert Smith.

2009-01-01