Latest Evidence

Observations and Opinions about Research in Schools

Evidence from 2007

What Happens When a Publisher Doesn’t Have Scientific Evidence?

December 2007

A letter from Citizens for Responsibility and Ethics in Washington (CREW) to the Inspector General of the U.S. Department of Education raises important issues. Although the letter is written in a very careful, thorough, and lawyerly manner, most readers will no doubt notice right away that the subject of the letter is the business practices of Ignite!, the company run by the president’s brother Neil.

“The idea behind having evidence that an instructional program works is a good one. The law has to address how the evidence can be produced while supporting local innovation and choice.”

CREW documents that Ignite! has sold quite a few units of Curriculum on Wheels (COW) to schools in Texas and elsewhere and that these were purchased with NCLB funds. They also document that there is no accessible scientific evidence that COWs are effective. Given the NCLB requirement that funds be used for programs that have scientifically-based evidence of effectiveness, there appears to be a problem. The question we want to raise is: whose problem is this?

The media report that Mr. Bush has responded to the issues, for example in an explanation that appeared in eSchool News (Nov. 17, 2007).

Mr. Bush appears to suggest that NCLB requires only that products incorporate scientific principles. That suggestion is doubtful outside Reading First, which had its own rules. With respect to actually showing scientifically valid evidence of effectiveness, he concedes that none exists for COWs but points out that his company’s competitors also lack that kind of evidence.

We came to two conclusions about CREW’s contentions. First, their letter suggests that Ignite! did something wrong in selling its product without scientific evidence. We want to suggest a different perspective: nothing in NCLB calls for vendors to base their products on the “science of learning,” let alone to produce WWC-qualified experimental evidence of effectiveness. Nowhere is it stated that vendors are not allowed to sell ineffective products. Education is not like the market for medical products, in which producers have to prove effectiveness to get FDA approval before marketing can begin. NCLB rules apply to the school systems that use federal funds to purchase programs like COW. The IG investigation therefore has to be directed at the state and local agencies that allow this to happen. We think the investigators will quickly discover that these agencies have been given little guidance on how to interpret the requirements. (With Reading First, of course, the Department took a hands-on approach, approving only particular products whose effectiveness was judged to be scientifically based, but that approach was exceptional.)

Our second conclusion is that the current law is unenforceable because there is insufficient scientific evidence about the effectiveness of the products and services for which agencies want to use their NCLB funds. The law needs to be modified. But the solution is not to water down the provisions (e.g., by allowing anecdotal evidence if that’s all that is available) or to remove them altogether, as some suggest. The idea behind having evidence that an instructional program works is a good one. The law has to address how the evidence can be produced while supporting local innovation and choice. State and local agencies will need funds to conduct proper evaluations. Most importantly, the law has to allow agencies to adopt “unproven” programs on the condition that they assist in producing the evidence to support their continued usage.

CREW’s letter misses the mark. But an investigation by the IG may help to ignite a reconsideration of how schools can get the evidence they need. —DN



Congress Grapples with the Meaning of “Scientific Research”

October 2007

Good news and bad news. As reported recently in Education Week (Viadero, 2007, October 17), pieces of legislation currently being put forward contain competing definitions of scientific research. The good news is that we may finally be getting rid of the obtuse and cumbersome term “Scientifically Based Research.” Instead, some of the legislation uses the ordinary English phrase “scientific research” (without the legalese capitalization). So far, the various proposals for NCLB reauthorization are sticking with the idea that school districts will find scientific evidence useful in selecting effective instructional programs and are mostly just tweaking the definition.

“There is a relatively simple fix that would help democratize the process for states and districts that want to try something because it looks promising but has not yet been 'proven' in a sufficient number of other districts.”

So why is the definition of scientific research important? This gets to the bad news. It is important because the definition—whatever it turns out to be—will determine which programs are, in effect, on an approved list for purchase with NCLB funds.

Let’s take a look at two candidate definitions, just focusing on the more controversial provisions.

Both say essentially the same thing, but the new wording takes the primacy off random assignment and puts it on eliminating plausible competing explanations. We see the change as a concession to researchers who find random assignment too difficult to pull off. These researchers are not, however, relieved of the requirement to eliminate competing explanations (for which randomized control remains the most effective method). Meanwhile, another bill, introduced recently by Senators Lugar and Bingaman, takes a radically different approach to the definition.

As soon as legislation tries to be this specific, counterexamples immediately leap to mind. For example, we are currently conducting a study of a reading program that fits the last two points but, because the program is designed as a 10-week intervention, can never become research-proven under this definition. Another oddity is that the size of the impact and the size of the sample are specified, but not the level of confidence required; it is unlikely we would have any confidence in a finding of a 0.2 effect size with only 10 classrooms in the study. Perhaps the most unacceptable part of this definition is the term “research-proven.” This is far too strong and absolute. It suggests that as soon as two small studies are completed, the program gets a perpetual green light for district purchases under NCLB.
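To see how little confidence a study that small can deliver, consider a quick simulation. The sketch below is our own illustration, not anything drawn from the proposed definitions: it assumes 10 classrooms split evenly between the program and a control condition, classroom-level outcomes, and a true effect of 0.2 standard deviations, then counts how often an ordinary t-test would detect that effect.

import numpy as np
from scipy import stats

# Illustrative assumptions (ours, not the bill's): 10 classrooms,
# 5 per arm, a true effect of 0.2 classroom-level standard
# deviations, and a two-sided test at alpha = .05.
rng = np.random.default_rng(0)
n_sims, n_per_arm, effect, alpha = 10_000, 5, 0.2, 0.05

detections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)     # classroom mean outcomes
    program = rng.normal(effect, 1.0, n_per_arm)  # shifted by the true effect
    _, p_value = stats.ttest_ind(program, control)
    detections += p_value < alpha

print(f"Estimated power: {detections / n_sims:.2f}")  # comes out near 0.06

With power barely above the 5% false-positive rate, a study of this size would almost never detect a true 0.2 effect, and a “significant” finding from it would be nearly as likely to be a false alarm as a real discovery.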

As odd as this definition may be, we can understand why it was introduced. The most prevalent interpretation of the requirement for “Scientifically Based Research” in NCLB has been that the program under consideration should have been written and developed based on findings derived from scientific research. It was not required that the program itself have any scientific evidence of effectiveness. The Lugar-Bingaman proposal calls for scientific tests of the program itself. In Reading First, programs that had actual evidence of effectiveness were famously left off the approved list, while programs that simply claimed to be designed based on prior scientific research were put on. This proposal will help to level the playing field. To avoid the traps that open up when specific designs are legislated, perhaps the law could call for the convening of a broadly representative panel to hash out the differences between competing sets of criteria rather than enshrine one abbreviated set in federal law.

But even with consensus on the review criteria for acceptable research (and for explaining the trade-offs to the consumers of the research reviews at the state and local level), we are still left with an approved list—a set of programs with sufficient scientific evidence of effectiveness to be purchased. Meanwhile new programs (books, software, professional development, interventions, etc.) are becoming available every day that have not yet been “proven.”

There is a relatively simple fix that would help democratize the process for states and districts that want to try something because it looks promising but has not yet been “proven” in a sufficient number of other districts. Wherever the law says that a program must have scientific research behind it, also allow the state or district to conduct the necessary scientific research as part of the federal funding. So for example, where the Miller–McKeon Draft calls for

“a description of how the activities to be carried out by the eligible partnership will be based on a review of scientifically valid research,”

simply change that to

“a description of how the activities to be carried out by the eligible partnership will be based on a review of, or evaluation using, scientifically valid research.”

Similarly, a call for

“including integrating reliable teaching methods based on scientifically valid research”

can instead be a call for

“including integrating reliable teaching methods based on, or evaluated by, scientifically valid research.”

This opens the way for districts to try things they believe will work for them while helping to increase the total amount of research available for evaluating the effectiveness of promising new programs. Most importantly, it turns the static approved list into a process for continuous research and improvement. —DN



Ed Week: “Federal Reading Review Overlooks Popular Texts”

September 2007

The August 29, 2007 issue of Education Week reports the release of the What Works Clearinghouse’s review of beginning reading programs. Out of nearly 900 studies reviewed, only 51 met the WWC standards, an average of about two studies per included reading program. (Another 120 reading programs were examined in 850 studies deemed methodologically unacceptable.) The article, written by Kathleen Kennedy Manzo, notes that the major textbook offerings, on which districts spend hundreds of millions of dollars, did not have acceptable research available. Bob Slavin, an accomplished researcher and founder of the Success for All program (which got a middling rating on the WWC scale), also noted that the programs reviewed were mostly supplementary and smaller intervention programs rather than the more comprehensive school-wide programs.

“WWC is a good starting point... but the WWC reviews are not a substitute for trying out the intervention in your own district.”

Why is there this apparent bias in what is covered in WWC reviews? Is it in the research base or in the approach that the WWC takes to reviews? It is a bit of both. First, it is easier to find an impact of a program when it is supplemental and is being compared to classrooms that do not have that supplement. This is especially true where the intervention is intense and targeted to a subset of the students. In contrast, consider trying to test a basal reading program. What does the control group have? Probably the prior version of the same basal or some other basal. Both programs may be good tools for helping teachers teach students to read, but the difference between the two is very hard to measure. In such an experiment, the “treatment” program would show “no discernible effect” (the WWC category for no measurable impact). Unlike a medical experiment, where the control group gets a placebo, we can’t find a control group that has no reading program at all.

Probably the major reason there is so little rigorous research on textbook programs is that districts usually have no choice: they have to buy one or another. Research on supplementary programs, in contrast, can inform a discretionary decision and so has more value to the decision-maker.

While it may be hard to answer whether one textbook program is more effective than another, a better question may be whether one works better for specific populations, such as inexperienced teachers or English learners. That is a useful question if you are deciding on a text for your particular district, but it is not a question that WWC reviews address.

Another characteristic of WWC reviews is that the metric of impact is the same whether the study is a small experiment on a highly defined intervention or a very large experiment on a comprehensive intervention. As researchers, we know that it is easier to show a large impact in a small, targeted experiment. It is difficult to test something like Success for All, which requires school-wide commitment. At Empirical Education, we suggest to educators that WWC is a good starting point to find out what research has been conducted on interventions of interest. But the WWC reviews are not a substitute for trying out the intervention in your own district. In a local experimental pilot, the control group is your current program. Your research question is whether the intervention is sufficiently more effective than your current program, for the teachers or students of interest, to make it worth the investment. (A sketch of that arithmetic follows below.) —DN
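Here is a sketch of the arithmetic of such a local pilot. The classroom scores are entirely hypothetical, and a full analysis would model students nested within classrooms; the point is only that the output is an estimated gain over your current program, with its uncertainty, rather than a verdict from an approved list.

import numpy as np
from scipy import stats

# Hypothetical classroom-level mean post-test scores from a small
# pilot; the numbers are invented for illustration only.
pilot   = np.array([72.1, 68.4, 75.0, 70.2, 69.8, 74.3])  # new intervention
current = np.array([69.5, 66.0, 71.2, 68.8, 67.4, 70.1])  # current program

gain = pilot.mean() - current.mean()
t_stat, p_value = stats.ttest_ind(pilot, current, equal_var=False)

# Report the size of the estimated gain with its uncertainty; the
# local decision is whether a gain of this size is worth the cost.
se = np.sqrt(pilot.var(ddof=1) / len(pilot) + current.var(ddof=1) / len(current))
margin = stats.t.ppf(0.975, df=len(pilot) - 1) * se  # conservative df
print(f"Estimated gain: {gain:.1f} points, 95% CI +/- {margin:.1f}, p = {p_value:.2f}")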



Should the New NCLB Still Talk about “Scientifically Based Research”?

July 2007

Now that NCLB is up for reauthorization, interest groups are jockeying to influence the new legislation. But we’ve been surprised by the absence of major policy papers about scientifically based research (SBR). Is this important topic just going to be ignored? We think it is time to open up discussion about what SBR should mean for schools and what should be written into the new NCLB.

By now, schools may have good reasons to prefer that the SBR provisions simply go away. The Reading First (RF) scandals, in which the U.S. Department of Education and its consultants apparently promoted specific products, were enabled by the way SBR was defined in the legislation setting up that program. In that context, the legislation refers to studies conducted by a research community. It appears that products were approved for purchase with RF funds if they were viewed by consultants (who conducted the research and were paid by publishers to promote the products) as incorporating the research. In RF, SBR is treated as the authoritative view of “what science says.” It treats science as a static set of facts that the scientists tell us are true. A consequence of NCLB treating science as a set of facts is that, for schools, science is reduced to an item on a purchasing checklist. If some authority has declared a particular product “scientifically based,” it gets a check mark and can be purchased. This way of defining science made the RF scandal possible. It is a weakness in NCLB that needs to be corrected.

“There are parts of NCLB where science is viewed as a process rather than a set of facts.”

On the other hand, a different aspect of SBR contained in NCLB should be retained. There are parts of NCLB where science is viewed as a process rather than a set of facts. This process calls for the kind of objectivity, observation, and controls familiar to many as the scientific method. Instead of accepting a product because an authority says it is based on scientific facts, educators should ask whether the product has been subjected to a scientific test. This would avoid the scandal in which products that were shown to be effective using scientific methods were kept off the RF approved list because they weren’t viewed as properly based on the scientific facts. If the new NCLB focuses on this kind of SBR, then product vendors would be required to show that their products are effective using scientific methods. Schools should not be prevented from using NCLB funds for a product if they use scientific methods to show it is effective for their teachers and students. Instead of being told by somebody at the Department of Education that they can’t use a product or program, educators in schools should be allowed to try it out locally in a scientific pilot to determine whether or not it works for them. —DN



National Study of Educational Software a Disappointment

June 2007

The recent report on the effectiveness of reading and mathematics software products provides strong evidence that, on average, teachers who are willing to pilot a software product and try it out in their classrooms for most of a year are not likely to see much benefit in terms of student reading or math achievement. What does this tell us about whether schools should continue purchasing instructional software systems such as those tested? Unfortunately, not as much as it could have. The study was conducted under the constraint of having to report to Congress, which appropriates funds for national programs, rather than to the school district decision-makers, who make local decisions based on a constellation of school performance, resource, and implementation issues. Consequently, we are left with no evidence either way as to the impact of software when purchased and supported by a district and implemented systematically.

“The study was conducted under the constraint of having to report to Congress, which appropriates funds for national programs, rather than to the school district decision-makers, who make local decisions based on a constellation of school performance, resource, and implementation issues.”

By many methodological standards, the study, which cost more than $10 million, is quite strong. The use of random assignment of teachers to take up the software or to continue with their regular methods, for example, ensures that bias from self-selection did not play a role, as it does in many other technology studies. In our opinion, the main weakness of the study is that it spread the participating teachers over a large number of districts and schools and tested each product in only one grade. This approach encompasses a broad sample of schools but often leaves an individual teacher as the lone implementer in the school and one of only a few in the district. This potentially reduces the support that would normally be provided by school leadership and district resources, as well as the mutual support of a team of teachers in the building.

We believe that a more appropriate and informative experiment would focus on the implementation in one or a small number of districts and in a limited number of schools. In this way, we can observe an implementation while measuring characteristics such as how professional development is organized and how teachers are helped (or not helped) to integrate the software with district goals and standards. While this approach allows us to observe only a limited number of settings, it provides a richer picture that can be evaluated as a small set of coherent implementations. The measures of impact can then be associated with a realistic context.

Advocates for school technology have pointed out limitations of the national study. Often the suggestion is that a different approach or focus would have demonstrated the value of educational technology. For example, a joint statement from CoSN, ISTE, and SETDA released April 5, 2007, quotes Dr. Chris Dede, Wirth Professor in Learning Technologies at Harvard University: “In the past five years, emerging interactive media have provided ways to bring new, more powerful pedagogies and content to classrooms. This study misestimates the value of information and communication technologies by focusing exclusively on older approaches that do not take advantage of current technologies and leading edge educational methods.” While Chris is correct that the research did not address cutting-edge technologies, it did test software that has been, and in most cases continues to be, successful in the marketplace. It is unlikely that technology advocates would call for taking the older approaches off the market. (Note that Empirical Education is a member of and an active participant in CoSN.)

Decision-makers need some basis for evaluating the software that is commercially available. We can’t expect federally funded research to provide sufficiently targeted or timely evidence. This is why we advocate for school districts getting into the routine of piloting products on a small scale before a district-wide implementation. If the pilots are done systematically, they can be turned into small-scale experiments that inform the local decision. Hundreds of such experiments can be conducted quite cost-effectively as vendor-district collaborations, and they will have the advantage of testing exactly the product, professional development, and support for implementation under exactly the conditions that the decision-maker cares about. —DN

