A version of this article originally appeared in the Dec. 7, 2001 issue of the MIRS newsletter in answer to the questions: "How effective are current standardized education tests, principally the Michigan Educational Assessment Program (MEAP), in gauging the efficacy of our public education system? And are there currently too many incentives placed on MEAP performance?"
More than 15 years ago, a presidential commission released its seminal report, "A Nation At Risk," documenting the decline of student achievement in American schools. Since then, the testing and standards movement has captured the attention of policymakers and education reformers across the country. Today, almost every state has specific academic standards written into its laws and regulations governing public education. Forty-four states have standardized curricula in at least four basic subjects: English, math, social studies and science. Twenty-one states, including Michigan, administer their own exams to test students in these subjects.
The rationale for standardized testing has always been a matter of common sense: In order to measure how each student is doing academically, there has to be a standard of measure. Being able to measure achievement from one individual or group to another enables us to compare the efficacy of different educational methods and decide which is best for which student or group. Public opinion surveys over the last thirty years have demonstrated broad and durable support for achievement testing. In large numbers, parents believe that testing promotes accountability and improvement in the education system, and that tests should be relied upon to help make important decisions about students.
There are two main types of standardized tests: norm-referenced and criterion-referenced. Norm-referenced tests compare individual test results against the average results of students representative of the total student population, or against the "norm." The ACT, SAT and Stanford Nine exams are examples of privately created and widely used norm-referenced tests. Criterion-referenced tests measure student progress against an absolute standard, such as a state curriculum. The Michigan Educational Assessment Program, or MEAP, is a criterion-referenced test that measures student progress against the state academic standards adopted by the State Board of Education.
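To make the distinction concrete, here is a minimal sketch, in Python, of the two scoring approaches. The scores, norming sample and cut score are entirely hypothetical, invented for illustration; this is not how the MEAP or any commercial exam is actually scored.

```python
# Illustrative sketch only: hypothetical scores and cut score,
# not actual MEAP or ACT scoring rules.

def norm_referenced_percentile(score, norm_group_scores):
    """Norm-referenced result: where a score falls relative to a norming sample."""
    below = sum(1 for s in norm_group_scores if s < score)
    return 100.0 * below / len(norm_group_scores)

def criterion_referenced_result(score, cut_score):
    """Criterion-referenced result: pass/fail against an absolute standard."""
    return "meets standard" if score >= cut_score else "does not meet standard"

norm_group = [48, 55, 61, 64, 70, 72, 75, 80, 86, 93]  # hypothetical norming sample
student_score = 75

print(norm_referenced_percentile(student_score, norm_group))     # 60.0 -> 60th percentile
print(criterion_referenced_result(student_score, cut_score=70))  # meets standard
```

The same raw score thus yields two different kinds of statements: a rank relative to other test-takers, or a judgment against a fixed standard.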
The MEAP program began in 1969 as a legislative attempt to measure the achievement levels of students attending the state's public schools. Initially, it was a norm-referenced test, but after only four years it was converted to a criterion-referenced format following public outcry over the socioeconomic scaling system it used. Although the MEAP changed format, the complaints have persisted. The public was also told that the change in format would allow the test to be used as a diagnostic tool, enabling teachers to fine-tune curriculum and teaching methods. Recent changes to the testing program include adding a social studies exam, creating a scholarship as an incentive for students to take the test seriously, and trimming the number of hours for specific exams. School performance on the MEAP had been the major component of the state's accreditation system until a few months ago, when Superintendent Watkins abandoned the system previously established by the State Board of Education.
Many school leaders clearly value the MEAP. A quick Internet search for "MEAP" and "success" pulled up more than 250 Michigan schools that list student success on the MEAP as a primary goal. For example, Westdale Elementary in Saginaw Township lists "improved MEAP writing scores" as a criterion for success. Central Middle School in the Plymouth-Canton Community School District describes a goal of students scoring "10 percent above the state average" on the MEAP math test.
As long as public education is funded by the public, the state can justify a role in setting goals and assuring taxpayers that they are getting their money's worth. State-mandated tests of some sort are hard to dismiss, since the state does have the right to know how the schools it funds are performing.
The fiscal year 2002 state budget for the MEAP tests is $14.5 million, including $1.5 million for test development. New statutory guidelines require that significant improvements be made to the tests and that 50 percent of the questions be publicly released each year. New tests will be introduced for science and mathematics in 2002 and for English language arts in 2003.
Do the benefits of the MEAP program outweigh the costs? And to the extent that it prods teachers to "teach to the test," is that positive or negative?
If the public has bought into the state's academic standards, and the test measures progress against those standards, "teaching to the test" is not necessarily a bad thing. To prepare students to score well on the test, teachers need only teach the state's standards. But if the standards are flawed, politicized, or rigidly micromanaged by distant bureaucracies, then teaching to the test can be decidedly counterproductive.
The benefits of testing could outweigh the cost to taxpayers with a few changes.
If the MEAP is retained, we should change the scoring of the exams so that teachers and parents get results in a timely manner. Currently, tests are administered in the spring and results arrive in the fall. Because the results arrive too late, teachers cannot use them to modify curriculum and teaching or to provide remediation to students. Elementary students are in middle school by the time their test scores arrive.
Better yet, we should administer a nationally norm-referenced exam every year to every student. This need not be a state-created test; it could be one of the privately created and marketed exams, thus avoiding at least some of the politics of a government-created test. This type of testing program would allow for meaningful comparisons between schools, teachers, states and districts and, more importantly, it would chart individual student progress from year to year. This type of value-added assessment enables evaluators to control for everything other than the teacher's effectiveness, and it takes into account the fact that students start at different levels of achievement. A value-added analysis allows parents, taxpayers and legislators to hold schools and teachers accountable for their ability to move students academically. For example, suburban students starting at the 85th percentile would have to demonstrate academic gains, just as inner-city students who might start at the 25th percentile would.
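The gain-score logic behind such a value-added comparison can be sketched in a few lines. The Python sketch below uses invented school names and percentile figures; it illustrates the principle only and is not a real value-added model.

```python
# Illustrative sketch only: invented percentile numbers and school names,
# not a real value-added model. The point: judge schools on year-to-year
# growth, not on where their students start.

students = [
    # (school, last year's percentile, this year's percentile)
    ("Suburban Elementary", 85, 84),
    ("Suburban Elementary", 82, 86),
    ("Inner-City Elementary", 25, 33),
    ("Inner-City Elementary", 28, 34),
]

gains = {}
for school, before, after in students:
    gains.setdefault(school, []).append(after - before)

for school, g in gains.items():
    print(f"{school}: average gain of {sum(g) / len(g):+.1f} percentile points")

# With these invented numbers:
#   Suburban Elementary: average gain of +1.5 percentile points
#   Inner-City Elementary: average gain of +7.0 percentile points
```

Under this kind of analysis, a school whose students start at the 25th percentile can demonstrate more growth than one whose students start at the 85th, and both are held to the same expectation of progress.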
Incidentally, a recent study found that the cost of developing, administering and scoring a state-based test averages four times that of using a nationally norm-referenced exam. The costs of administering a state-based test also are passed on to local schools. Standard & Poor's found that Michigan schools spend an average of $413 per student on central administration, a figure that ballooned by 18 percent between 1997 and 1999 (an increase that outpaced all other spending categories). Brennan Brown, director of operations and advancement for the Davison-based and reform-minded Michigan School Board Leaders Association, adds, "Using a norm-referenced exam like the Stanford Nine will help parents make more informed decisions about where they send their children to school and highlight the need for systemic change of the government-run education monopoly. Forward-thinking school board members want value-added assessment in which each student's progress can be measured year-to-year and from school-to-school."
As a recent Standard & Poor's report points out, the MEAP exam does not currently allow these kinds of analyses for a couple of reasons. First, the fourth grade MEAP test measures progress toward the fourth grade academic standards. The fifth grade MEAP test measures progress toward the fifth grade academic standards. These tests do not allow for comparison because they measure progress against two different standards. Second, the MEAP test is not administered at every grade level. Students are not tested before fourth grade and there are gaps between the higher grade levels. Students can fall through the cracks between tests.
Currently, there is no connection between teacher pay and student performance. A recent MSU study found that the highest-paid teachers in Michigan teach in Southfield, with average annual pay of $69,900. Southfield's test scores are similar to those of Detroit, which pays its teachers an average of $30,000 less per year. To really give schools an incentive to improve student test scores, we must find ways to get around archaic union work rules and practices that prevent such things as merit pay and that handicap management in assigning teachers to where they are most needed. We need to relax counterproductive tenure and certification rules as well.
We also need substantial reform in teacher preparation at Michigan's 15 public universities. As a 1996 Mackinac Center for Public Policy study reported, the absence of a core curriculum and the "dumbing down" of standards and requirements in such key courses as freshman composition are ill-serving our prospective teachers. They spend too much time in universities in pedagogical courses loaded with politically correct content of dubious value and too little time immersed in the subject matter they ultimately will be expected to teach.
Every Michigan school district should take a look at the innovative program underway in the Rockford Public Schools near Grand Rapids. That district has developed its own testing materials, and each graduate's competencies are explained and certified on the back of his or her diploma. Moreover, Rockford puts its money where its mouth and testing standards are: If an employer finds that one of the district's graduates does not meet the standards specified on the diploma, Rockford will provide remedial courses to that graduate at no charge to the graduate or the employer; the district picks up the cost. Now that's accountability!
Remedial education, by the way, is surprisingly costly to Michigan businesses and universities, amounting to at least $600 million per year, as documented in another Mackinac Center report.
Ultimately, the fuss over testing is a sideshow. The real issue involves monopoly on the one hand vs. choice and competition on the other. Monopoly produces mediocrity, while choice and competition produce incentives for quality improvement. Until the educational system is driven not by politics and compulsion but instead by the choices of parents in a thriving, competitive marketplace, we will forever be pulling out our collective hair over what test is appropriate and we'll be just as disappointed in the results.
Testing will become less controversial when the system starts producing better results in terms of educational attainment. That is why genuine educational reformers these days are less focused on testing and more on systemic reforms that would infuse the system with incentives for excellence.