An assistant professor at the University of Oklahoma, Daniel Hamlin, joins Paul E. Peterson to discuss Hamlin's research on gun ownership in America and its relationship to school shootings over the past 40 years.
Additionally, Hamlin and Peterson are currently moderating the virtual conference, A Safe Place to Learn, hosted by the Harvard Program on Education Policy and Governance.
By: Education Next Title: The Education Exchange: Gun Ownership Rates Decline, as School Shootings Spike Sourced From: www.educationnext.org/the-education-exchange-gun-ownership-rates-decline-as-school-shootings-spike/ Published Date: Tue, 31 May 2022 08:59:30 +0000
Critics of education choice claim that introducing and expanding choice programs will lead to a massive exodus of students that will dismantle public-school systems by “defunding” them. For instance, one critic claims that vouchers “could dramatically destabilize public-school systems and communities.” Legislators in states such as Indiana, Ohio, and West Virginia claimed that school-choice bills introduced in their states would destroy public schools.
Such overwrought claims are hard to square with our work and many other analyses of education-choice programs. A recent study, for example, found that students participating in choice programs, including programs that have operated for multiple decades, represent just 2 percent of all publicly funded students in the states that run these programs.
As part of the publication The ABCs of School Choice, we report participation rates, or “take-up rates,” by program for each school year.
This is how we calculate that figure:
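The formula itself did not survive in this text. Based on the surrounding description, here is a minimal sketch, assuming the standard definition of take-up (program participants divided by the estimated eligible population for that school year); the function name and figures are illustrative, not the authors' actual data:

```python
def take_up_rate(participants: int, eligible: int) -> float:
    """Take-up rate for one program in one school year: the share of
    eligible students who actually participate, as a percentage."""
    return 100 * participants / eligible

# Example: 1,500 participants out of an estimated 120,000 eligible
# students yields a take-up rate of 1.25 percent.
print(f"{take_up_rate(1_500, 120_000):.2f}%")  # -> 1.25%
```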
Trends matter too, though. Existing research doesn't tell us how programs might evolve or the extent to which participation increases or decreases over time. The take-up rate in a program's third year of operation is likely to differ from the rate in that same program's twenty-third year. What we want to understand is how take-up rates change over time.
We decided to look at programs that were introduced in 2010 or later and that had been in operation for at least five years. Our sample includes 27 private-education-choice programs in 19 states: four education savings account programs, 13 voucher programs, and 10 tax-credit scholarship programs. Thirteen of these programs exclusively serve students with special needs. All programs in the sample are statewide except one: Wisconsin's Racine Parental Choice, which is open to students who reside in the Racine Unified School District.
Our estimates reflect eligibility requirements in place for each program during a given year. We generate these estimates at both the program and state levels. One challenge with generating state-level estimates is that, in states with multiple programs, eligibility may overlap, which could lead to double-counting. We therefore avoid double-counting by subtracting out regions of overlap. There are also some program-specific pathways that we do not account for given data limitations, such as students from military families. Additionally, for states with special-needs programs that have income limits, we assume that the household income distribution for special-needs students is the same as the income distribution for all households with children at the state level.
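To make the double-counting adjustment concrete, here is a minimal sketch, assuming we know each program's eligible population and an estimate of how many students qualify for more than one program (all names and figures are illustrative, not the authors' actual data):

```python
# State-level eligibility when programs' eligibility rules overlap:
# sum the per-program counts, then subtract students counted twice.
def state_eligible(program_counts: list[int], overlap: int) -> int:
    """Unique eligible students statewide across multiple programs."""
    return sum(program_counts) - overlap

# Example: two programs with 80,000 and 50,000 eligible students,
# of whom an estimated 20,000 qualify for both.
print(state_eligible([80_000, 50_000], overlap=20_000))  # -> 110000
```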
Even Over the Long Term, Take-Up Rates Remain Low
We found that, even after a decade of a program's existence, take-up rates remained low (see Table 1). The exception was Wisconsin's Racine Parental Choice Program, which has the highest take-up rate in the sample for each year in operation: 2.95 percent in the program's first year and 37.15 percent in its tenth year. Although the Racine program may look like an outlier, it is genuinely distinct from the others in this analysis: it operates within a large urban school district, whereas the other programs operate statewide. Racine may also have a high take-up rate because Wisconsin has had a choice program since 1990, the Milwaukee Parental Choice Program, so many Racine families were likely already aware of school choice when the Racine program started.
Among statewide choice programs, the Maryland BOOST program experienced the highest take-up in its first year, with 1.25 percent of eligible students in Maryland participating in the program. Among programs in their tenth year, the Indiana Choice Scholarship Program had the highest take-up rate, 6.95 percent. In the initial year, all but two programs had take-up rates well below 1 percent. By the fifth year, take-up rates for 21 of the 27 programs were below 2 percent and remained below that level through their ninth year. It appears that the exodus of students from states’ public school systems did not materialize.
Even After 10 Years, Programs with High Take-Up Rates Are Exceptions (Table 1)
For most programs, take-up rates remain below 2 percent for the better part of a decade.
| Program Name | Launch Year | Program Type | State | Years in Operation | Special Needs Only | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 | Year 6 | Year 7 | Year 8 | Year 9 | Year 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Alabama Education Scholarship Program* | 2013 | Tax Credit | AL | 9 | N | 0.01% | 1.81% | 0.26% | 1.28% | 1.39% | 1.28% | 1.47% | 1.55% | 1.11% | n/a |
| Arkansas Succeed Scholarship Program for Students with Disabilities | 2016 | Voucher | AR | 5 | Y | 0.03% | 0.23% | 0.33% | 0.52% | 0.61% | n/a | n/a | n/a | n/a | n/a |
| Arizona Empowerment Scholarship Accounts | 2011 | ESA | AZ | 10 | Y | 0.12% | 0.24% | 0.59% | 1.02% | 1.89% | 2.62% | 3.62% | 4.54% | 7.33% | 6.58% |
| Arizona “Switcher” | 2012 | Tax Credit | AZ | 8 | N | 0.43% | 1.29% | 1.55% | 1.99% | 2.13% | 2.37% | 2.54% | 2.38% | n/a | n/a |
| Florida Gardiner ESA | 2014 | ESA | FL | 7 | Y | 0.43% | 1.30% | 2.11% | 2.64% | 3.01% | 3.41% | 4.58% | n/a | n/a | n/a |
| Indiana School Scholarship Tax Credit | 2010 | Tax Credit | IN | 12 | N | 0.08% | 0.11% | 0.55% | 0.86% | 2.03% | 1.65% | 1.71% | 1.76% | 1.89% | 2.02% |
| Indiana Choice Scholarship | 2011 | Voucher | IN | 10 | N | 0.74% | 1.69% | 3.64% | 5.27% | 5.93% | 6.45% | 6.87% | 7.23% | 7.25% | 6.95% |
| Kansas Low Income | 2015 | Tax Credit | KS | 7 | N | 0.01% | 0.17% | 0.26% | 0.17% | 0.15% | 0.25% | 0.83% | n/a | n/a | n/a |
| Louisiana School Choice Program for Certain Students with Exceptionalities | 2011 | Voucher | LA | 9 | Y | 0.22% | 0.24% | 0.30% | 0.39% | 0.35% | 0.40% | 0.47% | 0.50% | 0.51% | n/a |
| Maryland BOOST | 2016 | Voucher | MD | 5 | N | 1.25% | 1.40% | 1.71% | 1.64% | 1.35% | n/a | n/a | n/a | n/a | n/a |
| Mississippi Dyslexia Therapy Scholarship for Students with Dyslexia Program | 2012 | Voucher | MS | 9 | Y | 0.23% | 0.52% | 0.83% | 1.12% | 1.00% | 1.22% | 1.42% | 1.45% | 1.16% | n/a |
| Mississippi Equal Opportunity for Students with Special Needs Program | 2015 | ESA | MS | 6 | Y | 0.33% | 0.67% | 0.71% | 0.69% | 1.08% | 0.96% | n/a | n/a | n/a | n/a |
| North Carolina Opportunity Scholarship | 2014 | Voucher | NC | 7 | N | 0.23% | 0.54% | 0.85% | 1.15% | 1.54% | 1.92% | 2.56% | n/a | n/a | n/a |
| North Carolina Special Education Scholarship Grants for Children with Disabilities | 2014 | Voucher | NC | 8 | Y | 0.14% | 0.31% | 0.40% | 0.57% | 0.63% | 0.88% | 0.81% | 0.78% | n/a | n/a |
| New Hampshire Education Tax Credit Program | 2013 | Tax Credit | NH | 9 | N | 0.16% | 0.06% | 0.20% | 0.29% | 0.57% | 0.70% | 0.79% | 1.25% | 1.38% | n/a |
| Nevada Educational Choice | 2015 | Tax Credit | NV | 6 | N | 0.21% | 0.44% | 0.84% | 0.94% | 0.60% | 0.43% | n/a | n/a | n/a | n/a |
| Ohio Jon Peterson Special Needs Scholarship Program | 2012 | Voucher | OH | 9 | Y | 0.52% | 1.02% | 1.34% | 1.66% | 1.88% | 2.11% | 2.37% | 2.49% | 2.79% | n/a |
| Ohio Income Scholarship | 2013 | Voucher | OH | 8 | N | 0.09% | 0.29% | 0.48% | 0.66% | 1.43% | 1.82% | 2.05% | 2.87% | n/a | n/a |
| Oklahoma Lindsey Nicole Henry Scholarships for Students with Disabilities | 2010 | Voucher | OK | 11 | Y | 0.05% | 0.15% | 0.21% | 0.27% | 0.34% | 0.42% | 0.62% | 0.64% | 0.72% | 0.81% |
| Oklahoma Equal Opportunity Education Scholarships | 2013 | Tax Credit | OK | 9 | N | 0.01% | 0.08% | 0.14% | 0.16% | 0.28% | 0.45% | 0.46% | 0.19% | 0.15% | n/a |
| South Carolina Educational Credit for Exceptional Needs Children Fund | 2014 | Tax Credit | SC | 7 | Y | 0.41% | 1.16% | 2.06% | 1.88% | 2.22% | 2.15% | 1.25% | n/a | n/a | n/a |
| South Dakota Partners in Education Tax Credit Program | 2016 | Tax Credit | SD | 5 | N | 0.52% | 0.88% | 0.90% | 1.37% | 1.52% | n/a | n/a | n/a | n/a | n/a |
| Tennessee Individualized Education Account Program | 2017 | ESA | TN | 5 | Y | 0.04% | 0.07% | 0.11% | 0.13% | 0.23% | n/a | n/a | n/a | n/a | n/a |
| Virginia Education Improvement Scholarships Tax Credits Program | 2013 | Tax Credit | VA | 9 | Y | 0.01% | 0.10% | 0.25% | 0.45% | 0.55% | 0.74% | 0.79% | 0.75% | 0.79% | n/a |
| Wisconsin Racine Parental Choice | 2011 | Voucher | WI | 10 | N | 2.95% | 6.46% | 15.57% | 19.42% | 23.43% | 27.67% | 32.85% | 32.74% | 35.53% | 37.15% |
| Wisconsin Parental Choice Program (Statewide) | 2013 | Voucher | WI | 8 | N | 0.32% | 0.66% | 1.74% | 2.20% | 1.16% | 1.91% | 2.57% | 3.22% | n/a | n/a |
| Wisconsin Special Needs Scholarship Program | 2016 | Voucher | WI | 5 | Y | 0.20% | 0.21% | 0.59% | 0.87% | 1.18% | n/a | n/a | n/a | n/a | n/a |
*Starting January 1, 2015, the Alabama Department of Revenue changed its reporting requirements from a calendar-year basis to a fiscal-year basis. Thus, year 3 data in the analysis for Alabama's program are based on six months. Data for subsequent years are based on fiscal years ending June 30.
Note: A program’s first year in operation is the first year that we observe students participating in the program. We also examine take-up rates by program type (ESA, voucher, and tax-credit scholarship programs), programs that exclusively serve special-needs populations, and programs that serve non-special-needs populations. The sample includes all programs that launched in 2010 or later and have been in operation for at least five years through 2021.
As seven states in the analysis have more than one program, we also estimated overall take-up rates for each state (see Table 2). These, too, remained low.
On a State-by-State Basis, Take-Up Rates Are Low (Table 2)
For all but one state, the take-up rate in a program’s initial year of operation was below 1 percent.
| State | Abbreviation | Number of Programs | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|---|---|
| Alabama* | AL | 1 | 0.01% | 1.81% | 0.26% | 1.28% | 1.39% |
| Arizona | AZ | 2 | 0.44% | 1.32% | 1.63% | 2.12% | 2.38% |
| Arkansas | AR | 1 | 0.03% | 0.23% | 0.33% | 0.52% | 0.61% |
| Florida | FL | 1 | 0.43% | 1.30% | 2.11% | 2.64% | 3.01% |
| Indiana | IN | 2 | 0.85% | 1.91% | 4.32% | 6.26% | 8.04% |
| Kansas | KS | 1 | 0.01% | 0.17% | 0.26% | 0.17% | 0.15% |
| Louisiana | LA | 1 | 0.22% | 0.24% | 0.30% | 0.39% | 0.35% |
| Maryland | MD | 1 | 1.25% | 1.40% | 1.71% | 1.64% | 1.35% |
| Mississippi | MS | 2 | 0.31% | 0.64% | 0.74% | 0.78% | 1.06% |
| Nevada | NV | 1 | 0.21% | 0.44% | 0.84% | 0.94% | 0.60% |
| New Hampshire | NH | 1 | 0.16% | 0.06% | 0.20% | 0.29% | 0.57% |
| North Carolina | NC | 2 | 0.23% | 0.54% | 0.84% | 1.01% | 1.32% |
| Ohio | OH | 2 | 0.19% | 0.47% | 0.63% | 0.84% | 1.56% |
| Oklahoma | OK | 2 | 0.02% | 0.12% | 0.15% | 0.18% | 0.29% |
| South Carolina | SC | 1 | 0.41% | 1.16% | 2.06% | 1.88% | 2.22% |
| South Dakota | SD | 1 | 0.52% | 0.88% | 0.90% | 1.37% | 1.52% |
| Tennessee | TN | 1 | 0.04% | 0.07% | 0.11% | 0.13% | 0.23% |
| Virginia | VA | 1 | 0.01% | 0.10% | 0.25% | 0.45% | 0.55% |
| Wisconsin | WI | 3 | 0.38% | 0.83% | 1.62% | 2.16% | 1.55% |
*Starting January 1, 2015, the Alabama Department of Revenue changed its reporting requirements from a calendar-year basis to a fiscal-year basis. Thus, year 3 data in the analysis for Alabama's program are based on six months. Data for subsequent years are based on fiscal years ending June 30.
Note: After Year 5, the sample is smaller than in prior years because fewer programs have been operating that long. For example, of programs that started in 2010 or later, Wisconsin had three programs operating in their fifth year, two in their sixth year, and one in their ninth year.
For all but one state, take-up rates for programs in their initial year in operation, including those in states with multiple programs, were below 1 percent, with the average being 0.30 percent. Maryland had the highest take-up rate at 1.25 percent, followed by Indiana at 0.85 percent. Because Maryland is a relatively small state, it may have been easier to disseminate information about the program to eligible families compared to other states.
Some programs are more popular than others, however (see Table 3). The overall take-up rate for all programs in the initial year was 0.26 percent. By the third year, the overall rate reached 1 percent, and by the fifth year, the overall take-up rate increased to just 1.74 percent. Take-up rates for ESA programs are slightly higher than rates for voucher and tax-credit scholarship programs over all years of operation.
Education Savings Accounts Are Somewhat More Popular Than Other Programs (Table 3)
But after five years, the average take-up rate for all programs is less than 2 percent.
| Program Type | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|
| All programs | 0.26% | 0.68% | 1.02% | 1.40% | 1.74% |
| ESA | 0.29% | 0.82% | 1.34% | 1.72% | 2.16% |
| Tax Credit | 0.18% | 0.66% | 0.75% | 1.06% | 1.32% |
| Voucher | 0.33% | 0.68% | 1.23% | 1.69% | 2.11% |
Note: The sample includes four education savings account programs, 13 voucher programs, and 10 tax-credit scholarship programs.
Tax-credit scholarship programs tend to have lower take-up rates, likely due to funding caps that are more prevalent with these kinds of programs. By the fifth year in operation, take-up rates for education savings account and voucher programs were just over 2 percent, while take up for tax-credit scholarship programs was 1.32 percent.
Take-up rates for non-special-needs programs are higher across the board compared to programs that exclusively serve students with special needs (see Tables 4 and 5). Participation in both types of programs is comparably low in their initial year (0.2 to 0.3 percent). By their fifth year, the overall take-up rate was 1.94 percent for non-special-needs programs, and 1.3 percent for special-needs programs.
Programs for Students with Special Needs Have Slightly Lower Take-Up Rates (Tables 4 and 5)
All take-up rates are still lower than 3 percent.
Table 4: Non-special-needs programs

| Program Type | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|
| All programs | 0.28% | 0.75% | 1.10% | 1.54% | 1.94% |
| ESA | n/a | n/a | n/a | n/a | n/a |
| Tax Credit | 0.20% | 0.77% | 0.82% | 1.17% | 1.45% |
| Voucher | 0.36% | 0.74% | 1.40% | 1.94% | 2.51% |

Table 5: Special-needs programs

| Program Type | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|
| All programs | 0.20% | 0.51% | 0.82% | 1.07% | 1.30% |
| ESA | 0.29% | 0.82% | 1.34% | 1.72% | 2.16% |
| Tax Credit | 0.07% | 0.25% | 0.50% | 0.65% | 0.79% |
| Voucher | 0.26% | 0.48% | 0.68% | 0.89% | 1.01% |
Note: All ESA programs in the sample are open to special-needs students only.
Among special-needs programs, tax-credit scholarship programs have a lower take-up rate than voucher and education savings account programs. ESAs have higher take-up rates over each year in operation than other program types. By the fifth year, ESA programs have take-up rates that are more than double those for voucher and tax-credit scholarship programs.
Why Are Take-Up Rates So Low?
Although this descriptive analysis does not tell us why we observe low take-up rates for most programs, there are a few plausible explanations.
A large portion of families are unaware of school-choice programs. Survey work indicates that, when parents were asked why their children did not participate in their state's education-choice programs, 36 to 53 percent of parents with children in public schools in Arizona, Indiana, North Carolina, and Ohio indicated that they were unaware of them. In Indiana, program awareness was lowest among parents with children in district schools and significantly lower among rural district parents than urban district parents.
Program design includes low funding levels and limits placed on participation. On average, choice programs receive just one third of the funding that public-school systems receive. Thus, eligible families who want other options may not be able to afford alternative settings at current choice-program funding levels. Moreover, some programs cap enrollment, and most tax-credit scholarship programs cap tax-credit disbursements, either of which can limit participation.
Families are satisfied with existing options and do not desire change. Public opinion polling indicates that about half of parents would prefer options outside the public-school system if financial costs or transportation were not factors in their decisions. A large disconnect remains between what families want for their children's education and what they actually receive. We doubt, however, that this fully explains the low take-up rates we observe.
Contrary to dire predictions and claims from opponents about choice causing an exodus from public-school systems, take-up in private-education choice programs overall does not have a negative effect on public-school systems or their funding. In fact, research suggests that greater take-up in choice programs leads to better student outcomes for the vast majority of students choosing to remain in public schools. Looking at these facts, it seems clear that the claims of exodus and harm caused by choice programs are greatly exaggerated.
Marty Lueken is director of the Fiscal Research and Education Center at EdChoice. Michael Castro is a research assistant at EdChoice.
By: Martin Lueken Title: Tackling the “Exodus” Claim Sourced From: www.educationnext.org/tackling-the-exodus-claim-reality-take-up-rates-private-education-choice-programs/ Published Date: Thu, 26 May 2022 09:00:47 +0000
The news that a gunman killed 19 students and 2 teachers at an elementary school in Uvalde, Texas is bringing new attention to the question of school safety.
A presciently scheduled May 13 session of a virtual conference on school safety organized by the Harvard Program on Education Policy and Governance was titled “What can be done about school shootings?”
Panelist Katherine Newman, author of Rampage: The Social Roots of School Shootings, noted that “rampage school shootings are a subset of gun violence on school campuses. They are not the whole story by any means. And in fact, they are different in some striking respects from background violence in cities and even on school campuses.”
“Rampage school shootings, at least at the point we were studying them, tend to take place in communities high on social capital, very high on social capital,” said Newman, who is also system chancellor for academic programs at the University of Massachusetts. She asked, “how do we make it easier for people who hear troubling information to come forward and for that information to be properly investigated? Because there was a lot of information circulating. There generally is. And the best hope for interdicting school rampage shootings is making it possible for that information to be acted on.”
Peter Langman, a researcher with the United States Secret Service and author of the books School Shooters and Why Kids Kill, said installing metal detectors at school entryways was no solution.
“My concern is if schools are putting a lot of time and effort into what’s called target hardening, making it harder for intruders to get in or for a gun to get in, they may not be doing threat assessment. They may think they’ve handled the problem. And what a lot of people forget is many school shootings have been wholly or partly outside,” he said.
Said Langman, “If you’re not tapping into what your students know with an anonymous tip line, if you’re not getting the information you need to stay on top of safety issues, hardening the target is not going to keep people safe.”
A third panelist, Dewey Cornell, a professor at the University of Virginia, agreed. “We’re spending far too much on security measures and not enough on school counselors, approaches that create a softer, more welcoming environment in our schools, not a harder one. And the research bears it out. That the schools with the target hardening measures are not statistically safer and the students and teachers don’t feel safer either.”
In May of 2019, I reported from New Hampshire for Education Next that, “School shootings are shaping up as a big issue on the Democratic campaign trail.”
Joe Biden and Kamala Harris, who were then rival candidates for the Democratic presidential nomination, were both emphasizing the issue. “I’m the only guy ever, nationally, to beat the NRA,” Biden said while campaigning in Nashua, N.H. “Look, the Second Amendment exists, but it doesn’t say you can own any weapon you want,” Biden said. “If you own a gun, put a damn trigger lock on it. Put it in a case.”
In May 2019, Biden called the gun issue his “single biggest priority in terms of dealing with the concerns of young people right now.”
Talking to reporters after the event, Biden said he was open to a “federal gun licensing system” or weapons that required their owner’s fingerprint to unlock. He said the biggest political obstacle to gun control law wasn’t gun owners or the National Rifle Association but “gun manufacturers. That’s where the money is.”
Also in May 2019, Harris said as president, she’d give Congress 100 days to act, but if it didn’t she’d take executive action to “ban the import of assault weapons into our country.”
An article in the Spring 2019 issue of Education Next (“Protecting Students from Gun Violence”), said that “target hardening” actions might contribute to student anxiety. “Some students might feel safer and calmer in hardened environments, but it is equally plausible that intensive security procedures send the message that schools are unsafe, fearful places, thus adding an element of stress to the learning environment,” the article said.
A bill to create a school-voucher program in Oklahoma failed earlier this year to win passage in the state legislature. Oklahoma is a state where 68 percent of those surveyed favor school choice, and yet this small school-choice bill, which was sponsored by the state senate’s president pro tempore and supported by the governor, was defeated.
In 2020, I was the executive director of an Oklahoma charter school authorized by the local public-school district. The district retained 5 percent of our public funding each year as its authorizing fee. When the state passed a law capping charter authorizing fees at 3 percent of public funding, the authorizer raised our rent in an amount equal to the fee reduction.
Both events highlight the critical flaw in the current K–12 education-reform movement: it underestimates the system’s hostility to innovation. Even in a school-choice-friendly state like Oklahoma, even the narrowest of reforms only occasionally survive the challenge mounted by the traditional system. When they do survive, the system easily counteracts them. Our public-education system is a bureaucratic monopoly controlled by special-interest groups and, for all intents and purposes, immune to change.
The U.S. compulsory-education system works for no one. It is expensive, achievement lags internationally, teachers are leaving the profession, and parents feel powerless. Despite 60 years of increasing costs and disappointing results, almost nothing has been done to fix the system. Adults argue and point fingers while kids and society pay the price for inaction. Progress in education has stagnated.
Meanwhile, we have made progress in virtually every other human endeavor. We are living longer and living better. We are more prosperous thanks to innovation—borne of entrepreneurs taking risks and bringing new and better ideas to market.
The enemies of innovation, however, are the drivers of our public-education system: government bureaucracy, monopoly, and special interests. Government bureaucracies do not fear failure; they crave resources and therefore serve even higher levels of the bureaucracy to obtain them. Monopolies do not fear competition; they fear failure and so avoid taking the risks necessary for change. Special interests fear competition and crave influence; they subvert market incentives by amassing disproportionate power.
In the field of education reform, those supporting the traditional system call for more resources, while reformers advocate for various forms of choice. The reformers, however, rarely describe the prerequisite political changes that need to be made to make sustainable reform possible. The solution to the ills of our education system may in fact involve more resources eventually and certainly includes greater choice, but it must be preceded by political reforms that make the system amenable to sustainable innovation.
The political processes that control the education system exist outside the established norms of our electoral system. School-board elections are commonly held at times other than when general elections are held. For example, my home state elects school-board members in February. These off-cycle elections have low voter turnout and therefore give disproportionate influence to special interests, more specifically the teachers unions. These off-cycle elections frequently produce school boards with views on education that are different from those of the community the board represents.
School-board elections also commonly omit partisan labels from the ballot. The average voter doesn’t have time to research the positions of individual school-board candidates and so, even in on-cycle elections, will leave that choice blank. Again, this gives more influence to special interests. Partisan labels inform voters about likely candidate positions.
Finally, about 25 percent of states select the top educational executive in elections independent of the governor. Running for office forces candidates to curry special-interest favor. Being elected also makes the state head of instruction a natural competitor for the governor and therefore prone to unproductive conflict.
Until these political processes are changed, we cannot expect the education system to change, either. Even minor reforms will either not survive the legislative process or be easily counteracted once implemented. Real progress can only happen after we break the hold innovation’s enemies have on the education system.
Don Parker was a charter-school board member for 15 years, served two terms as the board chair, and two years as the district’s executive director. He also served three consecutive Oklahoma department of education administrations in a variety of advisory roles.
By: Don Parker Title: Why Even Oklahoma Couldn’t Pass a School Voucher Bill Sourced From: www.educationnext.org/why-even-oklahoma-couldnt-pass-school-voucher-bill/ Published Date: Wed, 25 May 2022 09:00:46 +0000
It was white supremacists and their allies, tweeted Gabriela López, who cost her her seat on the San Francisco Board of Education after city residents voted by a three-to-one margin to remove her from office. “If you fight for racial justice, this is the consequence.”
Alison Collins, who served as vice president of the board until the surfacing of anti-Asian tweets she had written in 2016, also saw herself as a political martyr in the recall vote. She’d fought to “desegregate” the city’s selective (and majority Asian) high school, Lowell, by ending merit-based admissions.
Shamann Walton, president of the County Board of Supervisors, blamed “closet Republicans.” (In a city where 86 percent voted for Joe Biden, that’s a very large closet.)
So, is bluer-than-blue San Francisco turning red? Well, it’s not Virginia. But the school-board earthquake of 2022 has shaken up the political reality.
What’s more, the recall effort was not a conservative cause. It was launched and supported by independents, moderates, and progressives who were infuriated by a toxic mix of incompetence, arrogance, and woke rhetoric.
Residents of nearly every neighborhood voted overwhelmingly on February 15 to recall López, Collins, and Faauuga Moliga, who was far less unpopular with the city’s residents but was unable to separate himself from his colleagues. The 36 percent turnout—47 percent among those requesting a Chinese-language ballot—was higher than expected for an off-cycle election. Low-income neighborhoods turned out at low rates, and the vote in those areas was split. Wealthier neighborhoods turned out at high rates and voted heavily for the recall, perhaps because of the board’s scrapping of merit-based admissions at Lowell High School.
“The voters of this City have delivered a clear message that the School Board must focus on the essentials of delivering a well-run school system above all else,” said Mayor London Breed, who strongly endorsed the recall.
Moliga stepped down the day after the recall vote, but López and Collins stayed until March 11, when they were officially removed. That same day, the mayor replaced the ousted members with three parents, Lainie Motamedi, Lisa Weissman-Ward, and Ann Hsu, who will help choose a new superintendent in June. The three will have to win their seats in November to stay on the board.
Breed consulted with parents, community groups, and the recall organizers before making her choices for the board. Both Hsu, who campaigned for the recall, and Motamedi had served on school-district committees.
Lengthy School Closures
San Francisco’s coronavirus rates were lower than those in other cities, its vaccination rates higher. Yet the public schools remained closed longer in San Francisco than in any other major city. Elementary-school students were out for a year, and the city had to sue to force the district to reopen. Middle and high schools didn’t reopen until fall 2021. (Two high schools opened with “supervision”—but no teaching—for two weeks in May, to qualify for a state grant.)
Led by López, the board president, and Collins, the school board “put performative politics over children,” said Todd David, a father of three who created a parents’ group to support the recall campaign. “What really bothered me is that, early in the pandemic, the superintendent wanted to have a reopening consultant, funded by private donors, and the board said no because the consultant had worked for charter schools,” he said.
“There is no Plan B,” Superintendent Vincent Matthews had warned the board. And there wasn’t.
When it was clear schools wouldn’t reopen in fall 2020, city staffers worked with community groups and nonprofits to open “hubs” where needy students could get supervised remote learning, meals, and recreation. Hubs opened in rec centers, YMCAs, Boys & Girls’ Clubs, and libraries—but not in public schools or on school playgrounds. In a study, researchers blamed resistance from the board, specifically Collins, and from the teachers union.
“The city did amazing work to open learning hubs,” said David. “The board . . . it’s rare to see a governing body so completely fail.”
While public schools were closed—and private schools were open—the board decided to rename 44 schools based on a muddled and historically inaccurate process that declared Abraham Lincoln, George Washington, Paul Revere, Dianne Feinstein, and others insufficiently pure.
Mayor Breed called it “offensive” to rename schools that weren’t open. Even San Franciscans who supported renaming some schools thought the board should have waited until the crisis was over—and until someone could figure out whether Roosevelt Middle School was named for Teddy or FDR.
Ultimately, the board dropped the renaming effort. It also failed in its quest to whitewash a historic mural at Washington High School.
But the board’s virtue signaling also signaled an indifference to the job of running a school district.
In January 2021, nearly a year into remote education, the district reported significant learning losses for Black, Hispanic, and Asian students and students from low-income families.
López waved off these results. Students “are learning more about their families and their cultures” and “just having different learning experiences than the ones we currently measure,” she told the San Francisco Chronicle.
At a board meeting in March 2021, Collins reminded Ritu Khanna, the district’s chief of research, planning, and assessment, to use the term “learning change” instead of “learning loss.”
That infuriated Kit Lam, an immigrant from Hong Kong with two children. He saw his teenage son struggling with distance learning and knew the boy was not alone. As an investigator for the school district, Lam saw that “many students were falling way, way behind,” and others were just missing.
Lam Zoomed into school-board meetings, staying up late and hoping to hear about the reopening plan. There was no plan.
The recall effort was the brainchild of two newcomers to San Francisco, a high-tech couple with no political experience or contacts. Siva Raj’s two children were struggling with remote classes and had become frustrated, depressed, bored, and angry. Autumn Looijen’s three children were learning—happily—in person in suburban Los Altos, one of the first Bay Area districts to reopen schools.
Raj and Looijen put the recall on social media, and it caught fire. Lam reached out to them and volunteered to translate the recall site into Chinese and then to collect signatures on recall petitions and then to register voters. “At first, I wanted to be anonymous,” Lam says. “But I made a promise to my son: ‘I will speak for you.’ So I spoke out.”
When Lam’s union of school-district workers met to discuss the election, he argued in favor of the recall. He lost the first vote: staffers wanted to stand with the teachers union, he says. But, on a second vote, they decided not to contribute money or volunteers to the anti-recall campaign.
Fire in the Belly
At nearly every high school in America that admits students based on grades and test scores, hard-studying Asian-American students are well represented. For years, San Francisco has tinkered with Lowell’s admissions process to qualify more Black and Hispanic students but has made little progress.
The board used a lottery for admissions in 2020, arguing that the pandemic had disrupted grades and testing. Collins showed her disdain for the traditional test-based admissions process in a board meeting. “Merit, meritocracy, and especially meritocracy based on standardized testing . . . those are racist systems,” she said.
The next year, the board voted to turn Lowell into a comprehensive high school open to all students. Lowell alumni were furious. So were Asian immigrant parents (see “Exam-School Admissions Come Under Pressure amid Pandemic,” features, Spring 2021).
“People see the success of Asian students and think they’re advantaged,” said Lam. In Chinatown, “you can see a family of four living in a single room with a shared bathroom down the hall. We rely on good public education. We can’t afford private school.”
Lowell alumni filed a lawsuit, which ultimately succeeded. The new school board will decide Lowell’s fate. Hsu and Motamedi support merit-based admissions at Lowell. Weissman-Ward did not commit herself but said she supports “academically rigorous programs.”
Not long after the recall campaign began, someone posted tweets by Collins from 2016, before she joined the board, in which she accused Asian Americans of using “white supremacist thinking to assimilate and ‘get ahead’” and remarked that “being a house n****r is still being a n****r.”
In the uproar, Collins was ousted as vice president and was replaced by Moliga. She remained in office, but the majority of board members gave her a no-confidence vote. Collins sued the district and her board colleagues (except for López) for $87 million. Among other things, the suit charged “injury to spiritual solace.”
The suit, thrown out by a judge in August 2021, “cost the budget-strapped district some $400,000 to defend,” wrote Clara Jeffery in Mother Jones.
It would have been the last straw for San Franciscans, if there weren’t so many other last straws.
Chinese Americans, already angry about the board’s hostility to merit-based admissions, saw the tweets as proof that they were getting no respect.
“Education is a fire-in-the-belly issue” for Chinese parents, said Bayard Fong, president of the Chinese-American Democratic Club and the father of three children. His wife works for the district as an administrator.
The school board “acted as though some students mattered more than others,” said Fong. “We were being ignored or treated as though we were the problem.”
The club provided 100 volunteers to gather signatures for recall petitions.
Ann Hsu, one of the mayor’s replacements for the ousted school-board members, was a PTA president and former Silicon Valley entrepreneur who hadn’t been involved in politics before the recall effort arose. Then she saw her son languishing during 18 months of remote schooling. Unengaged by online classes, he “wasted his time all day, every day, playing video games,” she wrote in the New York Post.
Hsu helped form the Chinese/API Voter Outreach Taskforce to register voters for the recall. Many residents were not aware that noncitizen parents, empowered by a 2016 charter-amendment ballot initiative, can vote for school board in San Francisco. Volunteers signed up noncitizens too.
Chinese in America must “learn to speak up,” wrote Hsu.
“We Won’t Be Silent Anymore”
The board managed to anger a lot of other groups, too.
When the recall qualified for the ballot, Todd David, who runs the Housing Action Coalition, backed Raj and Looijen with his political savvy. He had political experience working for the election of State Senator Scott Wiener, another pro-recall liberal. “Siva and Autumn did a phenomenal job of grassroots organizing,” said David. “I knew how to do fundraising and a traditional campaign.” The recall raised an astounding $1.9 million, including large donations from high-tech investors and real-estate groups.
The “no on recall” side raised a small fraction of that, mostly from unions, and got some volunteers from the “Berniecrats,” but only Moliga really tried to fight the recall.
The school board’s defenders said wealthy “privatizers” wanted to destroy public education. One of the donors to the recall effort was the pioneer venture capitalist Arthur Rock, 95, a billionaire who has also supported charter schools.
But others say the recall was the only way to save San Francisco Unified.
“Parents who have choices are opting out,” said Patrick Wolff, a parent who runs Families for San Francisco, which launched Campaign for Better Public Schools to back the recall.
“The recall effort, while catalyzed by Covid, reflects deep discontent of the parent community with the state of the public schools,” said Wolff. “San Francisco has some of the worst achievement gaps in the state and one of the worst 3rd-grade reading levels.”
The state has threatened to take over if the district can’t balance its budget. To survive financially, the district must regain parents’ trust and stop losing students, said Wolff.
It won’t be easy.
San Francisco has the lowest percentage of children of any major city—more dogs than children—and a high percentage of those children attend private school. Before the pandemic, the school board tended to fly under the radar.
“During the pandemic, parents paid a lot more attention to the schools,” said Wolff. “Everything was on Zoom.”
Families for San Francisco will inform parents—and the whole city—of what public schools are doing, he said. The group has already challenged the district’s claim that “equity math” is working, citing missing, misleading, and cherry-picked data in the school system’s evaluations.
The Chinese-American community will have more clout going forward because of the landslide recall vote, Fong said. “We won’t be silent anymore. We’re standing up.”
School-board members will treat citizens with more respect, predicts Raj. They now know that people are watching.
The next political earthquake in San Francisco could come in June, when voters will decide whether to recall District Attorney Chesa Boudin, who some blame for the city’s crime wave.
Despite a surge in school-board recall efforts across the country in 2021, most didn’t qualify for the ballot. Ballotpedia tracked 92 such efforts naming 237 officials. Ultimately, 17 officials were subject to recall votes, and only one was recalled. More recall efforts are in the works in 2022, often motivated by disagreements over pandemic policies and how to teach about gender identity and racism. In Loudoun County, Virginia, where school-board meetings have been very contentious, a conservative parent group called Fight for Schools is leading a campaign to recall some board members. It’s a liberal county—Republican Glenn Youngkin got only 44 percent of the vote there in his winning bid for governor—but anything is possible in 2022.
Joanne Jacobs is a freelance education writer and blogger (joannejacobs.com) based in California.
By: Joanne Jacobs Title: School Board Shakeup in San Francisco Sourced From: www.educationnext.org/school-board-shakeup-san-francisco-arrogance-incompetence-woke-rhetoric-trigger-successful-recall-effort/ Published Date: Tue, 24 May 2022 09:00:31 +0000
A front-page New York Times article reports that “Lucy Calkins, a leading literacy expert, has rewritten her curriculum to include a fuller embrace of phonics and the science of reading.”
Says the Times, “after decades of resistance, Professor Calkins has made a major retreat.”
The Summer 2007 issue of Education Next featured “The Lucy Calkins Project: Parsing a self-proclaimed literacy guru.” Barbara Feinberg wrote then, “Aside from grumblings from the New York City teachers required to work under her system, there has been remarkably little open debate about the basic premises behind Calkins’s approach, or even feedback on how the programs are faring in the classroom.”
By: Education Next Title: Lucy Calkins Adjusts, and the Press Takes Notice Sourced From: www.educationnext.org/lucy-calkins-adjusts-and-the-press-takes-notice/ Published Date: Mon, 23 May 2022 16:24:21 +0000
A Distinguished Senior Fellow at the Thomas B. Fordham Institute and a Senior Fellow at Stanford’s Hoover Institution, Chester E. Finn, Jr., joins Paul E. Peterson to discuss Finn’s new book, Assessing the Nation’s Report Card: Challenges and Choices for NAEP.
As I write this, representative samples of 4th and 8th graders are taking National Assessment of Educational Progress tests in math and English. These exams must be held every two years in accordance with federal law to determine how well ongoing education reforms are working, whether achievement gaps between key demographic groups are growing or shrinking, and to what extent the nation is still “at risk” due to weakness in its K–12 system. Best known as “The Nation’s Report Card,” the NAEP results have long displayed student achievement in two ways: as points on a stable vertical scale that typically runs from 0 to 300 or 500 and as the percentages of test takers whose scores reach or surpass a trio of “achievement levels.” These achievement levels—dubbed “basic,” “proficient,” and “advanced”—were established by the National Assessment Governing Board, an almost-independent 26-member body, and have resulted in the closest thing America has ever had to nationwide academic standards.
Though the NAEP achievement levels have gained wide acceptance among the public and in the media, they are not without their detractors. At the outset, the idea that NAEP would set any sort of achievement standards was controversial: what business did the federal government have getting involved in the responsibilities of states and localities? Since then, critics have complained that the achievement levels are too rigorous and are used to create a false sense of crisis. Now, even after three decades, the National Center for Education Statistics continues to insist that the achievement levels should be used on a “trial basis.”
How and why all this came about is quite a saga, as is the blizzard of controversy and pushback that has befallen the standards since day one.
Recognizing the Need for Performance Comparisons
In NAEP’s early days, results were reported according to how test takers fared on individual items. It was done this way both because NAEP’s original architects were education researchers and because the public-school establishment demanded that this new government testing scheme not lead to comparisons between districts, states, or other identifiable units of the K–12 system. Indeed, for more than two decades after the exams’ inception in 1969, aggregate NAEP data were generated only for the nation as a whole and four large geographic quadrants. In short, by striving to avoid political landmines while pleasing the research community, NAEP’s designers had produced a new assessment system that didn’t provide much of value to policymakers, education leaders, journalists, or the wider public.
Early critical appraisals pointed this out and suggested a different approach. A biting 1976 evaluation by the General Accounting Office said that “unless meaningful performance comparisons can be made, states, localities, and other data users are not as likely to find the National Assessment data useful.” Yet nothing changed until 1983, when two events heralded major shifts in NAEP.
The first stemmed from a funding competition held by the National Institute of Education, which moved the main contract to conduct NAEP from the Denver-based Education Commission of the States to the Princeton-based Educational Testing Service. ETS’s successful proposal described plans to overhaul many elements of the assessment, including how test results would be scored, analyzed, and reported.
The noisier event that year, of course, was the declaration by the National Commission on Excellence in Education that the nation was “at risk” because its schools weren’t producing adequately educated graduates. Echoed and amplified by education secretaries Terrel Bell and Bill Bennett, as well as President Reagan himself, A Nation at Risk led more state leaders to examine their K–12 systems and find them wanting. But they lacked clear, comparative data by which to gauge their shortcomings and monitor progress in reforming them. The U.S. Department of Education had nothing to offer except a chart based on SAT and ACT scores, which dealt only with a subset of students near the end of high school. NAEP was no help whatsoever. The governors wanted more.
Some of this they undertook on their own. In mid-decade, the National Governors Association, catalyzed by Tennessee governor Lamar Alexander, launched a multi-year education study-and-renewal effort called “Time for Results” that highlighted the need for better achievement data. And the Southern Regional Education Board (also prompted by Alexander) persuaded a few member states to experiment with the use of NAEP tests to compare themselves.
At about the same time, Secretary Bennett named a blue-ribbon “study group” to recommend possible revisions to NAEP. Ultimately, that group urged major changes, almost all of which were then endorsed by the National Academy of Education. This led the Reagan administration to negotiate with Senator Ted Kennedy a full-fledged overhaul that Congress passed in 1988, months before the election of George H.W. Bush, whose campaign for the Oval Office included a pledge to serve as an “education president.”
The NAEP overhaul was multi-faceted and comprehensive, but, in hindsight, three provisions proved most consequential. First, the assessment would have an independent governing board charged with setting its policies and determining its content. Second, in response to the governors’ request for better data, NAEP was given authority to generate state-level achievement data on a “trial” basis. Third, its newly created governing board was given leeway to “identify” what the statute called “appropriate achievement goals for each age and grade in each subject to be tested.” (A Kennedy staffer later explained that this wording was “deliberately ambiguous” because nobody on Capitol Hill was sure how best to express this novel, inchoate, and potentially contentious assignment.)
In September 1988, as Reagan’s second term neared an end and Secretary Bennett and his team started packing up, Bennett named the first 23 members to the new National Assessment Governing Board. He also asked me to serve as its first chair.
The Lead Up to Achievement Levels
The need for NAEP achievement standards had been underscored by the National Academy of Education: “NAEP should articulate clear descriptions of performance levels, descriptions that might be analogous to such craft rankings as novice, journeyman, highly competent, and expert… Much more important than scale scores is the reporting of the proportions of individuals in various categories of mastery at specific ages.”
Nothing like that had been done before, though ETS analysts had laid essential groundwork with their creation of stable vertical scales for gauging NAEP results. They even placed markers at 50-point intervals on those scales and used those as “anchors” for what they termed “levels of proficiency,” with names like “rudimentary,” “intermediate,” and “advanced.” Yet there was nothing prescriptive about the ETS approach. It did not say how many test takers should be scoring at those levels.
Within months of taking office, George H.W. Bush invited all the governors to join him—49 turned up—at an “education summit” in Charlottesville, Virginia. Their chief product was a set of wildly ambitious “national education goals” that Bush and the governors declared the country should reach by century’s end. The third of those goals stated that “By the year 2000, American students will leave grades 4, 8, and 12 having demonstrated competency in challenging subject matter including English, mathematics, science, history, and geography.”
It was a grand aspiration, never mind the unlikelihood that it could be achieved in a decade and the fact that there was no way to tell whether progress was being made. At the summit’s conclusion, the United States had no mechanism for monitoring progress toward that optimistic target, no agreed-upon way of specifying it, and no reliable gauge for reporting achievement by state (although the new NAEP law allowed for this). Yet such tools were obviously necessary for tracking the fate of education goals established by the governors and the president.
They wanted benchmarks, too, and wanted them attached to NAEP. In March 1990, just six months after the summit, the National Governors Association encouraged NAGB to develop “performance standards,” explaining that the “National Education Goals will be meaningless unless progress toward meeting them is measured accurately and adequately, and reported to the American people.”
Conveniently, if not entirely coincidentally, NAGB had already started moving in this direction at its second meeting in January 1989. As chair, I said that “we have a statutory responsibility that is the biggest thing ahead of us to—it says here: ‘identify appropriate achievement goals for each age and grade in each subject area to be tested.’ …It is in our assignment.”
I confess to pushing. I even exaggerated our mandate a bit, for what Congress had given the board was not so much assignment as permission. But I felt the board had to try to do this. And, as education historian Maris Vinovskis recorded, “members responded positively” and “NAGB moved quickly to create appropriate standards for the forthcoming 1990 NAEP mathematics assessment.”
In contrast to ETS’s useful but after-the-fact and arbitrary “proficiency levels,” the board’s staff recommended three achievement levels. In May 1990, NAGB voted to proceed—and to begin reporting the proportion of students at each level. Built into our definition of the middle level, dubbed “proficient,” was the actual language of the third goal set in Charlottesville: “This central level represents solid academic performance for each grade tested—4, 8 and 12. It will reflect a consensus that students reaching this level have demonstrated competency over challenging subject matter.”
Thus, just months after the summit, a standard-setting and performance-monitoring process was in the works. I accept responsibility for nudging my NAGB colleagues to take an early lead on this, but they needed minimal encouragement.
Early Attempts and Controversies
In practice, however, this proved to be a heavy lift for a new board and staff, as well as a source of great contention. Staff testing specialist Mary Lyn Bourque later wrote that “developing student performance standards” was “undoubtedly the board’s most controversial responsibility.”
The first challenge was determining how to set these levels, and who would do it. As Bourque recounted, we opted to use “a modified Angoff method” with “a panel of judges who would develop descriptions of the levels and the cut scores on the NAEP score scale.” The term “modified Angoff method” has reverberated for three decades now in connection with those achievement levels. Named for ETS psychologist William Angoff, this procedure is widely used to set standards on various tests. At its heart is a panel of subject-matter experts who examine every question and estimate how many test takers might answer it correctly. The Angoff score is commonly defined as the lowest cutoff score that a “minimally qualified candidate” is likely to achieve on a test. The modified Angoff method uses the actual test performance of a valid student sample to adjust those predicted cutoffs in case reality doesn’t accord with expert judgments.
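To illustrate the basic mechanics, here is a minimal sketch of an Angoff-style computation, assuming each panelist estimates, item by item, the probability that a borderline (“minimally qualified”) student answers correctly, and that the cut score is the panel average of each panelist's summed probabilities. This is illustrative only; NAGB's modified procedure adds multiple rating rounds and empirical adjustment against actual student performance.

```python
def angoff_cut_score(ratings: list[list[float]]) -> float:
    """ratings[p][i] is panelist p's probability that a borderline
    student answers item i correctly. Each panelist's sum is the raw
    score they expect from a borderline student; the cut score is
    the average of those sums across the panel."""
    per_panelist = [sum(item_probs) for item_probs in ratings]
    return sum(per_panelist) / len(per_panelist)

# Three panelists judging a four-item test:
ratings = [
    [0.6, 0.7, 0.5, 0.8],  # panelist 1 expects 2.6 items correct
    [0.5, 0.8, 0.4, 0.7],  # panelist 2 expects 2.4
    [0.7, 0.6, 0.6, 0.9],  # panelist 3 expects 2.8
]
print(f"{angoff_cut_score(ratings):.2f}")  # -> 2.60 raw-score points
```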
As the NAEP level-setting process got underway, there were stumbles, missteps, and miscalculations. Bourque politely wrote that the first round of standard-setting was a “learning experience for both the board and the consultants it engaged.” It consumed just three days, which proved insufficient, leading to follow-up meetings and a dry run in four states. It was still shaky, however, leading the board to dub the 1990 cycle a trial and to start afresh for 1992. The board also engaged an outside team to evaluate its handiwork.
Those reviewers didn’t think much of it, reaching some conclusions that in hindsight had merit but also many that did not. But the consultants destroyed their relationship with NAGB by distributing their draft critique without the board’s assent to almost 40 others, “many of whom,” wrote Bourque, “were well connected with congressional leaders, their staffs, and other influential policy leaders in Washington, D.C.” This episode led board members to conclude that their consultants were keener to kill off the infant level-setting effort than to perfect its methodology. That contract was soon canceled, but this episode qualified as the first big public dust-up over the creation and application of achievement levels.
NCLB Raises the Stakes
Working out how best to do those things took time, because the methods NAGB used, though widespread today, were all but unprecedented at the time. In Bourque’s words, looking back from 2007, using achievement-level descriptions “in standard setting has become de rigueur for most agencies today; it was almost unheard of before the National Assessment.”
Meanwhile, criticism of the achievement-level venture poured in from many directions, including such eminent bodies as the National Academy of Education, National Academy of Sciences, and General Accounting Office. Phrases like “fundamentally flawed” were hurled at NAGB’s handiwork.
The achievement levels’ visibility and combustibility soared in the aftermath of No Child Left Behind, enacted in early 2002, for that law’s central compromise left states in charge of setting their own standards while turning NAEP into auditor and watchdog over those standards and the veracity of state reports on pupil achievement. Each state would report how many of its students were “proficient” in reading and math according to its own norms as measured on its own tests. Then, every two years, NAEP would report how many of the same states’ students at the same grade levels were proficient in reading and math according to NAGB’s achievement levels. When, as often happened, there was a wide gap—nearly always in the direction of states presenting a far rosier picture of pupil attainment than did NAEP—it called into question the rigor of a state’s standards and exam scoring. On occasion, it was even said that such-and-such a state was lying to its citizens about its pupils’ reading and math prowess.
In response, of course, it was alleged that NAEP’s levels were set too high, to which the board’s response was that its “proficient” level was intentionally aspirational, much like the lofty goals framed back in Charlottesville. It wasn’t meant to shed a favorable light on the status quo; it was all about what kids ought to be learning, coupled with a comparison of present performance to that aspiration.
Some criticism was constructive, however, and the board and its staff and contractors—principally the American College Testing organization—took it seriously and adjusted the process, including a significant overhaul in 2005.
Tensions with the National Center for Education Statistics
Statisticians and social scientists want to work with data, not hopes or assertions, with what is, not what should be. They want their analyses and comparisons to be driven by scientific norms such as validity, reliability, and statistical significance, not by judgments and aspirations. Hence the National Center for Education Statistics’ own statisticians resisted the board’s standard-setting initiative for years. At times, it felt like guerrilla warfare as each side enlisted external experts and allies to support its position and find fault with the other.
As longtime NCES commissioner Emerson Elliott reminisces on those tussles, he explains that his colleagues’ focus was “reporting what students know and can do.” Sober-sided statisticians don’t get involved with “defining what students should do,” as that “requires setting values that are not within their purview. NCES folks were not just uncomfortable with the idea of setting achievement levels, they believed them totally inappropriate for a statistical agency.” He recalled that one of his senior colleagues at NCES was “appalled” when he learned what NAGB had in mind. At the same time, with the benefit of hindsight, Elliott acknowledges that he and his colleagues knew that something more than plain data was needed.
By 2009, after NAEP’s achievement levels had come into widespread use and a version of them had been incorporated into Congress’s own accountability requirements for states receiving Title I funding, the methodological furor was largely over. A congressionally mandated evaluation of NAEP that year by the Universities of Nebraska and Massachusetts finally recognized the “inherently judgmental” nature of such standards, noting the “residual tension between NAGB and NCES concerning their establishment,” then went on to acknowledge that “many of the procedures for setting achievement levels for NAEP are consistent with professional testing standards.”
That positive review’s one big caveat faulted NAGB’s process for not using enough “external evidence” to calibrate the validity of its standards. Prodded by such concerns, as well as complaints that “proficient” was set at too high a level, the board commissioned additional research that eventually bore fruit. The achievement levels turn out to be more solidly anchored to reality, at least for college-bound students, than most of their critics have supposed. “NAEP-proficient” at the 12th-grade level turns out to mean “college ready” in reading. College readiness in math is a little below the board’s proficient level.
As the years passed, NAGB and NCES also reached a modus vivendi for presenting NAEP results. Simply stated, NCES “owns” the vertical scales and is responsible for ensuring that the data are accurate, while NAGB “owns” the achievement levels and the interpretation of results in relation to those levels. The former may be said to depict “what is,” while the latter is based on judgments as to how students are faring in relation to the question “how good is good enough?” Today’s NAEP report cards incorporate both components, and the reader sees them as a seamless sequence.
Yet the tension has not entirely vanished. The sections of those reports that are based on achievement levels continue to carry this note: “NAEP achievement levels are to be used on a trial basis and should be interpreted and used with caution.” The statute still says, as it has for years, that the NCES commissioner gets to determine when “the achievement levels are reasonable, valid, and informative to the public,” based on a formal evaluation of them. To date, despite the widespread acceptance and use of those levels, that has not happened. In my view, it’s long overdue.
Looking Ahead
Accusations continue to be hurled that the achievement levels are set far too high. Why isn’t “basic” good enough? And—a concern to be taken seriously—what about all those kids, especially the very large numbers of poor and minority pupils, whose scores fall “below basic”? Shouldn’t NAEP provide much more information about what they can and cannot do? After all, the “below basic” category ranges from complete illiteracy to the cusp of essential reading skills.
The achievement-level refresh that’s now underway is partly a response to a 2017 recommendation from the National Academies of Sciences, Engineering and Medicine that urged an evaluation of the “alignment among the frameworks, the item pools, the achievement-level descriptors, and the cut scores,” declaring such alignment “fundamental to the validity of inferences about student achievement.” The board engaged the Pearson testing firm to conduct a sizable project of this sort. It’s worth underscoring, however, that this is meant to update and improve the achievement levels, their descriptors, and how the actual assessments align with them, not to replace them with something different.
I confess to believing that NAEP’s now-familiar trinity of achievement levels has added considerable value to American education and its reform over the past several decades. Despite all the contention that they’ve prompted over the years, I wouldn’t want to see them replaced. But to continue measuring and reporting student performance with integrity, they do require regular maintenance.
Chester E. Finn, Jr., is a Distinguished Senior Fellow at the Thomas B. Fordham Institute and a Senior Fellow at Stanford’s Hoover Institution. His latest book is Assessing the Nation’s Report Card: Challenges and Choices for NAEP, published by the Harvard Education Press.
By: Chester E. Finn, Jr. Title: “It Felt Like Guerrilla Warfare” Sourced From: www.educationnext.org/it-felt-like-guerrilla-warfare-student-achievement-levels-nations-report-card-brief-history-basic-proficient-advanced/ Published Date: Tue, 17 May 2022 09:00:38 +0000