During the early years of the twentieth century, education was understood in a variety of ways and was considered to have multiple and diverse goals. Although preparing for a career (referred to then as “vocation”) was one of these goals, educational advisors listed a wide variety of benefits to be derived from schooling, including health, worthy home membership, use of leisure, and ethical character (Commission on the Reorganization of Secondary Education, 1918).

By the middle years of the century, the focus of schooling was beginning to narrow, from producing well-rounded citizens to providing the nation with future workers, or “human capital” (Becker, 1962).

Two factors accelerated this shift in the focus of schooling. The first was the military’s sudden emphasis during WWII on training soldiers rapidly and efficiently for practical purposes related to the war effort. Many military trainers asserted forcefully that what the nation needed was soldiers who knew how to do things, and that education based on “high culture” was generally useless for getting things done (Lightfoot, 2001). This idea both grew out of and encouraged a shift in national perceptions of schools.

Following the war, education began to be framed as a weapon in the “Cold War.” After victory against the Axis powers, the Western Allies began to turn their attention to the threat of “World Communism.” Even before the war ended, Winston Churchill identified education as the weapon that would allow developed countries to build and maintain empires.

In the postwar decades, the United States occupied a position of economic preeminence and consistent growth. Many macroeconomists and educational planners in the United States began to theorize that education and economics were inextricably linked. They extrapolated from the fact that people with more education tend to make more money to the idea that the accumulated education of a nation’s citizens is the major determinant of its overall prosperity. The social theorist Daniel Lerner put the idea of human capital in personal terms—individuals need to use education to dream of a better (i.e., more prosperous) life:

Empathy . . . endows a person with the capacity to imagine himself as the proprietor of a larger grocery store in a city . . . to be interested in what is “going on in the world” and to “get out of his hole.” (1958, p. 234)

The economists Harbison and Myers made it clear that poverty and underdevelopment are national economic issues. Getting an education and securing a job or a successful business is an individual’s duty to society as a whole, with the implication that those who fail to do so have a negative effect on their nation’s entire economy.

Countries are underdeveloped because most of their people are underdeveloped, having had no opportunity of expanding their potential capabilities in the service of society. (Harbison & Myers, 1964)

It was during this immediate postwar period that the terms “First World” and “Third World” first came into common usage and were invariably linked to the concept of education. The United States was prosperous and economically dominant, and in those days before standardized international comparison tests were used as a measure of the quality of a nation’s educational system (Ravitch, 2013), planners and educational consultants rarely questioned the preeminence of the US educational system. Providing educational advisors and consultants to Third World countries was a particularly inexpensive type of foreign aid, and the theory at the time was that if countries built the right number and the right type of schools, prosperity would follow. Beginning in the late 1960s, when the World Bank began lending large amounts of money for educational projects (Goldman, 2005), many developing countries began taking out sizeable loans for educational “investments” on the assumption that a more educated population would grow the economy and pay the debt back.

From the late 1940s through the 1950s, most of the focus on education for development was in the context of low-income countries. Beginning in the 1960s, however, the United States began focusing to an increasing degree on social justice and equality. Lyndon Johnson’s War on Poverty, beginning in 1964, included a large number of social welfare and economic initiatives, but much of the focus was on education. The arguments used to fund these reforms described the poor in the United States in terms very similar to those used, up to that point, to describe the inhabitants of Third World countries.

Throughout this period, there was a dialogical counterpoint between two arguments: on the one hand, that poverty was a structural problem; on the other, that a lack of education and dysfunctional attitudes caused poverty and made poor people a risk to their own well-being and to that of the rest of society. Consider the following two quotes from the War on Poverty era.

There is some danger in the current usage of the term “culture of poverty” because it suggests that something other than the absence of money distinguishes the poor as a group from the rest of us . . . [T]here is a danger in suggesting that these qualities are intrinsic to the poor themselves rather than the end product of remediable social ills. (Wickendon in Patterson, 1994, p. 121)

The author cited above is clear in his assertion that poverty is structural, not personal or cultural. For those who, like him, believed that poverty was a lack of money, the poor needed entitlement and jobs programs rather than programs to “fix” them. The same era, however, saw a competing strand of discourse concerning poverty that eventually became dominant and influenced educational discourse and policy for decades to come. This discourse asserts that children in poverty come from backgrounds that fail to give them the skills and abilities necessary to become productive future citizens, and that they must be subjected to remedial measures before they can take advantage of opportunity. Compare the following bit of dialogue from the congressional hearings on Head Start to the immediately preceding citation.

A child’s intelligence is shaped by his experiences and his mental development is heavily determined by the conditions and the environment he encounters in his first few years of life. (US Senate, 1970, pp. 52-53)

This type of argument shows concern for the welfare of the poor, but it also identifies them as either literally or figuratively disabled. The following authors, writing in an academic journal during the same period, are explicit about the comparison of poverty to a disability and the inability of poor people to take advantage of opportunity without educational remediation.

Unemployment and sub-employment in these slums are—much more than in other areas—a matter of personal rather than economic condition . . . The problem is less one of inadequate opportunity than of inability, under existing conditions, to use opportunity. Unemployment in these areas is primarily a story of inferior education, no skills, [etc.] Fundamental to the problem seem to be the linguistic barriers Mexican-Americans must face when confronting the educational system, the labor market, and society in general . . . [Lack of proficiency in English is] as much of a handicap socially speaking as a cleft palate, deafness, etc. are in organic or physical terms. This type of handicap can be overcome by intelligent diagnosis and special instruction. (Natalicio & Natalicio, 1969, pp. 263-264, 272; italics added)

Authors like the above wrote from a social justice perspective. They were strong proponents of educational programs targeted at underserved groups. At the same time, their arguments make a move I sometimes refer to as “dancing with the devil,” that is, highlighting the weaknesses of the very groups that educators and social justice advocates want to help in order to receive government funding for programs targeted toward them. As this language evolved over the next several decades, members of various “underachieving” groups were seen not only as suffering “disadvantage” and injustice but also, through the drag they placed on the economy overall, as putting the nation as a whole at risk. This argument, brought to maturity, is evident in the following passage from the late 1990s.

Latinos’ current social, educational and economic status must vastly improve if their demographic power is to be translated into economic strength, both for themselves, and for the United States . . . Hispanics remain the most undereducated major segment of the U.S. population . . . Because of the size and youthfulness of the Hispanic population, its educational status has long-term social and economic consequences that will affect the development and stability of the U.S. population. (Perez & Salazar, 1997, pp. 48-49)

In other words, the general population of the United States should care about young Latinos and be willing to “invest” in educational programs benefiting them because failure to do so will cause economic damage to “the rest of us.” Proponents of educational programs must now engage in a kind of economic blackmail. “We” need to help “them” because if we do not, “our” own economic welfare will be affected. As Foucault points out, we must be constantly vigilant in the tactics we use to promote our agendas, because “everything is dangerous” (1983, p. 231), and this discursive move may have dangerous consequences.

The risks involved in this tactic began to manifest in the 1980s as people in the United States began to perceive that the country no longer held an unquestioned economic dominance in the world but, instead, was experiencing international competition—in many cases from areas, such as Asia, not traditionally considered to be competitors. A great deal of national dialogue centered on the role of education in failing to produce consistent economic progress or, in other words, not producing sufficient returns on our national investment in education. The best-known manifestation of this newly growing concern is the report, “A Nation at Risk.” The authors stated directly that the US educational system was holding the country back, and used the metaphors of international competition as “war,” and of educators as “unfriendly powers” to refer to the supposed failures of the nation’s educational system.

Our nation is at risk. Our once unchallenged preeminence in commerce, industry, science and technological innovation is being overtaken by competitors throughout the world . . . If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. (National Commission on Excellence in Education, 1983, p. 1)

“A Nation at Risk” is a marker of a more general change in public perceptions of education. As the United States became less and less dominant in the world economy, perceptions of the US educational system also shifted. A publication from the US Department of Education reflects this.

Deep public concerns about the Nation’s future [have] created a tidal wave of school reform which promises to renew American education. Citizens, perplexed about social, civic and economic difficulties [have] turned to education as an anchor of hope for the future of their children. (US Department of Education, 1984, p. 10)

Ironically, rather than being a point at which Americans experienced renewed hope in their educational system, the mid-1980s was a tipping point after which policy makers criticized the US educational system in stronger and stronger terms, while educators continued to promise that the schools would solve social and economic problems if only the country were willing to invest in them sufficiently. It was roughly at this point that discourse concerning US education separated into two very different strands—one promising that money invested in schools would more than pay off, while the other portrayed educators as impediments to national progress.

In an interview with the Public Broadcasting System, conducted during the 2000 presidential race between George Bush and Al Gore, William Galston, deputy assistant to the president on domestic policy and senior advisor to the Gore campaign, expressed optimism that increased investment in public education would close the nation’s achievement and income gaps along racial and class lines and bring the nation to a renewed period of prosperity. Although Galston stressed “reform” and “expectations,” he also argued that one of the largest factors in the achievement gap is differential spending between urban and suburban schools, and promised that a Gore administration would help remedy that problem.

I don’t know of very many people at any point along the political spectrum who want to look forward 10 or 20 years and see an America divided along lines of race, ethnicity, and class . . . Public education reform that brings all public schools up to a common standard of achievement and expectation is one of the best ways of closing that gap. (Galston, 2000)

More recently, a report on the economic benefits of investment in preschool argues that money spent on education for young children always pays off in economic terms.

Quality preschool education is a profitable investment. Rigorous efforts to estimate whether the economic benefits of early childhood education outweigh the costs of providing these educational opportunities indicate that they are a wise financial investment. Available benefit-cost estimates . . . range from three to seven dollars saved for every dollar spent. (The Access Center, 2012)

At the same time, however, a more negative orientation toward education, especially public education, was taking hold in public discourse: schools were portrayed as monopolies, and teachers’ unions as extortionists or even compared metaphorically to criminal or terrorist organizations.

An example of this comes from an interview with William Bennett, secretary of education from 1985 to 1988.

Just after he was appointed, Bennett says, he received a visit from a National Education Association delegation . . . Bennett had promised to push reforms in response to A Nation at Risk . . . “We hope we will have your cooperation,” one of the union leaders began after they had exchanged pleasantries. “If not, it will be unfortunate.” “Unfortunate for whom?” Bennett shot back. “For you. You really don’t want to be in a fight with us.” “Are you threatening me? Are you gonna put a horse head in my bed?” [A reference to the Mafia from the movie The Godfather] . . . “You guys are the problem,” Bennett continued, . . . “and I’m coming after you.” (Brill, 2011, p. 46)

Bennett has consistently made it clear that he sees teachers and teachers’ unions as enemies of any kind of reform or progress in the US educational system and as a drag on the economy as a whole.

Sound education reforms are threatened by the determined opposition they elicit. At first [the opposition] appeared as a form of denial—as a claim that things were not as bad as they seemed in our schools. A little later the opposition to reform took a different tack—admitting that things might be bad, but insisting that they could not be fixed in the schools—that first “society” or “the system” must be altered. Today we hear what might be called opposition by extortion—the false claim that to fix our schools will first require a fortune in new funding. But more and more the opposition to school reform is now manifested in the narrow, self-interested exercise of political power in statehouses and in school board meetings. (Bennett, 1988, p. 3)

One of Bennett’s successors, Rod Paige, went so far as to compare a major teachers’ union to a terrorist organization.

Education Secretary Rod Paige said . . . that the National Education Association . . . was like “a terrorist organization” because of the way it was resisting many provisions of a school improvement law . . . Secretary Paige “was trying to point out that one reason it’s been so difficult to execute real reform is that a lot of people in teachers’ unions are trying to protect the status quo.” (Guddemi & Case, 2004)

Since the early 1980s, then, two different groups have been using the language of human capital to argue different points. Many educators still use the language of education as an investment to argue for increased spending on public schools, on the assumption that money invested will be returned in the form of financial progress. The other group, increasingly ascendant in recent decades, focuses on output and on the need for constant assessment of whether society’s money is being wisely invested in public schools. Many members of this latter group assume that US schools have already failed, citing both scores on international comparison tests and the changing role of the United States in the world economy as evidence. As remedies, they generally reject increased spending in favor of tighter controls over how the money already invested is spent.

The passage of the No Child Left Behind bill in 2002, with its litany of repeated calls for accountability and for controls over what teachers are doing in the classroom and what students are learning, is often cited as the beginning of the era of accountability. McGuinn (2006) argues that 2002 was the year that the federal focus on education shifted from an emphasis on “input” to an emphasis on “output,” or control over what was being done with money spent. However, looking at public discourse on education over the past 30 years, we can see a steady increase in focus on federal control over education, reflected in federal policy that has progressively tightened control over educational output, or practice. President George H. W. Bush first began calling for national education standards and assessment measures, although he was unable to build the political consensus needed to pass legislation requiring them. The Clinton administration continued to support the idea of increased investment but conceded that it should be linked to “accountability” measures to appease those who worried that the educational system was not performing well. Clinton’s Goals 2000 plan tied federal funding to voluntary compliance with national standards and accountability measures. By 2002, there was bipartisan consensus on tying federal funding for education to a requirement for standards-based assessment—a requirement that made it possible to pass the No Child Left Behind law with strong bipartisan support. The requirements for assessment and evidence of “adequate yearly progress” made this legislation different from all previous federal education bills (McGuinn, 2006). During President Obama’s term in office, the government has tied educational funding under the Race to the Top program to states’ adoption of the “Common Core Standards”—national curricular standards and standardized evaluations linked to those standards.
Theoretically, these standards are voluntary, but they constitute the largest single input into a point-based rubric for obtaining federal grant money. The grant money itself is mainly targeted at stronger systems of data collection, accountability, and incentives for teachers whose students perform well on standardized tests (North Carolina Institute for Constitutional Law, 2013). What we see in this trajectory is a steady increase in the influence of those who worry about the effectiveness and global competitiveness of the US educational system, based on the assumption that the economy is largely dependent on the nation’s educational system. These worries are reflected in a steady increase in government supervision and control of public schools through ever more prevalent and standardized assessment systems.
