The origin of the African environmental crisis hypothesis
The hypothesis of Africa’s environmental crisis emerged from various global and scientific theories (Chapter 4). The perceived apocalyptic environmental discourse that motivated the application of imperial science65 had its origins in a global doctrine borrowed from experiences in the midwest rangelands of the United States.66 After the cattle boom of the 1880s, severe rangeland degradation and soil erosion had reached the level of an environmental disaster by the 1930s—the phenomenon referred to as the ‘dust bowl.’ The overarching drivers of the crisis were multi-year droughts and the ploughing up of the prairie grasslands with heavy machinery, which exposed the soil to wind erosion.67 Donald Worster68 suggests that the changes presented ‘irresistible’ questions as to whether grazing or environmental drivers were the causal factors—an aspect that required close scrutiny by environmental historians. A ‘dust bowl’ phenomenon was anticipated in Africa, and agricultural policies were developed with the intention of forestalling the problem.69
In addition, around the time of the ‘dust bowl’ phenomenon, agronomic research had produced influential theories of ecological change. Frederic Clements from the University of Nebraska developed an ecological theory that predicted how vegetation might respond to grazing pressure by livestock, with the trajectory of change being predicted along a singular pathway until it reaches a climax.70 According to this succession-climax theory, at the climax stage vegetation is most productive and vegetation communities are stable within climatic limits. Arthur Sampson71 then expanded on Clements’ theory, advocating the regulation of grazing pressure in order to hold the stages of vegetation succession at optimal rangeland production levels, thus enabling better livestock performance. Accordingly, range managers would maintain succession at the desired sub-climax levels by adjusting stocking rates.72 During the colonial period in Africa, ‘succession’ theory (hereafter referred to as the equilibrium model) was used as the dominant model for interpreting rangeland degradation: when vegetation shifted away from the trajectory toward a hypothetical ‘climax,’ the vegetation communities were claimed to have become degraded.73 However, the equilibrium model ignores natural variability, which causes grazing lands in drier environments to behave in disequilibrium, in contrast to the temperate environments where the theory was first developed.74
In Africa, the presumed causal factors of the crisis are multiple. In addition to ecological factors, a common claim has been that increasing population growth contributed to environmental degradation. Yet we know that in the late nineteenth century the African population had collapsed due to epidemics, famine, warfare and slavery, and that population growth did not recover sufficiently until the late 1930s, when it again declined due to famine.75 Additionally, there is no evidence that pre-colonial African environments were degraded—or that soil erosion and gully formation had occurred on the scale described during the colonial period. Indeed, Kate Showers76 reports that ‘gully erosion was unknown in [the Kingdom of Lesotho] by the 1830s,’ but gullies were reported from the 1890s onwards. Even then, reports of gully erosion emerged despite evidence of healthy livestock and agricultural production. In other cases, colonial reports associated soil erosion with the smelting of iron by African societies—a far-fetched argument that Paul Lane77 dismisses as merely ‘shifting blame from one set of actors to another.’ Given the energy efficiency of indigenous metal smelting, it is unlikely that large forest areas were cleared for the purpose (clearance which, in turn, could have contributed to soil erosion).
The underlying assumption of the environmental crisis hypothesis was that the main drivers of environmental degradation were sociological factors, as opposed to ecological ones alone. This view was influenced by a controversial development theory first published by Melville Herskovits in 1926, called the ‘cattle complex.’ Herskovits proposed that pastoralists have a behavioral attachment to their livestock. According to his theory, African herders—by accumulating large herds on the rangelands—inadvertently induced environmental degradation.78 The ‘cattle complex’ theory was most pronounced from the 1930s, when global discourses of environmental degradation were framed in relation to the ‘dust bowl’ in the American Central Plains,79 and again from the 1940s through to the 1960s, when desertification became a major environmental issue in Africa.80 It was in this context that Elspeth Huxley warned: ‘if man continues to follow the same destructive course that he has done in the United States and is already doing today in Africa, there can be little doubt but that the soil fertility will decline rapidly and irrevocably.’81 That warning became a major point of reference for ongoing research and development in Africa82—in particular where planning drew on sociological theories of development.
Expanding on the ideas of Melville Herskovits, Garrett Hardin83 developed his theory of the ‘tragedy of the commons,’ which significantly influenced the privatization of communal grazing lands. The theory argued that communal resources in general, and common pastures in particular, encouraged individuals to add more stock to their herds,84 leading to overgrazing and degradation. The argument was that the rangelands in Africa carried more stock than their acceptable ‘carrying capacities’—which, it was claimed, were fixed for individual rangelands. To tackle the problem, colonial officials advocated the forceful destocking of pastoralist herds and the establishment of large-scale soil conservation programs.85
Unfortunately, imperial science failed to appreciate that the productivity of African rangelands fluctuates between periods of high and low rainfall. In wet years, greater volumes of forage are produced, resulting in surpluses; in dry years, the carrying capacities of the same rangelands decline drastically. Consequently, the controlling factor is not grazer populations but climate variability—evidence that rangeland production in arid and semi-arid African environments has always been dynamic, unstable and fluctuating.86 We argue that the historical literature that inferred widespread degradation from large pastoral herds87 suffered from this misreading of the ecology of African rangelands. Accordingly, long-held predictions of the imminent collapse of traditional pastoral production due to deteriorating environments never materialized, and any disasters that did occur can be ascribed to different causes.88 Although ecological consequences were not a deliberate goal of the economic and environmental engineering schemes, development planning in this respect proved inadequate.89 Thus, researchers would have made better progress had they considered what Marian Chertow and Daniel Esty90 call ‘thinking ecologically.’ Rather than improving indigenous systems of land use, development aggravated the environmental situation91—the outcome described as the African environmental crisis. We next examine experimental and social science research testing the African environmental crisis hypothesis.