Goldback: A Perfect Complement to Cryptos
Main Page: https://www.goldback.com/
Folks, as I have paid my “Money” debt to you at:
I have pointed out that “Money” is just the “Medium of Exchange,” which requires nothing but a voluntary agreement between/among people to use “something” to exchange the goods and services they need, want, or simply fancy! And through practical experience, such a “something MOE” will be modified, changed, or even replaced with “something new” that offers more freedom, convenience, and efficiency to serve people’s needs and wants. In this respect, gold has been going through a profound transformation in its divisibility, making it a perfect complement to modern cryptos in forming a truly FREE (as in free speech) medium-of-exchange system!
In a nutshell, it’s all our human creation and perception. And after all, at the bottom of this matter, whatever supports and sustains life with freedom is the MOST VALUABLE of ALL.
To put it further, just take this (not so) “extreme” case for the sake of argument. If you are self-sufficient (you needn’t be a stoical, self-sufficient person :-)), you don’t need such a “medium of exchange” (money) at all, do you?
This, IMHO, mostly explains why, in the early stages of existence, our ancestors did not need a physical “medium of exchange” but rather “credit” to interact, socialize, and support one another, since they did not have many “needs” in such a natural life, let alone “wants,” as David Graeber showed in his book “Debt: The First 5,000 Years.” So our ancestors did not create “money” until … well, the rest is “history!”
You can download and read in PDF some of David Graeber’s books here:
https://www.mediafire.com/folder/xc7hhox2wr7r9/David_Graeber
Speaking of “us”: before he left us, David Graeber had just finished his last book with his colleague David Wengrow, “The Dawn of Everything: A New History of Humanity,” in which he exposed and debunked quite a bunch of the bullshit and make-believe in our current so-called social-science academy, which has been bullshitting and gaslighting us about … us, this humankind, our humanity, ever since!
At any rate, please find time to read the late anthropologist David Graeber’s books, folks!
Read them and form your own opinion and come to your own conclusion.
As always, the last word is all yours.
APPENDIX
This is an excerpt: the “Conclusion,” the last chapter of “The Dawn of Everything: A New History of Humanity.”
You can listen to the audio file I created with Natural Reader 14.
12. Conclusion: The Dawn of Everything
...
This book began with an appeal to ask better questions. We started out by observing that to inquire after the origins of inequality necessarily means creating a myth, a fall from grace, a technological transposition of the first chapters of the Book of Genesis – which, in most contemporary versions, takes the form of a mythical narrative stripped of any prospect of redemption. In these accounts, the best we humans can hope for is some modest tinkering with our inherently squalid condition – and hopefully, dramatic action to prevent any looming, absolute disaster. The only other theory on offer to date has been to assume that there were no origins of inequality, because humans are naturally somewhat thuggish creatures and our beginnings were a miserable, violent affair; in which case ‘progress’ or ‘civilization’ – driven forward, largely, by our own selfish and competitive nature – was itself redemptive. This view is extremely popular among billionaires but holds little appeal to anyone else, including scientists, who are keenly aware that it isn’t in accord with the facts.
It’s hardly surprising, perhaps, that most people feel a spontaneous affinity with the tragic version of the story, and not just because of its scriptural roots. The more rosy, optimistic narrative – whereby the progress of Western civilization inevitably makes everyone happier, wealthier and more secure – has at least one obvious disadvantage. It fails to explain why that civilization did not simply spread of its own accord; that is, why European powers should have been obliged to spend the last 500 or so years aiming guns at people’s heads in order to force them to adopt it. (Also, if being in a ‘savage’ state was so inherently miserable, why so many of those same Westerners, given an informed choice, were so eager to defect to it at the earliest opportunity.) During the nineteenth-century heyday of European imperialism, everyone seemed more keenly aware of this. While we remember that age as one of naive faith in ‘the inevitable march of progress’, liberal, Turgot-style progress was actually never really the dominant narrative in Victorian social theory, let alone political thought.
In fact, European statesmen and intellectuals of that time were just as likely to be obsessed with the dangers of decadence and disintegration. Many were overt racists who held that most humans are not capable of progress, and therefore looked forward to their physical extermination.
Even those who did not share such views tended to feel that Enlightenment schemes for improving the human condition had been catastrophically naive. Social theory, as we know it today, emerged largely from the ranks of such reactionary thinkers, who – looking back over their shoulders at the turbulent consequences of the French Revolution – were less concerned with disasters being visited on peoples overseas than on growing misery and public unrest at home. As a result, the social sciences were conceived and organized around two core questions: (1) what had gone wrong with the project of Enlightenment, with the unity of scientific and moral progress, and with schemes for the improvement of human society? And: (2) why is it that well-meaning attempts to fix society’s problems so often end up making things even worse? Why, these conservative thinkers asked, did it prove so difficult for Enlightenment revolutionaries to put their ideas into practice? Why couldn’t we just imagine a more rational social order and then legislate it into existence? Why did the passion for liberty, equality and fraternity end up producing the Terror? There must surely be some underlying reasons.
If nothing else, these preoccupations help to explain the continued relevance of an otherwise not particularly successful eighteenth-century Swiss musician named Jean-Jacques Rousseau. Those primarily concerned with the first question saw him as the first to ask it in a quintessentially modern way. Those mainly concerned with the second were able to represent him as the ultimate clueless villain, a simple-minded revolutionary who felt that the established order, being irrational, could simply be brushed aside. Many held Rousseau personally responsible for the guillotine. By contrast, few nowadays read the ‘traditionalists’ of the nineteenth century, but they’re actually important since it is they, not the Enlightenment philosophes, who are really responsible for modern social theory. It’s long been recognized that almost all the great issues of modern social science – tradition, solidarity, authority, status, alienation, the sacred – were first raised in the works of men like the theocratic Vicomte de Bonald, the monarchist Comte de Maistre, or the Whig politician and philosopher Edmund Burke as examples of the kind of stubborn social realities which they felt that Enlightenment thinkers, and Rousseau in particular, had refused to take seriously, with (they insisted) disastrous results.
These nineteenth-century debates between radicals and reactionaries never really ended; they keep resurfacing in different forms. Nowadays, for instance, those on the right are more likely to see themselves as defenders of Enlightenment values, and those on the left its most ardent critics. But over the course of the argument all parties have come to agree on one key point: that there was indeed something called ‘the Enlightenment’, that it marked a fundamental break in human history, and that the American and French Revolutions were in some sense the result of this rupture. The Enlightenment is seen as introducing a possibility that had simply not existed before: that of self-conscious projects for reshaping society in accord with some rational ideal. That is, of genuine revolutionary politics. Obviously, insurrections and visionary movements had existed before the eighteenth century. No one could deny that. But such pre-Enlightenment social movements could now largely be dismissed as so many examples of people insisting on a return to certain ‘ancient ways’ (that they had often just made up), or else claiming to act on a vision from God (or the local equivalent).
Pre-Enlightenment societies, or so this argument goes, were ‘traditional’ societies, founded on community, status, authority and the sacred. They were societies in which human beings did not ultimately act for themselves, individually or collectively. Rather, they were slaves of custom; or, at best, agents of inexorable social forces which they projected on to the cosmos in the form of gods, ancestors or other supernatural powers. Supposedly, only modern, post-Enlightenment people had the capacity to self-consciously intervene in history and change its course; on this everyone suddenly seemed to agree, no matter how virulently they might disagree about whether it was a good idea to do so.
All this might seem a bit of a caricature, and only a minority of authors were willing to state matters quite so bluntly. Yet most modern thinkers have clearly found it bizarre to attribute self-conscious social projects or historical designs to people of earlier epochs. Generally, such ‘non-modern’ folk were considered too simple-minded (not having achieved ‘social complexity’); or to be living in a kind of mystical dreamworld; or, at best, were thought to be simply adapting themselves to their environment at an appropriate level of technology. Anthropology, it must be confessed, did not play a stellar role here.
For much of the twentieth century, anthropologists tended to describe the societies they studied in ahistorical terms, as living in a kind of eternal present. Some of this was an effect of the colonial situation under which much ethnographic research was carried out. The British Empire, for instance, maintained a system of indirect rule in various parts of Africa, India and the Middle East where local institutions like royal courts, earth shrines, associations of clan elders, men’s houses and the like were maintained in place, indeed fixed by legislation. Major political change – forming a political party, say, or leading a prophetic movement – was in turn entirely illegal, and anyone who tried to do such things was likely to be put in prison. This obviously made it easier to describe the people anthropologists studied as having a way of life that was timeless and unchanging.
Since historical events are by definition unpredictable, it seemed more scientific to study those phenomena one could in fact predict: the things that kept happening, over and over, in roughly the same way. In a Senegalese or Burmese village this might mean describing the daily round, seasonal cycles, rites of passage, patterns of dynastic succession, or the growing and splitting of villages, always emphasizing how the same structure ultimately endured. Anthropologists wrote this way because they considered themselves scientists (‘structural-functionalists’, in the jargon of the day). In doing so they made it much easier for those reading their descriptions to imagine that the people being studied were quite the opposite of scientists: that they were trapped in a mythological universe where nothing changed and very little really happened. When Mircea Eliade, the great Romanian historian of religion, proposed that ‘traditional’ societies lived in ‘cyclical time’, innocent of history, he was simply drawing the obvious conclusion. As a matter of fact, he went even further.
In traditional societies, according to Eliade, everything important has already happened. All the great founding gestures go back to mythic times, the illo tempore,¹ the dawn of everything, when animals could talk or turn into humans, sky and earth were not yet separated, and it was possible to create genuinely new things (marriage, or cooking, or war). People living in this mental world, he felt, saw their own actions as simply repeating the creative gestures of gods and ancestors in less powerful ways, or as invoking primordial powers through ritual. According to Eliade, historical events thus tended to merge into archetypes. If anyone in what he considered a traditional society does do something remarkable – establishes or destroys a city, creates a unique piece of music – the deed will eventually end up being attributed to some mythic figure anyway. The alternative notion, that history is actually going somewhere (the Last Days, Judgment, Redemption), is what Eliade referred to as ‘linear time’, in which historical events take on significance in relation to the future, not just the past.
And this ‘linear’ sense of time, Eliade insisted, was a relatively recent innovation in human thought, one with catastrophic social and psychological consequences. In his view, embracing the notion that events unfold in cumulative sequences, as opposed to recapitulating some deeper pattern, rendered us less able to weather the vicissitudes of war, injustice and misfortune, plunging us instead into an age of unprecedented anxiety and, ultimately, nihilism. The political implications of this position were, to say the least, unsettling. Eliade himself had been close to the fascist Iron Guard in his student days, and his basic argument was that the ‘terror of history’ (as he sometimes called it) was introduced by Judaism and the Old Testament – which he saw as paving the way for the further disasters of Enlightenment thought. Being Jewish, the authors of the present book don’t particularly appreciate the suggestion that we are somehow to blame for everything that went wrong in history. Still, for present purposes, what’s startling is that anyone ever took this sort of argument seriously.
Imagine we tried applying Eliade’s distinction between ‘historical’ and ‘traditional’ societies to the full scope of the human past, on the sort of scale we’ve been covering in the preceding chapters. Would this not have to mean that most of history’s great discoveries – for example the first weaving of fabrics, or the first navigations of the Pacific Ocean, or the invention of metallurgy – were made by people who didn’t believe in discovery or in history? This seems unlikely. The only alternative would be to argue that most human societies only became ‘traditional’ more recently: perhaps they each eventually found a state of equilibrium, settled into it and all came up with a shared ideological framework to justify their newfound condition. Which would mean there actually was some kind of previous illo tempore or time of creation, when all humans were capable of thinking and acting in the kind of highly creative ways we now consider quintessentially modern; one of their major achievements apparently being to find a way of abolishing most future prospects of innovation.
Both positions are, self-evidently, quite absurd.
Why are we entertaining such ideas? Why does it seem so odd, even counter-intuitive, to imagine people of the remote past as making their own history (even if not under conditions of their own choosing)? Part of the answer no doubt lies in how we have come to define science itself, and social science in particular.
Social science has been largely a study of the ways in which human beings are not free: the way that our actions and understandings might be said to be determined by forces outside our control. Any account which appears to show human beings collectively shaping their own destiny, or even expressing freedom for its own sake, will likely be written off as illusory, awaiting ‘real’ scientific explanation; or if none is forthcoming (why do people dance?), as outside the scope of social theory entirely. This is one reason why most ‘big histories’ place such a strong focus on technology. Dividing up the human past according to the primary material from which tools and weapons were made (Stone Age, Bronze Age, Iron Age) or else describing it as a series of revolutionary breakthroughs (Agricultural Revolution, Urban Revolution, Industrial Revolution), they then assume that the technologies themselves largely determine the shape that human societies will take for centuries to come – or at least until the next abrupt and unexpected breakthrough comes along to change everything again.
Now, we are hardly about to deny that technologies play an important role in shaping society. Obviously, technologies are important: each new invention opens up social possibilities that had not existed before. At the same time, it’s very easy to overstate the importance of new technologies in setting the overall direction of social change. To take an obvious example, the fact that Teotihuacanos or Tlaxcalteca employed stone tools to build and maintain their cities, while the inhabitants of Mohenjo-daro or Knossos used metal, seems to have made surprisingly little difference to those cities’ internal organization or even size. Nor does our evidence support the notion that major innovations always occur in sudden, revolutionary bursts, transforming everything in their wake. (This, as you’ll recall, was one of the main points to emerge from the two chapters we devoted to the origins of farming.) Nobody, of course, claims that the beginnings of agriculture were anything quite like, say, the invention of the steam-powered loom or the electric light bulb. We can be fairly certain there was no Neolithic equivalent of Edmund Cartwright or Thomas Edison, who came up with the conceptual breakthrough that set everything in motion. Still, it often seems difficult for contemporary writers to resist the idea that some sort of similarly dramatic break with the past must have occurred. In fact, as we’ve seen, what actually took place was nothing like that. Instead of some male genius realizing his solitary vision, innovation in Neolithic societies was based on a collective body of knowledge accumulated over centuries, largely by women, in an endless series of apparently humble but in fact enormously significant discoveries. Many of those Neolithic discoveries had the cumulative effect of reshaping everyday life every bit as profoundly as the automatic loom or lightbulb.
Every time we sit down to breakfast, we are likely to be benefiting from a dozen such prehistoric inventions. Who was the first person to figure out that you could make bread rise by the addition of those microorganisms we call yeasts? We have no idea, but we can be almost certain she was a woman and would most likely not be considered ‘white’ if she tried to immigrate to a European country today; and we definitely know her achievement continues to enrich the lives of billions of people. What we also know is that such discoveries were, again, based on centuries of accumulated knowledge and experimentation – recall how the basic principles of agriculture were known long before anyone applied them systematically – and that the results of such experiments were often preserved and transmitted through ritual, games and forms of play (or even more, perhaps, at the point where ritual, games and play shade into each other).
‘Gardens of Adonis’ are a fitting symbol here. Knowledge about the nutritious properties and growth cycles of what would later become staple crops, feeding vast populations – wheat, rice, corn – was initially maintained through ritual play farming of exactly this sort. Nor was this pattern of discovery limited to crops. Ceramics were first invented, long before the Neolithic, to make figurines, miniature models of animals and other subjects, and only later cooking and storage vessels. Mining is first attested as a way of obtaining minerals to be used as pigments, with the extraction of metals for industrial use coming only much later.
Mesoamerican societies never employed wheeled transport; but we know they were familiar with spokes, wheels and axles since they made toy versions of them for children. Greek scientists famously came up with the principle of the steam engine, but only employed it to make temple doors that appeared to open of their own accord, or similar theatrical illusions.
Chinese scientists, equally famously, first employed gunpowder for fireworks.
For most of history, then, the zone of ritual play constituted both a scientific laboratory and, for any given society, a repertory of knowledge and techniques which might or might not be applied to pragmatic problems.
Recall, for example, the ‘Little Old Men’ of the Osage and how they combined research and speculation on the principles of nature with the management and periodic reform of their constitutional order; how they saw these as ultimately the same project and kept careful (oral) records of their deliberations. Did the Neolithic town of Çatalhöyük or the Tripolye mega-sites host similar colleges of ‘Little Old Women’? We cannot know for certain, but it strikes us as quite likely, given the shared rhythms of social and technical innovation that we observe in each case and the attention to female themes in their art and ritual. If we are trying to frame more interesting questions to ask of history, this might be one: is there a positive correlation between what is usually called ‘gender equality’ (which might better be termed, simply, ‘women’s freedom’) and the degree of innovation in a given society? Choosing to describe history the other way round, as a series of abrupt technological revolutions, each followed by long periods when we were prisoners of our own creations, has consequences. Ultimately it is a way of representing our species as decidedly less thoughtful, less creative, less free than we actually turn out to have been. It means not describing history as a continual series of new ideas and innovations, technical or otherwise, during which different communities made collective decisions about which technologies they saw fit to apply to everyday purposes, and which to keep confined to the domain of experimentation or ritual play. What is true of technological creativity is, of course, even more true of social creativity.
One of the most striking patterns we discovered while researching this book – indeed, one of the patterns that felt most like a genuine breakthrough to us – was how, time and again in human history, that zone of ritual play has also acted as a site of social experimentation – even, in some ways, as an encyclopaedia of social possibilities.
We are not the first to suggest this. In the mid twentieth century, a British anthropologist named A. M. Hocart proposed that monarchy and institutions of government were originally derived from rituals designed to channel powers of life from the cosmos into human society. He even suggested at one point that ‘the first kings must have been dead kings’,² and that individuals so honoured only really became sacred rulers at their funerals. Hocart was considered an oddball by his fellow anthropologists and never managed to secure a permanent job at a major university. Many accused him of being unscientific, just engaging in idle speculation.
Ironically, as we’ve seen, it is the results of contemporary archaeological science that now oblige us to start taking his speculations seriously. To the astonishment of many, but much as Hocart predicted, the Upper Palaeolithic really has produced evidence of grand burials, carefully staged for individuals who indeed seem to have attracted spectacular riches and honours, largely in death.
The principle doesn’t just apply to monarchy or aristocracy, but to other institutions as well. We have made the case that private property first appears as a concept in sacred contexts, as do police functions and powers of command, along with (in later times) a whole panoply of formal democratic procedures, like election and sortition, which were eventually deployed to limit such powers.
Here is where things get complicated. To say that, for most of human history, the ritual year served as a kind of compendium of social possibilities (as it did in the European Middle Ages, for instance, when hierarchical pageants alternated with rambunctious carnivals), doesn’t really do the matter justice. This is because festivals are already seen as extraordinary, somewhat unreal, or at the very least as departures from the everyday order. Whereas, in fact, the evidence we have from Palaeolithic times onwards suggests that many – perhaps even most – people did not merely imagine or enact different social orders at different times of year, but actually lived in them for extended periods of time. The contrast with our present situation could not be more stark. Nowadays, most of us find it increasingly difficult even to picture what an alternative economic or social order would be like. Our distant ancestors seem, by contrast, to have moved regularly back and forth between them.
If something did go terribly wrong in human history – and given the current state of the world, it’s hard to deny something did – then perhaps it began to go wrong precisely when people started losing that freedom to imagine and enact other forms of social existence, to such a degree that some now feel this particular type of freedom hardly even existed, or was barely exercised, for the greater part of human history. Even those few anthropologists, such as Pierre Clastres and later Christopher Boehm, who argue that humans were always able to imagine alternative social possibilities, conclude – rather oddly – that for roughly 95 per cent of our species’ history those same humans recoiled in horror from all possible social worlds but one: the small-scale society of equals. Our only dreams were nightmares: terrible visions of hierarchy, domination and the state. In fact, as we’ve seen, this is clearly not the case.
The example of Eastern Woodlands societies in North America, explored in our last chapter, suggests a more useful way to frame the problem. We might ask why, for example, it proved possible for their ancestors to turn their backs on the legacy of Cahokia, with its overweening lords and priests, and to reorganize themselves into free republics; yet when their French interlocutors effectively tried to follow suit and rid themselves of their own ancient hierarchies, the result seemed so disastrous. No doubt there are quite a number of reasons. But for us, the key point to remember is that we are not talking here about ‘freedom’ as an abstract ideal or formal principle (as in ‘Liberty, Equality and Fraternity!’).³ Over the course of these chapters we have instead talked about basic forms of social liberty which one might actually put into practice: (1) the freedom to move away or relocate from one’s surroundings; (2) the freedom to ignore or disobey commands issued by others; and (3) the freedom to shape entirely new social realities, or shift back and forth between different ones.
What we can now see is that the first two freedoms – to relocate, and to disobey commands – often acted as a kind of scaffolding for the third, more creative one. Let us clarify some of the ways in which this ‘propping-up’ of the third freedom actually worked. As long as the first two freedoms were taken for granted, as they were in many North American societies when Europeans first encountered them, the only kings that could exist were always, in the last resort, play kings. If they overstepped the line, their erstwhile subjects could always ignore them or move someplace else. The same would go for any other hierarchy of offices or system of authority.
Similarly, a police force that operated for only three months of the year, and whose membership rotated annually, was in a certain sense a play police force – which makes it slightly less bizarre that their members were sometimes recruited directly from the ranks of ritual clowns.⁴ It’s clear that something about human societies really has changed here, and quite profoundly. The three basic freedoms have gradually receded, to the point where a majority of people living today can barely comprehend what it might be like to live in a social order based on them.
How did it happen? How did we get stuck? And just how stuck are we really? ‘There is no way out of the imagined order,’ writes Yuval Noah Harari in his book Sapiens. ‘When we break down our prison walls and run towards freedom’, he goes on, ‘we are in fact running into the more spacious exercise yard of a bigger prison.’⁵ As we saw in our first chapter, he is not alone in reaching this conclusion. Most people who write history on a grand scale seem to have decided that, as a species, we are well and truly stuck and there is really no escape from the institutional cages we’ve made for ourselves. Harari, once again echoing Rousseau, seems to have captured the prevailing mood.
We’ll come back to this point, but for now we want to think a bit further about this first question: how did it happen? To some degree this must remain a matter for speculation. Asking the right questions may eventually sharpen our understanding, but for now the material at our disposal, especially for the early phases of the process, is still too sparse and ambiguous to provide definitive answers. The most we can offer are some preliminary suggestions, or points of departure, based on the arguments presented in this book; and perhaps we can also begin to see more clearly where others since the time of Rousseau have been going wrong.
One important factor would seem to be the gradual division of human societies into what are sometimes referred to as ‘culture areas’; that is, the process by which neighbouring groups began defining themselves against each other and, typically, exaggerating their differences. Identity came to be seen as a value in itself, setting in motion processes of cultural schismogenesis. As we saw in the case of Californian foragers and their aristocratic neighbours on the Northwest Coast, such acts of cultural refusal could also be self-conscious acts of political contestation, marking the boundary (in this case) between societies where inter-group warfare, competitive feasting and household bondage were rejected – as in those parts of Aboriginal California closest to the Northwest Coast – and where they were accepted, even celebrated, as quintessential features of social life.
Archaeologists, taking a longer view, see a proliferation of such regional culture areas, especially from the end of the last Ice Age on, but are often at a loss to explain why they emerged or what constitutes a boundary between them.
Still, this appears to have been an epochal development. Recall, for example, how post-Ice Age hunter-gatherers, especially in coastal or woodland regions, were enjoying something of a Golden Age. There appear to have been all sorts of local experiments, reflected in a proliferation of opulent burials and monumental architecture, the social functions of which often remain enigmatic: from shell-built ‘amphitheatres’ along the Gulf of Mexico to the great storehouses of Sannai Maruyama in Jōmon Japan, or the so-called ‘Giants’ Churches’ of the Bothnian Sea. It is among such Mesolithic populations that we often find not just the multiplication of distinct culture areas, but also the first clear archaeological indications of communities divided into permanent ranks, sometimes accompanied by interpersonal violence, even warfare. In some cases this may already have meant the stratification of households into aristocrats, commoners and slaves. In others, quite different forms of hierarchy may have taken root. Some appear to have become, effectively, fixed in place.
The role of warfare warrants further discussion here, because violence is often the route by which forms of play take on more permanent features.
For example, the kingdoms of the Natchez or Shilluk might have been largely theatrical affairs, their rulers unable to issue orders that would be obeyed even a mile or two away; but if someone was arbitrarily killed as part of a theatrical display, that person remained definitively dead even after the performance was over. It’s an almost absurdly obvious point to make, but it matters. Play kings cease to be play kings precisely when they start killing people; which perhaps also helps to explain the excesses of ritually sanctioned violence that so often ensued during transitions from one state to the other. The same is true of warfare. As Elaine Scarry points out, two communities might choose to resolve a dispute by partaking in a contest, and often they do; but the ultimate difference between war (or ‘contests of injuring’, as she puts it) and most other kinds of contest is that anyone killed or disfigured in a war remains so, even after the contest ends.⁶ Still, we must be cautious. While human beings have always been capable of physically attacking one another (and it’s difficult to find examples of societies where no one ever attacks anyone else, under any circumstances), there’s no actual reason to assume that war has always existed. Technically, war refers not just to organized violence but to a kind of contest between two clearly demarcated sides. As Raymond Kelly has adroitly pointed out, it’s based on a logical principle that’s by no means natural or self-evident, which states that major violence involves two teams, and any member of one team treats all members of the other as equal targets. Kelly calls this the principle of ‘social substitutability’⁷ – that is, if a Hatfield kills a McCoy and the McCoys retaliate, it doesn’t have to be against the actual murderer; any Hatfield is fair game. In the same way, if there is a war between France and Germany, any French soldier can kill any German soldier, and vice versa. 
The murder of entire populations is simply taking this same logic one step further. There is nothing particularly primordial about such arrangements; certainly, there is no reason to believe they are in any sense hardwired into the human psyche. On the contrary, it’s almost invariably necessary to employ some combination of ritual, drugs and psychological techniques to convince people, even adolescent males, to kill and injure each other in such systematic yet indiscriminate ways.
It would seem that for most of human history, no one saw much reason to do such things; or if they did, it was rare. Systematic studies of the Palaeolithic record offer little evidence of warfare in this specific sense.⁸ Moreover, since war was always something of a game, it’s not entirely surprising that it has manifested itself in sometimes more theatrical and sometimes more deadly variations. Ethnography provides plenty of examples of what could best be described as play war: either with non-deadly weapons or, more often, battles involving thousands on each side where the number of casualties after a day’s ‘fighting’ amounts to perhaps two or three. Even in Homeric-style warfare, most participants were basically there as an audience while individual heroes taunted, jeered and occasionally threw javelins or shot arrows at one another, or engaged in duels. At the other extreme, as we’ve seen, there is an increasing amount of archaeological evidence for outright massacres, such as those that took place among Neolithic village dwellers in central Europe after the end of the last Ice Age.
What strikes us is just how uneven such evidence is. Periods of intense inter-group violence alternate with periods of peace, often lasting centuries, in which there is little or no evidence for destructive conflict of any kind.
War did not become a constant of human life after the adoption of farming; indeed, long periods of time exist in which it appears to have been successfully abolished. Yet it had a stubborn tendency to reappear, if only many generations later. At this point another new question comes into focus. Was there a relationship between external warfare and the internal loss of freedoms that opened the way, first to systems of ranking and then later on to large-scale systems of domination, like those we discussed in the later chapters of this book: the first dynastic kingdoms and empires, such as those of the Maya, Shang or Inca? And if so, how direct was this correlation? One thing we’ve learned is that it’s a mistake to begin answering such questions by assuming that these ancient polities were simply archaic versions of our modern states.
The state, as we know it today, results from a distinct combination of elements – sovereignty, bureaucracy and a competitive political field – which have entirely separate origins. In our thought experiment of two chapters ago, we showed how those elements map directly on to basic forms of social power which can operate at any scale of human interaction, from the family or household all the way up to the Roman Empire or the super-kingdom of Tawantinsuyu. Sovereignty, bureaucracy and politics are magnifications of elementary types of domination, grounded respectively in the use of violence, knowledge and charisma. Ancient political systems – especially those, such as the Olmec or Chavín de Huántar, that elude definition in terms of ‘chiefdoms’ and ‘states’ – can often be understood better in terms of how they developed one axis of social power to an extraordinary degree (e.g. charismatic political contests and spectacles in the Olmec case, or control of esoteric knowledge in Chavín). These are what we termed ‘first-order regimes’.
Where two axes of power were developed and formalized into a single system of domination we can begin to talk of ‘second-order regimes’. The architects of Egypt’s Old Kingdom, for example, armed the principle of sovereignty with a bureaucracy and managed to extend it across a large territory. By contrast, the rulers of ancient Mesopotamian city-states made no direct claims to sovereignty, which for them resided properly in heaven.
When they engaged in wars over land or irrigation systems, it was only as secondary agents of the gods. Instead they combined charismatic competition with a highly developed administrative order. The Classic Maya were different again, confining administrative activities largely to the monitoring of cosmic affairs, while basing their earthly power on a potent fusion of sovereignty and inter-dynastic politics.
Insofar as these and other polities commonly regarded as ‘early states’ (Shang China, for instance) really share any common features, they seem to lie in altogether different areas – which brings us back to the question of warfare, and the loss of freedoms within society. All of them deployed spectacular violence at the pinnacle of the system (whether that violence was conceived as a direct extension of royal sovereignty or carried out at the behest of divinities); and all to some degree modelled their centres of power – the court or palace – on the organization of patriarchal households. Is this merely a coincidence? On reflection, the same combination of features can be found in most later kingdoms or empires, such as the Han, Aztec or Roman. In each case there was a close connection between the patriarchal household and military might. But why exactly should this be the case? The question has proved difficult to answer in all but superficial terms, partly because our own intellectual traditions oblige us to use what is, in effect, imperial language to do so; and the language already implies an explanation, even a justification, for much of what we are really trying to account for here. That is why, in the course of this book, we sometimes felt the need to develop our own, more neutral (dare we say scientific?) list of baseline human freedoms and forms of domination; because existing debates almost invariably begin with terms derived from Roman Law, and for a number of reasons this is problematic.
The Roman Law conception of natural freedom is essentially based on the power of the individual (by implication, a male head of household) to dispose of his property as he sees fit. In Roman Law property isn’t even exactly a right, since rights are negotiated with others and involve mutual obligations; it’s simply power – the blunt reality that someone in possession of a thing can do anything he wants with it, except that which is limited ‘by force or law’. This formulation has some peculiarities that jurists have struggled with ever since, as it implies freedom is essentially a state of primordial exception to the legal order. It also implies that property is not a set of understandings between people over who gets to use or look after things, but rather a relation between a person and an object characterized by absolute power. What does it mean to say one has the natural right to do anything one wants with a hand grenade, say, except those things one isn’t allowed to do? Who would come up with such an odd formulation? An answer is suggested by the West Indian sociologist Orlando Patterson, who points out that Roman Law conceptions of property (and hence of freedom) essentially trace back to slave law.⁹ The reason it is possible to imagine property as a relationship of domination between a person and a thing is because, in Roman Law, the power of the master rendered the slave a thing (res, meaning an object), not a person with social rights or legal obligations to anyone else. Property law, in turn, was largely about the complicated situations that might arise as a result. It is important to recall, for a moment, who these Roman jurists actually were that laid down the basis for our current legal order – our theories of justice, the language of contract and torts, the distinction of public and private and so forth. 
While they spent their public lives making sober judgments as magistrates, they lived their private lives in households where they not only had near-total authority over their wives, children and other dependants, but also had all their needs taken care of by dozens, perhaps hundreds of slaves.
Slaves trimmed their hair, carried their towels, fed their pets, repaired their sandals, played music at their dinner parties and instructed their children in history and maths. At the same time, in terms of legal theory these slaves were classified as captive foreigners who, conquered in battle, had forfeited rights of any kind. As a result, the Roman jurist was free to rape, torture, mutilate or kill any of them at any time and in any way he had a mind to, without the matter being considered anything other than a private affair. (Only under the reign of Tiberius were any restrictions imposed on what a master could do to a slave, and what this meant was simply that permission from a local magistrate had to be obtained before a slave could be ripped apart by wild animals; other forms of execution could still be imposed at the owner’s whim.) On the one hand, freedom and liberty were private affairs; on the other, private life was marked by the absolute power of the patriarch over conquered people who were considered his private property.¹⁰ The fact that most Roman slaves were not prisoners of war, in the literal sense, doesn’t really make much difference here. What’s important is that their legal status was defined in those terms. What is both striking and revealing, for our present purposes, is how in Roman jurisprudence the logic of war – which dictates that enemies are interchangeable, and if they surrendered they could either be killed or rendered ‘socially dead’, sold as commodities – and, therefore, the potential for arbitrary violence was inserted into the most intimate sphere of social relations, including the relations of care that made domestic life possible. Thinking back to examples like the ‘capturing societies’ of Amazonia or the process by which dynastic power took root in ancient Egypt, we can begin to see how important that particular nexus of violence and care has been. 
Rome took the entanglement to new extremes, and its legacy still shapes our basic concepts of social structure.
Our very word ‘family’ shares a root with the Latin famulus, meaning ‘house slave’, via familia, which originally referred to everyone under the domestic authority of a single paterfamilias or male head of household.
Domus, the Latin word for ‘household’, in turn gives us not only ‘domestic’ and ‘domesticated’ but dominium, which was the technical term for the emperor’s sovereignty as well as a citizen’s power over private property. Through that we arrive at (literally, ‘familiar’) notions of what it means to be ‘dominant’, to possess ‘dominion’ and to ‘dominate’. Let us follow this line of thought a little further.
We’ve seen how, in various parts of the world, direct evidence of warfare and massacres – including the carrying-off of captives – can be detected long before the appearance of kingdoms or empires. Much harder to ascertain, for such early periods of history, is what happened to captive enemies: were they killed, incorporated or left suspended somewhere in between? As we learned from various Amerindian cases, things may not always be entirely clear-cut. There were often multiple possibilities. It’s instructive, in this context, to return one last time to the case of the Wendat in the age of Kandiaronk, since this was one society that seemed determined to avoid ambiguity in such matters.
In certain ways Wendat, and Iroquoian societies in general around that time, were extraordinarily warlike. There appear to have been bloody rivalries fought out in many northern parts of the Eastern Woodlands even before European settlers began supplying indigenous factions with muskets, resulting in the ‘Beaver Wars’. The early Jesuits were often appalled by what they saw, but they also noted that the ostensible reasons for wars were entirely different from those they were used to. All Wendat wars were, in fact, ‘mourning wars’, carried out to assuage the grief felt by close relatives of someone who had been killed. Typically, a war party would strike against traditional enemies, bringing back a few scalps and a small number of prisoners. Captive women and children would be adopted. The fate of men was largely up to the mourners, particularly the women, and appeared to outsiders at least to be entirely arbitrary. If the mourners felt it appropriate a male captive might be given a name, even the name of the original victim.
The captive enemy would henceforth become that other person and, after a few years’ trial period, be treated as a full member of society. If for any reason that did not happen, however, he suffered a very different fate. For a male warrior taken prisoner, the only alternative to full adoption into Wendat society was excruciating death by torture.
Jesuits found the details shocking and fascinating. What they observed, sometimes at first hand, was a slow, public and highly theatrical use of violence. True, they conceded, the Wendat torture of captives was no more cruel than the kind directed against enemies of the state back home in France. What seems to have really appalled them, however, was not so much the whipping, boiling, branding, cutting-up – even in some cases cooking and eating – of the enemy, so much as the fact that almost everyone in a Wendat village or town took part, even women and children. The suffering might go on for days, with the victim periodically resuscitated only to endure further ordeals, and it was very much a communal affair.¹¹ The violence seems all the more extraordinary once we recall how these same Wendat societies refused to spank children, directly punish thieves or murderers, or take any measure against their own members that smacked of arbitrary authority. In virtually all other areas of social life they were renowned for solving their problems through calm and reasoned debate.
Now, it would be easy to make an argument that repressed aggression must be vented in one way or another, so that orgies of communal torture are simply the necessary flipside of a non-violent community; and some contemporary scholars do make this point. But it doesn’t really work. In fact, Iroquoia seems to be precisely one of those regions of North America where violence flared up only during certain specific historical periods and then largely disappeared in others. In what archaeologists term the ‘Middle Woodland’ phase, for instance, between 100 BC and AD 500 – corresponding roughly to the heyday of the Hopewell civilization – there seems to have been general peace.¹² Later on, signs of endemic warfare reappear. Clearly, at some points in their history people living in this region found effective ways to ensure that vendettas didn’t escalate into a spiral of retaliation or actual warfare (the Haudenosaunee story of the Great Law of the Peace seems to be about precisely such a moment); at other times, the system broke down and the possibility of sadistic cruelty returned.
What, then, was the meaning of these theatres of violence? One way to approach the question is to compare them with what was happening in Europe around the same time. As the Quebecois historian Denys Delâge points out, Wendat who visited France were equally appalled by the tortures exhibited during public punishments and executions, but what struck them as most remarkable was that ‘the French whipped, hanged, and put to death men from among themselves’, rather than external enemies. The point is telling, as in seventeenth-century Europe, Delâge notes, … almost all punishment, including the death penalty, involved severe physical suffering: wearing an iron collar, being whipped, having a hand cut off, or being branded … It was a ritual that manifested power in a conspicuous way, thereby revealing the existence of an internal war. The sovereign incarnated a superior power that transcended his subjects, one that they were compelled to recognise … While Amerindian cannibal rituals showed the desire to take over the strength and courage of the alien so as to combat him better, the European ritual revealed the existence of a dissymmetry, an irrevocable imbalance of power.¹³ Wendat punitive actions against war captives (those not taken in for adoption) required the community to become a single body, unified by its capacity for violence. In France, by contrast, ‘the people’ were unified as potential victims of the king’s violence. But the contrasts run deeper still.
As a Wendat traveller observed of the French system, anyone – guilty or innocent – might end up being made a public example. Among the Wendat themselves, however, violence was firmly excluded from the realm of family and household. A captive warrior might either be treated with loving care and affection or be the object of the worst treatment imaginable. No middle ground existed. Prisoner sacrifice was not merely about reinforcing the solidarity of the group but also proclaimed the internal sanctity of the family and the domestic realm as spaces of female governance where violence, politics and rule by command did not belong. Wendat households, in other words, were defined in exactly opposite terms to the Roman familia.
In this particular respect, French society under the Ancien Régime presents a rather similar picture to imperial Rome – at least, when both are placed in the light of the Wendat example. In both cases, household and kingdom shared a common model of subordination. Each was made in the other’s image, with the patriarchal family serving as a template for the absolute power of kings, and vice versa.¹⁴ Children were to be submissive to their parents, wives to husbands, and subjects to rulers whose authority came from God. In each case the superior party was expected to inflict stern chastisement when he considered it appropriate: that is, to exercise violence with impunity. All this, moreover, was assumed to be bound up with feelings of love and affection. Ultimately, the house of the Bourbon monarchs – like the palace of an Egyptian pharaoh, Roman emperor, Aztec tlatoani or Sapa Inca – was not merely a structure of domination but also one of care, where a small army of courtiers laboured night and day to attend to the king’s every physical need and prevent him, as much as was humanly possible, from ever feeling anything but divine.
In all these cases, the bonds of violence and care extended downwards as well as upwards. We can do no better than put it in words made famous by King James I of England in The True Law of Free Monarchies (1598): As the father, of his fatherly duty, is to care for the nourishing, education, and virtuous government of his children; even so is the King bound to care for all his subjects … As the father’s wrath and correction on any of his children that offendeth, ought to be a fatherly chastisement seasoned with pity, so long as there is any hope of amendment in them; so ought the King towards any of his lieges that offend in that measure … As the father’s chief joy ought to be in procuring his children’s welfare, rejoicing in their weal, sorrowing and pitying at their evil, to hazard for their safety … so ought a good Prince think of his People.
Public torture, in seventeenth-century Europe, created searing, unforgettable spectacles of pain and suffering in order to convey the message that a system in which husbands could brutalize wives, and parents beat children, was ultimately a form of love. Wendat torture, in the same period of history, created searing, unforgettable spectacles of pain and suffering in order to make clear that no form of physical chastisement should ever be countenanced inside a community or household. Violence and care, in the Wendat case, were to be entirely separated. Seen in this light, the distinctive features of Wendat prisoner torture come into focus.
It seems to us that this connection – or better perhaps, confusion – between care and domination is utterly critical to the larger question of how we lost the ability freely to recreate ourselves by recreating our relations with one another. It is critical, that is, to understanding how we got stuck, and why these days we can hardly envisage our own past or future as anything other than a transition from smaller to larger cages.
In the course of writing this book, we have tried to strike a certain balance. It would be intuitive for an archaeologist and an anthropologist, immersed in our subject matter, to take on all the scholarly views about, say, Stonehenge, the ‘Uruk expansion’ or Iroquoian social organization and explain our preference for one interpretation over another, or venture a different one. This is how the search for truth is normally conducted in the academy. But had we tried to outline or refute every existing interpretation of the material we covered, this book would have been two or three times the size, and likely would have left the reader with a sense that the authors are engaged in a constant battle with demons who were in fact two inches tall. So instead we have tried to map out what we think really happened, and to point out the flaws in other scholars’ arguments only insofar as they seemed to reflect more widespread misconceptions.
Perhaps the most stubborn misconception we’ve been tackling has to do with scale. It does seem to be received wisdom in many quarters, academic and otherwise, that structures of domination are the inevitable result of populations scaling up by orders of magnitude; that is, that a necessary correspondence exists between social and spatial hierarchies. Time and again we found ourselves confronted with writing which simply assumes that the larger and more densely populated the social group, the more ‘complex’ the system needed to keep it organized. Complexity, in turn, is still often used as a synonym for hierarchy. Hierarchy, in turn, is used as a euphemism for chains of command (the ‘origins of the state’), which means that as soon as large numbers of people decided to live in one place or join a common project, they must necessarily abandon the second freedom – to refuse orders – and replace it with legal mechanisms for, say, beating or locking up those who don’t do as they’re told.
As we’ve seen, none of these assumptions are theoretically essential, and history tends not to bear them out. Carole Crumley, an anthropologist and expert on Iron Age Europe, has been pointing this out for years: complex systems don’t have to be organized top-down, either in the natural or in the social world. That we tend to assume otherwise probably tells us more about ourselves than the people or phenomena that we’re studying.¹⁵ Neither is she alone in making this point. But more often than not, such observations have fallen on deaf ears.
It’s probably time to start listening, because ‘exceptions’ are fast beginning to outnumber the rules. Take cities. It was once assumed that the rise of urban life marked some kind of historical turnstile, whereby everyone who passed through had to permanently surrender their basic freedoms and submit to the rule of faceless administrators, stern priests, paternalistic kings or warrior-politicians – simply to avert chaos (or cognitive overload). To view human history through such a lens today is really not all that different from taking on the mantle of a modern-day King James, since the overall effect is to portray the violence and inequalities of modern society as somehow arising naturally from structures of rational management and paternalistic care: structures designed for human populations who, we are asked to believe, became suddenly incapable of organizing themselves once their numbers expanded above a certain threshold.
Not only do such views lack a sound basis in human psychology. They are also difficult to reconcile with archaeological evidence of how cities actually began in many parts of the world: as civic experiments on a grand scale, which frequently lacked the expected features of administrative hierarchy and authoritarian rule. We do not possess an adequate terminology for these early cities. To call them ‘egalitarian’, as we’ve seen, could mean quite a number of different things. It might imply an urban parliament and co-ordinated projects of social housing, as with some pre-Columbian centres in the Americas; or the self-organizing of autonomous households into neighbourhoods and citizens’ assemblies, as with prehistoric mega-sites north of the Black Sea; or, perhaps, the introduction of some explicit notion of equality based on principles of uniformity and sameness, as in Uruk-period Mesopotamia.
None of this variability is surprising once we recall what preceded cities in each region. That was not, in fact, rudimentary or isolated groups, but far-flung networks of societies, spanning diverse ecologies, with people, plants, animals, drugs, objects of value, songs and ideas moving between them in endlessly intricate ways. While the individual units were demographically small, especially at certain times of year, they were typically organized into loose coalitions or confederacies. At the very least, these were simply the logical outcome of our first freedom: to move away from one’s home, knowing one will be received and cared for, even valued, in some distant place. At most they were examples of ‘amphictyony’, in which some kind of formal organization was put in charge of the care and maintenance of sacred places. It seems that Marcel Mauss had a point when he argued that we should reserve the term ‘civilization’ for great hospitality zones such as these. Of course, we are used to thinking of ‘civilization’ as something that originates in cities – but, armed with new knowledge, it seems more realistic to put things the other way round and to imagine the first cities as one of those great regional confederacies, compressed into a small space.
Of course, monarchy, warrior aristocracies or other forms of stratification could also take hold in urban contexts, and often did. When this happened the consequences were dramatic. Still, the mere existence of large human settlements in no way caused these phenomena, and certainly didn’t make them inevitable. For the origins of these structures of domination we must look elsewhere. Hereditary aristocracies were just as likely to exist among demographically small or modest-sized groups, such as the ‘heroic societies’ of the Anatolian highlands, which took form on the margins of the first Mesopotamian cities and traded extensively with them. Insofar as we have evidence for the inception of monarchy as a permanent institution it seems to lie precisely there, and not in cities. In other parts of the world, some urban populations ventured partway down the road towards monarchy, only to turn back. Such was the case at Teotihuacan in the Valley of Mexico, where the city’s population – having raised the Pyramids of the Sun and Moon – then abandoned such aggrandizing projects and embarked instead on a prodigious programme of social housing, providing multi-family apartments for its residents.
Elsewhere, early cities followed the opposite trajectory, starting with neighbourhood councils and popular assemblies and ending up being ruled by warlike dynasts, who then had to maintain an uneasy coexistence with older institutions of urban governance. Something along these lines took place in Early Dynastic Mesopotamia, after the Uruk period: here again the convergence between systems of violence and systems of care seems critical. Sumerian temples had always organized their economic existence around the nurturing and feeding of the gods, embodied in their cult statues, which became surrounded by a whole industry and bureaucracy of welfare. Even more crucially, temples were charitable institutions. Widows, orphans, runaways, those exiled from their kin groups or other support networks would take refuge there: at Uruk, for example, in the Temple of Inanna, protective goddess of the city, overlooking the great courtyard of the city’s assembly.
The first charismatic war-kings attached themselves to such spaces, quite literally moving in next door to the residence of the city’s leading deity. In such ways, Sumerian monarchs were able to insert themselves into institutional spaces once reserved for the care of the gods, and thus removed from the realm of ordinary human relationships. This makes sense because kings, as the Malagasy proverb puts it, ‘have no relatives’ – or they shouldn’t, since they are rulers equally of all their subjects. Slaves too have no kin; they are severed from all prior attachments. In either case, the only recognized social relationships such individuals possess are those based on power and domination. In structural terms, and as against almost everyone else in society, kings and slaves effectively inhabit the same ground. The difference lies in which end of the power spectrum they happen to occupy.
We also know that needy individuals, taken into such temple institutions, were supplied with regular rations and put to work on the temple’s lands and in its workshops. The very first factories – or, at least, the very first we are aware of in history – were charitable institutions of this kind, where temple bureaucrats would supply women with wool to spin and weave, supervise the disposal of the product (much of it traded with upland groups in exchange for wood, stone and metal, unavailable in the river valleys), and provide them with carefully apportioned rations. All this was already true long before the appearance of kings. As persons dedicated to the gods, these women must originally have had a certain dignity, even a sacred status; but already by the time of the first written documents, the situation seems to have grown more complicated.
By then, some of those working in Sumerian temples were also war captives, or even slaves, who were similarly bereft of family support. Over time, and perhaps as a result, the status of widows and orphans also appears to have been downgraded, until the temple institutions came to resemble something more like a Victorian poorhouse. How, we might then ask, did the degradation of women working in the temple factories affect the status of women more generally? If nothing else, it must have made the prospect of fleeing an abusive domestic arrangement far more daunting. Loss of the first freedom meant, increasingly, loss of the second. Loss of the second meant effacement of the third. If a woman in such a situation attempted to create a new cult, a new temple, a new vision of social relations she would instantly be marked as a subversive, a revolutionary; if she attracted followers she might well find herself confronted by military force.
All this brings into focus another question. Does this newly established nexus between external violence and internal care – between the most impersonal and the most intimate of human relations – mark the point where everything begins to get confused? Is this an example of how relations that were once flexible and negotiable ended up getting fixed in place: an example, in other words, of how we effectively got stuck? If there is a particular story we should be telling, a big question we should be asking of human history (instead of the ‘origins of social inequality’), is it precisely this: how did we find ourselves stuck in just one form of social reality, and how did relations based ultimately on violence and domination come to be normalized within it? Perhaps the scholar who most closely approached this question in the last century was an anthropologist and poet named Franz Steiner, who died in 1952. Steiner led a fascinating if tragic life. A brilliant polymath born to a Jewish family in Bohemia, he later lived with an Arab family in Jerusalem until expelled by the British authorities, conducted fieldwork in the Carpathians and was twice forced by the Nazis to flee the continent, ending his career – ironically enough – in the south of England. Most of his immediate family were killed at Birkenau. Legend has it that he completed 800 pages of a monumental doctoral dissertation on the comparative sociology of slavery, only to have the suitcase containing his drafts and research notes stolen on a train. He was friends with, and a romantic rival to, Elias Canetti, another Jewish exile at Oxford and a successful suitor to the novelist Iris Murdoch – although two days after she’d accepted his proposal of marriage, Steiner died of a heart attack. He was forty-three.
The shorter version of Steiner’s doctoral work, which does survive, focuses on what he calls ‘pre-servile institutions’. Poignantly, given his own life story, it is a study of what happens in different cultural and historical situations to people who become unmoored: those expelled from their clans for some debt or fault; castaways, criminals, runaways. It can be read as a history of how refugees such as himself were first welcomed, treated as almost sacred beings, then gradually degraded and exploited, again much like the women working in the Sumerian temple factories. In essence, the story told by Steiner appears to be precisely about the collapse of what we would term the first basic freedom (to move away or relocate), and how this paved the way for the loss of the second (the freedom to disobey). It also leads us back to a point we made earlier about the progressive division of the human social universe into smaller and smaller units, beginning with the appearance of ‘culture areas’ (a fascination of ethnologists in the central European tradition, in which Steiner first trained).
What happens, Steiner asked, when expectations that make freedom of movement possible – the norms of hospitality and asylum, civility and shelter – erode? Why does this so often appear to be a catalyst for situations where some people can exert arbitrary power over others? Steiner worked his way in careful detail through cases ranging from the Amazonian Huitoto and East African Safwa to the Tibeto-Burman Lushai. Along the journey he suggested one possible answer to the question that had so puzzled Robert Lowie, and later Clastres: if stateless societies do regularly organize themselves in such a way that chiefs have no coercive power, then how did top-down forms of organization ever come into the world to begin with? You’ll recall how both Lowie and Clastres were driven to the same conclusion: that they must have been the product of religious revelation. Steiner provided an alternative route. Perhaps, he suggested, it all goes back to charity.
In Amazonian societies, not only orphans but also widows, the mad, disabled or deformed – if they had no one else to look after them – were allowed to take refuge in the chief’s residence, where they received a share of communal meals. To these were occasionally added war captives, especially children taken in raiding expeditions. Among the Safwa or Lushai, runaways, debtors, criminals or others needing protection held the same status as those who surrendered in battle. All became members of the chief’s retinue, and the younger males often took on the role of police-like enforcers. How much power the chief actually had over his retainers – Steiner uses the Roman Law term potestas, which denotes among other things a father’s power of arbitrary command over his dependants and their property – would vary, depending how easy it was for wards to run away and find refuge elsewhere, or to maintain at least some ties with relatives, clans or outsiders willing to stand up for them. How far such henchmen could be relied on to enforce the chief’s will also varied; but the sheer potential was important.
In all such cases, the process of giving refuge did generally lead to the transformation of basic domestic arrangements, especially as captured women were incorporated, further reinforcing the potestas of fathers. It is possible to detect something of this logic in almost all historically documented royal courts, which invariably attracted those considered freakish or detached. There seems to have been no region of the world, from China to the Andes, where courtly societies did not host such obviously distinctive individuals; and few monarchs who did not also claim to be the protectors of widows and orphans. One could easily imagine something along these lines was already happening in certain hunter-gatherer communities during much earlier periods of history. The physically anomalous individuals accorded lavish burials in the last Ice Age must also have been the focus of much caring attention while alive. No doubt there are sequences of development linking such practices to later royal courts – we’ve caught glimpses of them, as in Predynastic Egypt – even if we are still unable to reconstruct most of the links.
Steiner may not have foregrounded the issue, but his observations are directly relevant to debates about the origins of patriarchy. Feminist anthropologists have long argued for a connection between external (largely male) violence and the transformation of women’s status in the home. In archaeological and historical terms, we are only just beginning to gather together enough material to begin understanding how that process actually worked.
The research that culminated in this book began almost a decade ago, essentially as a form of play. We pursued it at first, it would be fair to say, in a spirit of mild defiance towards our more ‘serious’ academic responsibilities. Mainly we were just curious about how the new archaeological evidence that had been building up for the last thirty years might change our notions of early human history, especially the parts bound up with debates on the origins of social inequality. Before long, though, we realized that what we were doing was potentially important, because hardly anyone else in our fields seemed to be doing this work of synthesis. Often, we found ourselves searching in vain for books that we assumed must exist but, it turns out, simply didn’t – for instance, compendia of early cities that lacked top-down governance, or accounts of how democratic decision-making was conducted in Africa or the Americas, or comparisons of what we’ve called ‘heroic societies’. The literature is riddled with absences.
We eventually came to realize that this reluctance to synthesize was not simply a product of reticence on the part of highly specialized scholars, although this is certainly a factor. To some degree it was simply the lack of an appropriate language. What, for instance, does one even call a ‘city lacking top-down structures of governance’? At the moment there is no commonly accepted term. Dare one call it a ‘democracy’? A ‘republic’? Such words (like ‘civilization’) are so freighted with historical baggage that most archaeologists and anthropologists instinctively recoil from them, and historians tend to limit their use to Europe. Does one, then, call it an ‘egalitarian city’? Probably not, since to evoke such a term is to invite the obvious demand for proof that the city was ‘really’ egalitarian – which usually means, in practice, showing that no element of structural inequality existed in any aspect of its inhabitants’ lives, including households and religious arrangements. Since such evidence will rarely, if ever, be forthcoming, the conclusion would have to be that these are not really egalitarian cities after all.
By the same logic, one might easily conclude there aren’t really any ‘egalitarian societies’, except possibly certain very small foraging bands.
Many researchers in the field of evolutionary anthropology do, in fact, make precisely this argument. But ultimately the result of this kind of thinking is to lump together all ‘non-egalitarian’ cities or indeed all ‘non-egalitarian societies’, which is a little like saying there’s no meaningful difference between a hippie commune and a biker gang, since neither are entirely non-violent. All this achieves, at the end of the day, is to leave us literally at a loss for words when confronted with certain major aspects of human history. We fall strangely mute in the face of any kind of evidence for humans doing something other than ‘rushing headlong for their chains’. Sensing a sea change in the evidence of the past, we decided to approach things the other way round.
What this meant, in practice, was reversing a lot of polarities. It meant ditching the language of ‘equality’ and ‘inequality’, unless there was explicit evidence that ideologies of social equality were actually present on the ground. It meant asking, for instance, what happens if we accord significance to the 5,000 years in which cereal domestication did not lead to the emergence of pampered aristocracies, standing armies or debt peonage, rather than just the 5,000 in which it did? What happens if we treat the rejection of urban life, or of slavery, in certain times and places as something just as significant as the emergence of those same phenomena in others? In the process, we often found ourselves surprised. We’d never have guessed, for instance, that slavery was most likely abolished multiple times in history in multiple places; and that very possibly the same is true of war. Obviously, such abolitions are rarely definitive. Still, the periods in which free or relatively free societies existed are hardly insignificant. In fact, if you bracket the Eurasian Iron Age (which is effectively what we have been doing here), they represent the vast majority of human social experience.
Social theorists have a tendency to write about the past as if everything that happened could have been predicted beforehand. This is somewhat dishonest, since we’re all aware that when we actually try to predict the future we almost invariably get it wrong – and this is just as true of social theorists as anybody else. Nonetheless, it’s hard to resist the temptation to write and think as if the current state of the world, in the early twenty-first century, is the inevitable outcome of the last 10,000 years of history, while in reality, of course, we have little or no idea what the world will be like even in 2075, let alone 2150.
Who knows? Perhaps if our species does endure, and we one day look backwards from this as yet unknowable future, aspects of the remote past that now seem like anomalies – say, bureaucracies that work on a community scale; cities governed by neighbourhood councils; systems of government where women hold a preponderance of formal positions; or forms of land management based on care-taking rather than ownership and extraction – will seem like the really significant breakthroughs, and great stone pyramids or statues more like historical curiosities. What if we were to take that approach now and look at, say, Minoan Crete or Hopewell not as random bumps on a road that leads inexorably to states and empires, but as alternative possibilities: roads not taken? After all, those things really did exist, even if our habitual ways of looking at the past seem designed to put them at the margins rather than at the centre of things. Much of this book has been devoted to recalibrating those scales; to reminding us that people did actually live in those ways, often for many centuries, even millennia. In some ways, such a perspective might seem even more tragic than our standard narrative of civilization as the inevitable fall from grace. It means we could have been living under radically different conceptions of what human society is actually about. It means that mass enslavement, genocide, prison camps, even patriarchy or regimes of wage labour never had to happen. But on the other hand it also suggests that, even now, the possibilities for human intervention are far greater than we’re inclined to think.
We began this book with a quote which refers to the Greek notion of kairos as one of those occasional moments in a society’s history when its frames of reference undergo a shift – a metamorphosis of the fundamental principles and symbols, when the lines between myth and history, science and magic become blurred – and, therefore, real change is possible. Philosophers sometimes like to speak of ‘the Event’ – a political revolution, a scientific discovery, an artistic masterpiece – that is, a breakthrough which reveals aspects of reality that had previously been unimaginable but, once seen, can never be unseen. If so, kairos is the kind of time in which Events are prone to happen.
Societies around the world appear to be cascading towards such a point.
This is particularly true of those which, since the First World War, have been in the habit of calling themselves ‘Western’. On the one hand, fundamental breakthroughs in the physical sciences, or even artistic expression, no longer seem to occur with anything like the regularity people came to expect in the late nineteenth and early twentieth centuries. Yet at the same time, our scientific means of understanding the past, not just our species’ past but that of our planet, has been advancing with dizzying speed. Scientists in 2020 are not (as readers of mid-twentieth-century science fiction might have hoped) encountering alien civilizations in distant star systems; but they are encountering radically different forms of society under their own feet, some forgotten and newly rediscovered, others more familiar, but now understood in entirely new ways.
In developing the scientific means to know our own past, we have exposed the mythical substructure of our ‘social science’ – what once appeared unassailable axioms, the stable points around which our self-knowledge is organized, are scattering like mice. What is the purpose of all this new knowledge, if not to reshape our conceptions of who we are and what we might yet become? If not, in other words, to rediscover the meaning of our third basic freedom: the freedom to create new and different forms of social reality? Myth in itself is not the problem here. It shouldn’t be mistaken for bad or infantile science. Just as all societies have their science, all societies have their myths. Myth is the way in which human societies give structure and meaning to experience. But the larger mythic structures of history we’ve been deploying for the last several centuries simply don’t work any more; they are impossible to reconcile with the evidence now before our eyes, and the structures and meanings they encourage are tawdry, shop-worn and politically disastrous.
No doubt, for a while at least, very little will change. Whole fields of knowledge – not to mention university chairs and departments, scientific journals, prestigious research grants, libraries, databases, school curricula and the like – have been designed to fit the old structures and the old questions. Max Planck once remarked that new scientific truths don’t replace old ones by convincing established scientists that they were wrong; they do so because proponents of the older theory eventually die, and generations that follow find the new truths and theories to be familiar, obvious even. We are optimists. We like to think it will not take that long.
In fact, we have already taken a first step. We can see more clearly now what is going on when, for example, a study that is rigorous in every other respect begins from the unexamined assumption that there was some ‘original’ form of human society; that its nature was fundamentally good or evil; that a time before inequality and political awareness existed; that something happened to change all this; that ‘civilization’ and ‘complexity’ always come at the price of human freedoms; that participatory democracy is natural in small groups but cannot possibly scale up to anything like a city or a nation state.
We know, now, that we are in the presence of myths.