Janis and I are organizing a group project on climate change (I could be more precise, but that’s not the point of this post) for the first-year master’s students at CCC, and we picked Mike Davis’s “Who Will Build the Ark” (New Left Review) as the first assigned text. An aside: when searching for the text online just now so I could link it (you need a subscription to the NLR to read it on their site), I came across the blog of an urban planner who had posted it with the intro “New Left Review article that nobody has the time to read right now (even me). But I’m posting it anyway.” I don’t even know how to respond to that.
Anyway, I did read it, as did Janis, and we thought it would be a good text for everyone to read for discussion, not because it contains any astonishing facts about climate change (besides, it was published in 2010, so many of the statistics it cites have since changed) but rather because of the sentiments Davis expresses regarding being “realistic” and being “optimistic”:
[T]his essay is organized as a debate with myself, a mental tournament between analytic despair and utopian possibility that is personally, and probably objectively, irresolvable. …
In the first section, ‘Pessimism of the Intellect’, I adduce arguments for believing that we have already lost the first, epochal stage of the battle against global warming. …
The second part of the essay, ‘Optimism of the Imagination’, is my self-rebuttal. I appeal to the paradox that the single most important cause of global warming—the urbanization of humanity—is also potentially the principal solution to the problem of human survival in the later twenty-first century. Left to the dismal politics of the present, of course, cities of poverty will almost certainly become the coffins of hope; but all the more reason that we must start thinking like Noah. Since most of history’s giant trees have already been cut down, a new Ark will have to be constructed out of the materials that a desperate humanity finds at hand in insurgent communities, pirate technologies, bootlegged media, rebel science and forgotten utopias.
I really needed to read this essay when I did (last week) because lately I’ve been experiencing a bit of mission drift with my research, and also feeling hopeless in the face of It All. Davis isn’t optimistic per se, but our reason for choosing this essay to kick off the project can be summed up in its last sentence:
If this sounds like a sentimental call to the barricades, an echo from the classrooms, streets and studios of forty years ago, then so be it; because on the basis of the evidence before us, taking a ‘realist’ view of the human prospect, like seeing Medusa’s head, would simply turn us into stone.
In our first group meeting yesterday we discussed the old question of whether individual actions are worth anything. We all agreed on the naivety of believing that things like recycling and changing light bulbs are going to topple a global web of economic, environmental and social injustice.
From “As the World Burns” (with Stephanie McMillan, in The Derrick Jensen Reader, New York: Seven Stories Press, 2012). Jensen writes in the introduction to this series of comic strips:
We [McMillan and he] absolutely despise books like 100 Simple Things You Can Do to Save the Planet because they’re wrong and they trivialize tremendous suffering. They trivialize the murder of the planet. They point away from the real problems, which are capitalism, civilization, and this entire exploitative way of life. It’s absurd to think that you can inflate your tires and that’s going to stop global warming. Someone actually did that math on all the suggestions that Al Gore makes in An Inconvenient Truth, and even if every person in the United States did everything he suggests, it would only reduce carbon emissions by about 20 percent. Since Gore doesn’t speak against a growth economy, all of that would disappear in a few years anyway.
So… what does a person do, then? I’d like to know, and if I knew I’d be out doing it right now, not sitting here staring at my Ferris Bueller poster.
March in the streets? Mas and I had a long discussion about the whole NYC climate march, which I was completely oblivious to before it happened, because it took place while I was in the States visiting my family and I’d decided to go on a technology detox while there. A glorious thing to do from time to time; you should try it. Except that tech detoxes occasionally cause you to miss things like climate marches going on four hours from your parents’ front door. I found out about it the day of, from my mother, who was not on a technology detox. I wouldn’t have gone anyway, though, because I’m on the fence as to whether actions like that do much good, for a problem like this anyway. If anyone reading this would like to try to convince me one way or the other, go ahead. All ears. (Really.)
Though I was tempted to break my tech fast, I held off until we got back to this socialist utopia called Europe, and then I did some reading up ex post facto. I found the following articles helpful in figuring out what I thought about the whole thing:
- Chris Hedges, “The Last Gasp of the Climate Change Liberals,” Truthdig
- Arun Gupta, “How the People’s Climate March Became a Corporate PR Campaign,” CounterPunch
- Jonathan Matthew Smucker, “Radicals and the 99%: Core and Mass Movement,” beyondthechoir.org
- Jonathan Smucker (again) and Michael Premo, “What’s Wrong With the Radical Critique of the People’s Climate March,” The Nation
My opinion of the march itself lies more or less between the opinions in the latter two of those links. I don’t have a problem with Facebookifying a movement (which often means dressing it up a little with flashy graphic design) if it gets more people to join in even the smallest of ways, because broadening a support base normalizes an issue (more people will feel comfortable saying X or Y without fear), and that can pave the way for further momentum. Yes, there’s a danger of watering down, but as Smucker notes:
The dead-end alternative is for radicals to work only with other radicals—and to remain stuck in a story of the righteous few, whose protagonists bravely fight the good fight but always lose. Part of our trouble is that we are at the end of a decades-long period of fragmentation and decline in the broad social justice left. Some on the left have become so accustomed to powerlessness that they have become attached to it. Success itself becomes suspect, and politics becomes framed only in terms of expressing values and making righteous stands—instead of as intervention in the terrain of power. Accustomed to the margins, we can have a hard time recognizing how many of our ideas have actually become popular.
So there’s that. But it’s not enough.
What is definitely not going to help is getting a Jesus complex and thinking one can find all the answers and save everything all at once, right now. On that note, I’m going to stop racking my brain for an answer, gather up my things, and head down to Fonderie Kugler for the first day of Emergency!
EMERGENCY! looks to test out new forms of economy and sharing. Through a plurality of artistic practices focusing on the research process, we invite you to collaborate in the imagining of other forms of production.
Emergency! is organized by three friends of mine. Everyone has been asking me for the past month, “How’s Emergency going?” and I keep having to explain that I don’t know because I’m not one of the organizers. I just show up.
Ours is indeed an age of extremity. For we live under continual threat of two equally fearful, but seemingly opposed, destinies: unremitting banality and inconceivable terror. It is fantasy, served out in large rations by the popular arts, which allows most people to cope with these twin specters. For one job that fantasy can do is to lift us out of the unbearably humdrum and to distract us from terrors, real or anticipated—by an escape into exotic dangerous situations which have last-minute happy endings. But another one of the things that fantasy can do is to normalize what is psychologically unbearable, thereby inuring us to it. In the one case, fantasy beautifies the world. In the other, it neutralizes it.
The fantasy to be discovered in science fiction films does both jobs. These films reflect world-wide anxieties, and they serve to allay them. They inculcate a strange apathy concerning the processes of radiation, contamination, and destruction that I for one find haunting and depressing. The naïve level of the films neatly tempers the sense of otherness, of alien-ness, with the grossly familiar. In particular, the dialogue of most science fiction films, which is generally of a monumental but often touching banality, makes them wonderfully, unintentionally funny. Lines like: “Come quickly, there’s a monster in my bathtub”; “We must do something about this”; “Wait, Professor. There’s someone on the telephone”; “But that’s incredible”; and the old American stand-by (accompanied by brow-wiping), “I hope it works!”—are hilarious in the context of picturesque and deafening holocaust. Yet the films also contain something which is painful and in deadly earnest.
Science fiction films are one of the most accomplished of the popular art forms, and can give a great deal of pleasure to sophisticated film addicts. Part of the pleasure, indeed, comes from the sense in which these movies are in complicity with the abhorrent. It is no more, perhaps, than the way all art draws its audience into a circle of complicity with the thing represented. But in science fiction films we have to do with things which are (quite literally) unthinkable. Here, “thinking about the unthinkable”—not in the way of Herman Kahn, as a subject for calculation, but as a subject for fantasy—becomes, however inadvertently, itself a somewhat questionable act from a moral point of view. The films perpetuate clichés about identity, volition, power, knowledge, happiness, social consensus, guilt, responsibility which are, to say the least, not serviceable in our present extremity. But collective nightmares cannot be banished by demonstrating that they are, intellectually and morally, fallacious. This nightmare—the one reflected in various registers in the science fiction films—is too close to our reality.
A typical science fiction film has a form as predictable as a Western, and is made up of elements which are as classic as the saloon brawl, the blonde schoolteacher from the East, and the gun duel on the deserted main street.
One model scenario proceeds through five phases:
- The arrival of the thing. (Emergence of the monsters, landing of the alien space-ship, etc.) This is usually witnessed, or suspected, by just one person, who is a young scientist on a field trip. Nobody, neither his neighbors nor his colleagues, will believe him for some time. The hero is not married, but has a sympathetic though also incredulous girlfriend.
- Confirmation of the hero’s report by a host of witnesses to a great act of destruction. (If the invaders are beings from another planet, a fruitless attempt to parley with them and get them to leave peacefully.) The local police are summoned to deal with the situation and massacred.
- In the capital of the country, conferences between scientists and the military take place, with the hero lecturing before a chart, map, or blackboard. A national emergency is declared. Reports of further atrocities. Authorities from other countries arrive in black limousines. All international tensions are suspended in view of the planetary emergency. This stage often includes a rapid montage of news broadcasts in various languages, a meeting at the UN, and more conferences between the military and the scientists. Plans are made for destroying the enemy.
- Further atrocities. At some point the hero’s girlfriend is in grave danger. Massive counterattacks by international forces, with brilliant displays of rocketry, rays, and other advanced weapons, are all unsuccessful. Enormous military casualties, usually by incineration. Cities are destroyed and/or evacuated. There is an obligatory scene here of panicked crowds stampeding along a highway or a big bridge, being waved on by numerous policemen who, if the film is Japanese, are immaculately white-gloved, preternaturally calm, and call out in dubbed English, “Keep moving. There is no need to be alarmed.”
- More conferences, whose motif is: “They must be vulnerable to something.” Throughout, the hero has been experimenting in his lab on this. The final strategy, upon which all hopes depend, is drawn up; the ultimate weapon—often a super-powerful, as yet untested, nuclear device—is mounted. Countdown. Final repulse of the monster or invaders. Mutual congratulations, while the hero and girlfriend embrace cheek to cheek and scan the skies sturdily. “But have we seen the last of them?”
The film I have just described should be in technicolor and on a wide screen. Another typical scenario is simpler and suited to black-and-white films with a lower budget. It has four phases:
- The hero (usually, but not always, a scientist) and his girlfriend, or his wife and children, are disporting themselves in some innocent ultra-normal middle-class house in a small town, or on vacation (camping, boating). Suddenly, someone starts behaving strangely or some innocent form of vegetation becomes monstrously enlarged and ambulatory. If a character is pictured driving an automobile, something gruesome looms up in the middle of the road. If it is night, strange lights hurtle across the sky.
- After following the thing’s tracks, or determining that It is radioactive, or poking around a huge crater—in short, conducting some sort of crude investigation—the hero tries to warn the local authorities, without effect; nobody believes anything is amiss. The hero knows better. If the thing is tangible, the house is elaborately barricaded. If the invading alien is an invisible parasite, a doctor or friend is called in, who is himself rather quickly killed or “taken possession of” by the thing.
- The advice of anyone else who is consulted proves useless. Meanwhile, It continues to claim other victims in the town, which remains implausibly isolated from the rest of the world. General helplessness.
- One of two possibilities. Either the hero prepares to do battle alone, accidentally discovers the thing’s one vulnerable point, and destroys it. Or, he somehow manages to get out of town and succeeds in laying his case before competent authorities. They, along the lines of the first script but abridged, deploy a complex technology which (after initial setbacks) finally prevails against the invaders.
Another version of the second script opens with the scientist-hero in his laboratory, which is located in the basement or on the grounds of his tasteful, prosperous house. Through his experiments, he unwittingly causes a frightful metamorphosis in some class of plants or animals, which turn carnivorous and go on a rampage. Or else, his experiments have caused him to be injured (sometimes irrevocably) or “invaded” himself. Perhaps he has been experimenting with radiation, or has built a machine to communicate with beings from other planets or to transport him to other places or times.
Another version of the first script involves the discovery of some fundamental alteration in the conditions of existence of our planet, brought about by nuclear testing, which will lead to the extinction in a few months of all human life. For example: the temperature of the earth is becoming too high or too low to support life, or the earth is cracking in two, or it is gradually being blanketed by lethal fallout.
A third script, somewhat but not altogether different from the first two, concerns a journey through space—to the moon, or some other planet. What the space-voyagers commonly discover is that the alien terrain is in a state of dire emergency, itself threatened by extra-planetary invaders or nearing extinction through the practice of nuclear warfare. The terminal dramas of the first and second scripts are played out there, to which is added a final problem of getting away from the doomed and/or hostile planet and back to Earth.
I am aware, of course, that there are thousands of science fiction novels (their heyday was the late 1940’s), not to mention the transcriptions of science fiction themes which, more and more, provide the principal subject matter of comic books. But I propose to discuss science fiction films (the present period began in 1950 and continues, considerably abated, to this day) as an independent sub-genre, without reference to the novels from which, in many cases, they were adapted. For while novel and film may share the same plot, the fundamental difference between the resources of the novel and the film makes them quite dissimilar. Anyway, the best science fiction movies are on a far higher level, as examples of the art of the film, than the science fiction books are, as examples of the art of the novel or romance. That the films might be better than the books is an old story. Good novels rarely make good films, but excellent films are often made from poor or trivial novels.
Certainly, compared with the science fiction novels, their film counterparts have unique strengths, one of which is the immediate representation of the extraordinary: physical deformity and mutation, missile and rocket combat, toppling skyscrapers. The movies are, naturally, weak just where the science fiction novels (some of them), are strong—on science. But in place of an intellectual workout, they can supply something the novels can never provide—sensuous elaboration. In the films it is by means of images and sounds, not words that have to be translated by the imagination, that one can participate in the fantasy of living through one’s own death and more, the death of cities, the destruction of humanity itself.
Science fiction films are not about science. They are about disaster, which is one of the oldest subjects of art. In science fiction films, disaster is rarely viewed intensively; it is always extensive. It is a matter of quantity and ingenuity. If you will, it is a question of scale. But the scale, particularly in the wide-screen Technicolor films (of which the ones by the Japanese director, Inoshiro Honda, and the American director, George Pal, are technically the most brilliant and convincing, and visually the most exciting), does raise the matter to another level.
Thus, the science fiction film (like a very different contemporary genre, the Happening) is concerned with the aesthetics of destruction, with the peculiar beauties to be found in wreaking havoc, making a mess. And it is in the imagery of destruction that the core of a good science fiction film lies. This is the disadvantage of the cheap film—in which the monster appears or the rocket lands in a small dull-looking town. (Hollywood budget needs usually dictate that the town be in the Arizona or California desert. In The Thing from Another World , the rather sleazy and confined set is supposed to be an encampment near the North Pole.) Still, good black-and-white science fiction films have been made. But a bigger budget, which usually means Technicolor, allows a much greater play back and forth among several model environments. There is the populous city. There is the lavish but ascetic interior of the space ship—either the invaders’ or ours—replete with streamlined chromium fixtures and dials, and machines whose complexity is indicated by the number of colored lights they flash and strange noises they emit. There is the laboratory crowded with formidable machines and scientific apparatus. There is a comparatively old-fashioned looking conference room, where the scientist brings charts to explain the desperate state of things to the military. And each of these standard locales or backgrounds is subject to two modalities—intact and destroyed. We may, if we are lucky, be treated to a panorama of melting tanks, flying bodies, crashing walls, awesome craters and fissures in the earth, plummeting spacecraft, colorful deadly rays; and to a symphony of screams, weird electronic signals, the noisiest military hardware going, and the leaden tones of the laconic denizens of alien planets and their subjugated earthlings.
Certain of the primitive gratifications of science fiction films—for instance, the depiction of urban disaster on a colossally magnified scale—are shared with other types of films. Visually there is little difference between mass havoc as represented in the old horror and monster films and what we find in science fiction films, except (again) scale. In the old monster films, the monster always headed for the great city where he had to do a fair bit of rampaging, hurling buses off bridges, crumpling trains in his bare hands, toppling buildings, and so forth. The archetype is King Kong, in Schoedsack’s great film of 1933, running amok, first in the African village (trampling babies, a bit of footage excised from most prints), then in New York. This is really not any different from Inoshiro Honda’s Rodan (1957), where two giant reptiles—with a wingspan of five-hundred feet and supersonic speeds—by flapping their wings whip up a cyclone that blows most of Tokyo to smithereens. Or, the tremendous scenes of rampage by the gigantic robot who destroys half of Japan with the great incinerating ray which shoots forth from his eyes, at the beginning of Honda’s The Mysterians (1959). Or, the destruction, by the rays from a fleet of flying saucers of New York, Paris and Tokyo, in Battle in Outer Space (1960). Or, the inundation of New York in When Worlds Collide (1951). Or, the end of London in 1968 depicted in George Pal’s The Time Machine (1960). Neither do these sequences differ in aesthetic intention from the destruction scenes in the big sword, sandal, and orgy color spectaculars set in Biblical and Roman times—the end of Sodom in Aldrich’s Sodom and Gomorrah, of Gaza in de Mille’s Samson and Delilah, of Rhodes in The Colossus of Rhodes, and of Rome in a dozen Nero movies. D. W. Griffith began it with the Babylon sequence in Intolerance, and to this day there is nothing like the thrill of watching all those expensive sets come tumbling down.
In other respects as well, the science fiction films of the 1950’s take up familiar themes. The famous movie serials and comics of the 1930’s of the adventures of Flash Gordon and Buck Rogers, as well as the more recent spate of comic book super-heroes with extraterrestrial origins (the most famous is Superman, a foundling from the planet, Krypton, currently described as having been exploded by a nuclear blast) share motifs with more recent science fiction movies. But there is an important difference. The old science fiction films, and most of the comics, still have an essentially innocent relation to disaster. Mainly they offer new versions of the oldest romance of all—of the strong invulnerable hero with the mysterious lineage come to do battle on behalf of good and against evil. Recent science fiction films have a decided grimness, bolstered by their much greater degree of visual credibility, which contrasts strongly with the older films. Modern historical reality has greatly enlarged the imagination of disaster, and the protagonists—perhaps by the very nature of what is visited upon them—no longer seem wholly innocent.
The lure of such generalized disaster as a fantasy is that it releases one from normal obligations. The trump card of the end-of-the-world movies—like The Day the Earth Caught Fire (1962)—is that great scene with New York or London or Tokyo discovered empty, its entire population annihilated. Or, as in The World, the Flesh, and the Devil (1959), the whole movie can be devoted to the fantasy of occupying the deserted city and starting all over again—Robinson Crusoe on a world-wide scale.
Another kind of satisfaction these films supply is extreme moral simplification—that is to say, a morally acceptable fantasy where one can give outlet to cruel or at least amoral feelings. In this respect, science fiction films partly overlap with horror films. This is the undeniable pleasure we derive from looking at freaks, at beings excluded from the category of the human. The sense of superiority over the freak conjoined in varying proportions with the titillation of fear and aversion makes it possible for moral scruples to be lifted, for cruelty to be enjoyed. The same thing happens in science fiction films. In the figure of the monster from outer space, the freakish, the ugly, and the predatory all converge—and provide a fantasy target for righteous bellicosity to discharge itself, and for the aesthetic enjoyment of suffering and disaster. Science fiction films are one of the purest forms of spectacle; that is, we are rarely inside anyone’s feelings. (An exception to this is Jack Arnold’s The Incredible Shrinking Man.) We are merely spectators; we watch.
But in science fiction films, unlike horror films, there is not much horror. Suspense, shocks, surprises are mostly abjured in favor of a steady inexorable plot. Science fiction films invite a dispassionate, aesthetic view of destruction and violence—a technological view. Things, objects, machinery play a major role in these films. A greater range of ethical values is embodied in the décor of these films than in the people. Things, rather than the helpless humans, are the locus of values because we experience them, rather than people, as the sources of power. According to science fiction films, man is naked without his artifacts. They stand for different values, they are potent, they are what gets destroyed, and they are the indispensable tools for the repulse of the alien invaders or the repair of the damaged environment.
The science fiction films are strongly moralistic. The standard message is the one about the proper, or humane, uses of science, versus the mad, obsessional use of science. This message the science fiction films share in common with the classic horror films of the 1930’s, like Frankenstein, The Mummy, The Island of Doctor Moreau, Dr. Jekyll and Mr. Hyde. (Georges Franju’s brilliant Les Yeux Sans Visage, called here The Horror Chamber of Doctor Faustus, is a more recent example.) In the horror films, we have the mad or obsessed or misguided scientist who pursues his experiments against good advice to the contrary, creates a monster or monsters, and is himself destroyed—often recognizing his folly himself, and dying in the successful effort to destroy his own creation. One science fiction equivalent of this is the scientist, usually a member of a team, who defects to the planetary invaders because “their” science is more advanced than “ours.”
This is the case in The Mysterians, and, true to form, the renegade sees his error in the end, and from within the Mysterian space ship destroys it and himself. In This Island Earth (1955), the inhabitants of the beleaguered planet Metaluna propose to conquer Earth, but their project is foiled by a Metalunan scientist named Exeter who, having lived on Earth a while and learned to love Mozart, cannot abide such viciousness. Exeter plunges his space ship into the ocean after returning a glamorous pair (male and female) of American physicists to Earth. Metaluna dies. In The Fly (1958), the hero, engrossed in his basement-laboratory experiments on a matter-transmitting machine, uses himself as a subject, accidentally exchanges head and one arm with a housefly which had gotten into the machine, becomes a monster, and with his last shred of human will destroys his laboratory and orders his wife to kill him. His discovery, for the good of mankind, is lost.
Being a clearly labeled species of intellectual, the scientists in science fiction films are always liable to crack up or go off the deep end. In Conquest of Space (1955), the scientist-commander of an international expedition to Mars suddenly acquires scruples about the blasphemy involved in the undertaking, and begins reading the Bible mid-journey instead of attending to his duties. The commander’s son, who is his junior officer and always addresses his father as “General,” is forced to kill the old man when he tries to prevent the ship from landing on Mars. In this film, both sides of the ambivalence toward scientists are given voice. Generally, for a scientific enterprise to be treated entirely sympathetically in these films, it needs the certificate of utility. Science, viewed without ambivalence, means an efficacious response to danger. Disinterested intellectual curiosity rarely appears in any form other than caricature, as a maniacal dementia that cuts one off from normal human relations. But this suspicion is usually directed at the scientist rather than his work. The creative scientist may become a martyr to his own discovery, through an accident or by pushing things too far. The implication remains that other men, less imaginative—in short, technicians—would administer the same scientific discovery better and more safely. The most ingrained contemporary mistrust of the intellect is visited, in these movies, upon the scientist-as-intellectual.
The message that the scientist is one who releases forces which, if not controlled for good, could destroy man himself seems innocuous enough. One of the oldest images of the scientist is Shakespeare’s Prospero, the over-detached scholar forcibly retired from society to a desert island, only partly in control of the magic forces in which he dabbles. Equally classic is the figure of the scientist as satanist (Dr. Faustus, stories of Poe and Hawthorne). Science is magic, and man has always known that there is black magic as well as white. But it is not enough to remark that contemporary attitudes—as reflected in science fiction films—remain ambivalent, that the scientist is treated both as satanist and savior. The proportions have changed, because of the new context in which the old admiration and fear of the scientist is located. For his sphere of influence is no longer local, himself or his immediate community. It is planetary, cosmic.
One gets the feeling, particularly in the Japanese films, but not only there, that mass trauma exists over the use of nuclear weapons and the possibility of future nuclear wars. Most of the science fiction films bear witness to this trauma, and in a way, attempt to exorcise it.
The accidental awakening of the super-destructive monster who has slept in the earth since prehistory is, often, an obvious metaphor for the Bomb. But there are many explicit references as well. In The Mysterians, a probe ship from the planet Mysteroid has landed on earth, near Tokyo. Nuclear warfare having been practiced on Mysteroid for centuries (their civilization is “more advanced than ours”), 90 per cent of those now born on the planet have to be destroyed at birth, because of defects caused by the huge amounts of Strontium 90 in their diet. The Mysterians have come to earth to marry earth women and possibly to take over our relatively uncontaminated planet. . . . In The Incredible Shrinking Man, the John Doe hero is the victim of a gust of radiation which blows over the water, while he is out boating with his wife; the radiation causes him to grow smaller and smaller, until at the end of the movie he steps through the fine mesh of a window screen to become “the infinitely small. . . .” In Rodan, a horde of monstrous carnivorous prehistoric insects, and finally a pair of giant flying reptiles (the prehistoric Archeopteryx), are hatched from dormant eggs in the depths of a mine shaft by the impact of nuclear test explosions, and go on to destroy a good part of the world before they are felled by the molten lava of a volcanic eruption. . . . In the English film, The Day the Earth Caught Fire, two simultaneous hydrogen bomb tests by the U.S. and Russia change by eleven degrees the tilt of the earth on its axis and alter the earth’s orbit so that it begins to approach the sun.
Radiation casualties—ultimately, the conception of the whole world as a casualty of nuclear testing and nuclear warfare—is the most ominous of all the notions with which science fiction films deal. Universes become expendable. Worlds become contaminated, burnt out, exhausted, obsolete. In Rocketship X-M (1950), explorers from Earth land on Mars, where they learn that atomic warfare has destroyed Martian civilization. In George Pal’s The War of the Worlds (1953), reddish spindly alligator-skinned creatures from Mars invade Earth because their planet is becoming too cold to be habitable. In This Island Earth, also American, the planet Metaluna, whose population has long ago been driven underground by warfare, is dying under the missile attacks of an enemy planet. Stocks of uranium, which power the force-shield shielding Metaluna, have been used up; and an unsuccessful expedition is sent to Earth to enlist earth scientists to devise new sources of nuclear power.
There is a vast amount of wishful thinking in science fiction films, some of it touching, some of it depressing. Again and again, one detects the hunger for a “good war,” which poses no moral problems, admits of no moral qualifications. The imagery of science fiction films will satisfy the most bellicose addict of war films, for a lot of the satisfactions of war films pass, untransformed, into science fiction films. Examples: the dogfights between earth “fighter rockets” and alien spacecraft in the Battle of Outer Space (1959); the escalating firepower in the successive assaults upon the invaders in The Mysterians, which Dan Talbot correctly described as a nonstop holocaust; the spectacular bombardment of the underground fortress in This Island Earth.
Yet at the same time the bellicosity of science fiction films is neatly channeled into the yearning for peace, or for at least peaceful coexistence. Some scientist generally takes sententious note of the fact that it took the planetary invasion or cosmic disaster to make the warring nations of the earth come to their senses, and suspend their own conflicts. One of the main themes of many science fiction films—the color ones usually, because they have the budget and resources to develop the military spectacle—is this UN fantasy, a fantasy of united warfare. (The same wishful UN theme cropped up in a recent spectacular which is not science fiction, Fifty-Five Days at Peking. There, topically enough, the Chinese, the Boxers, play the role of Martian invaders who unite the earthmen, in this case the United States, Russia, England, France, Germany, Italy, and Japan.) A great enough disaster cancels all enmities, and calls upon the utmost concentration of the earth’s resources.
Science—technology—is conceived of as the great unifier. Thus the science fiction films also project a Utopian fantasy. In the classic models of Utopian thinking—Plato’s Republic, Campanella’s City of the Sun, More’s Utopia, Swift’s land of the Houyhnhnms, Voltaire’s Eldorado—society had worked out a perfect consensus. In these societies reasonableness had achieved an unbreakable supremacy over the emotions. Since no disagreement or social conflict was intellectually plausible, none was possible. As in Melville’s Typee, “they all think the same.” The universal rule of reason meant universal agreement. It is interesting, too, that societies in which reason was pictured as totally ascendant were also traditionally pictured as having an ascetic and/or materially frugal and economically simple mode of life. But in the Utopian world community projected by science fiction films, totally pacified and ruled by scientific consensus, the demand for simplicity of material existence would be absurd.
But alongside the hopeful fantasy of moral simplification and international unity embodied in the science fiction films, lurk the deepest anxieties about contemporary existence. I don’t mean only the very real trauma of the Bomb—that it has been used, that there are enough now to kill everyone on earth many times over, that those new bombs may very well be used. Besides these new anxieties about physical disaster, the prospect of universal mutilation and even annihilation, the science fiction films reflect powerful anxieties about the condition of the individual psyche.
For science fiction films may also be described as a popular mythology for the contemporary negative imagination about the impersonal. The other-world creatures which seek to take “us” over are an “it,” not a “they.” The planetary invaders are usually zombie-like. Their movements are either cool, mechanical, or lumbering, blobby. But it amounts to the same thing. If they are non-human in form, they proceed with an absolutely regular, unalterable movement (unalterable save by destruction). If they are human in form—dressed in space suits, etc.—then they obey the most rigid military discipline, and display no personal characteristics whatsoever. And it is this regime of emotionlessness, of impersonality, of regimentation, which they will impose on the earth if they are successful. “No more love, no more beauty, no more pain,” boasts a converted earthling in The Invasion of the Body Snatchers (1956). The half earthling-half alien children in The Children of the Damned (1960) are absolutely emotionless, move as a group and understand each other’s thoughts, and are all prodigious intellects. They are the wave of the future, man in his next stage of development.
These alien invaders practice a crime which is worse than murder. They do not simply kill the person. They obliterate him. In The War of the Worlds, the ray which issues from the rocket ship disintegrates all persons and objects in its path, leaving no trace of them but a light ash. In Honda’s The H-Men (1959), the creeping blob melts all flesh with which it comes in contact. If the blob, which looks like a huge hunk of red jello, and can crawl across floors and up and down walls, so much as touches your bare boot, all that is left of you is a heap of clothes on the floor. (A more articulated, size-multiplying blob is the villain in the English film The Creeping Unknown.) In another version of this fantasy, the body is preserved but the person is entirely reconstituted as the automatized servant or agent of the alien powers. This is, of course, the vampire fantasy in new dress. The person is really dead, but he doesn’t know it. He’s “undead,” he has become an “unperson.” It happens to a whole California town in The Invasion of the Body Snatchers, to several earth scientists in This Island Earth, and to assorted innocents in It Came from Outer Space, Attack of the Puppet People (1961), and The Brain Eaters (1961). As the victim always backs away from the vampire’s horrifying embrace, so in science fiction films the person always fights being “taken over”; he wants to retain his humanity. But once the deed has been done, the victim is eminently satisfied with his condition. He has not been converted from human amiability to monstrous “animal” blood lust (a metaphoric exaggeration of sexual desire), as in the old vampire fantasy. No, he has simply become far more efficient—the very model of technocratic man, purged of emotions, volitionless, tranquil, obedient to all orders. The dark secret behind human nature used to be the upsurge of the animal—as in King Kong. The threat to man, his availability to dehumanization, lay in his own animality. Now the danger is understood as residing in man’s ability to be turned into a machine.
The rule, of course, is that this horrible and irremediable form of murder can strike anyone in the film except the hero. The hero and his family, while grossly menaced, always escape this fate and by the end of the film the invaders have been repulsed or destroyed. I know of only one exception, The Day That Mars Invaded Earth (1963), in which, after all the standard struggles, the scientist-hero, his wife, and their two children are “taken over” by the alien invaders—and that’s that. (The last minutes of the film show them being incinerated by the Martians’ rays and their ash silhouettes flushed down their empty swimming pool, while their simulacra drive off in the family car.) Another variant, an upbeat switch on the rule, occurs in The Creation of the Humanoids (1964), where the hero discovers at the end of the film that he, too, has been turned into a metal robot, complete with highly efficient and virtually indestructible mechanical insides, although he didn’t know it and detected no difference in himself. He learns, however, that he will shortly be upgraded into a “humanoid” having all the properties of a real man.
Of all the standard motifs of science fiction films, this theme of dehumanization is perhaps the most fascinating. For, as I have indicated, it is scarcely a black-and-white situation, as in the vampire films. The attitude of the science fiction films toward depersonalization is mixed. On the one hand, they deplore it as the ultimate horror. On the other hand, certain characteristics of the dehumanized invaders, modulated and disguised—such as the ascendancy of reason over feelings, the idealization of teamwork and the consensus-creating activities of science, a marked degree of moral simplification—are precisely traits of the savior-scientists. For it is interesting that when the scientist in these films is treated negatively, it is usually done through the portrayal of an individual scientist who holes up in his laboratory and neglects his fiancée or his loving wife and children, obsessed by his daring and dangerous experiments. The scientist as a loyal member of a team, and therefore considerably less individualized, is treated quite respectfully.
There is absolutely no social criticism, of even the most implicit kind, in science fiction films. No criticism, for example, of the conditions of our society which create the impersonality and dehumanization which science fiction fantasies displace onto the influence of an alien It. Also, the notion of science as a social activity, interlocking with social and political interests, is unacknowledged. Science is simply either adventure (for good or evil) or a technical response to danger. And, typically, when the fear of science is paramount—when science is conceived of as black magic rather than white—the evil has no attribution beyond that of the perverse will of an individual scientist. In science fiction films the antithesis of black magic and white is drawn as a split between technology, which is beneficent, and the errant individual will of a lone intellectual.
Thus, science fiction films can be looked at as thematically central allegory, replete with standard modern attitudes. The theme of depersonalization (being “taken over”) which I have been talking about is a new allegory reflecting the age-old awareness of man that, sane, he is always perilously close to insanity and unreason. But there is something more here than just a recent, popular image which expresses man’s perennial, but largely unconscious, anxiety about his sanity. The image derives most of its power from a supplementary and historical anxiety, also not experienced consciously by most people, about the depersonalizing conditions of modern urban society. Similarly, it is not enough to note that science fiction allegories are one of the new myths about—that is, ways of accommodating to and negating—the perennial human anxiety about death. (Myths of heaven and hell, and of ghosts, had the same function.) Again, there is a historically specifiable twist which intensifies the anxiety, or better, the trauma suffered by everyone in the middle of the 20th century when it became clear that from now on to the end of human history, every person would spend his individual life not only under the threat of individual death, which is certain, but of something almost unsupportable psychologically—collective incineration and extinction which could come any time, virtually without warning.
From a psychological point of view, the imagination of disaster does not greatly differ from one period in history to another. But from a political and moral point of view, it does. The expectation of the apocalypse may be the occasion for a radical disaffiliation from society, as when thousands of Eastern European Jews in the 17th century gave up their homes and businesses and began to trek to Palestine upon hearing that Shabbethai Zevi had been proclaimed Messiah and that the end of the world was imminent. But peoples learn the news of their own end in diverse ways. It is reported that in 1945 the populace of Berlin received without great agitation the news that Hitler had decided to kill them all, before the Allies arrived, because they had not been worthy enough to win the war. We are, alas, more in the position of the Berliners than of the Jews of 17th-century Eastern Europe; and our response is closer to theirs, too. What I am suggesting is that the imagery of disaster in science fiction films is above all the emblem of an inadequate response. I do not mean to bear down on the films for this. They themselves are only a sampling, stripped of sophistication, of the inadequacy of most people’s response to the unassimilable terrors that infect their consciousness. The interest of the films, aside from their considerable amount of cinematic charm, consists in this intersection between a naively and largely debased commercial art product and the most profound dilemmas of the contemporary situation.
Busy last week or so, plus came down with a bug, and now Alvaro and I are off to discover the New World. Will be eating my mom’s cooking till further notice.
Own-use production work
116. Production of goods and services for own final use is one of the oldest forms of work. Prior to the spread of markets for goods and services, households mainly produced their own food, shelter and other necessities, caring for the household members, premises and durables. As these products have become increasingly available through markets, the prevalence of production for own final use has steadily declined. Nonetheless, it remains widespread in countries at different levels of development. Such production, as in subsistence agriculture, continues to be central to survival in impoverished and remote areas throughout the world and is also a common strategy for supplementing household income, as in the case of kitchen gardens in many urban and rural areas alike. In more developed settings and among higher income groups, it predominantly covers unpaid household services, do-it-yourself work, crafts, backyard gardening and suchlike. (Report II Statistics of work, employment and labour underutilisation, ILO 2013)
Last Friday I went to have a coffee and a talk with Sophia Lawrence, a recently retired statistician for the International Labour Organisation. We met thanks to her daughter, a friend of mine who told me that for the good of my research I should talk to her mom. How right she was. Below is the transcript of our discussion as it related to my interest in the aforementioned form of work that I now know labor statisticians call own-use production.
SOPHIA: I’m so happy to hear that there are young people thinking about these things, because this is something I’ve been trying to push through the UN system for years now. I was a statistician with the International Labour Office, so with the agency that’s responsible for setting international standards on labor statistics. What we actually adopt are resolutions. They’re not legally binding, unlike the conventions of the UN, but they do set up standards and best practices for labor. There are seven core conventions on labor which, if you become a member of that agency, are, you could say, the basic rules of labor.
ME: Is the US a member?
SOPHIA: Yes, and the US has adopted the fewest conventions. The US, Saudi Arabia, and one other that’s slipped my mind. It’s very sad, pathetic really. Anyway, those are the conventions of the ILO, and those are ratified and do become law. Resolutions, on the other hand, in statistics, are a good best practice, and they do really help countries to align themselves to a system, but they are not ratified and they are not binding. Nonetheless, in the statistical world, we do have a very strong weight with countries, and they all do look to these standards, because they are established on the basis of best practice in the countries themselves.
So, unfortunately, until 2013 most of the resolutions on statistics were very much in line with the problem you’re working on. Our resolution on work statistics has just changed, and the missing part that you’re looking at had also been missing in the resolutions. The simple definition of employment was very much based on GDP, based on the so-called idea of production, which was minus most of the kinds of contributions you’re looking at. It made sense to align employment with GDP calculations because you want to know what’s going into making those goods that you’re qualifying as being part of national production. However, because national production was ignoring all unpaid household work, all volunteer work, for example, employment was ignoring it, too. Which, in the end, we’ve decided is actually an okay thing — employment is what it is — but we have now said, employment is not all work. In 2013 we finally got a new resolution on work statistics adopted, which is bigger than employment and unemployment, and looks into and defines all those types of contributions that interest you, and others.
That doesn’t mean that the world today is beginning to measure all this, though some countries have been measuring it already. But the standards and objectives are there, and countries should start working on changing their national statistical programs. Because of course, it’s a question of how do you measure it, and that will require a certain amount of input, and financial input, for countries to change their surveys, their questionnaires, to begin to address these other issues. In the resolution we made it quite forceful, and it became a bit more watered down through the negotiation process in the conference of labor statisticians — which takes place every five years and all member states get together, with their national statistics office representatives, and we debate — so it became watered down to some extent, a bit forced by the industrialized countries, which already have strong systems [for labor statistics] put in place. And statisticians can be very conservative people, so it’s been a battle to change their ideas. But now that resolution is out there and that’s what I would recommend you read.
We created new categories of work, and so everything contributes to production. Everything has a value. Everything is worthwhile. In countries that have already been measuring the contribution of this type of work to the GDP — take Mexico, take the Scandinavian countries, the US, Australia, other countries as well — they’ve found that that type of production, that component of the economy which until now has been basically invisible or marginalized, ignored, represents over 30% of production.
For example, in Mexico in 2008 they measured all that contribution of unpaid household services, of volunteer work, etc., and they found that it was higher, in terms of GDP, than their petroleum exporting industry, which is what they’re known for.
ME: How do they quantify — well, how do they quantify anything — but how do they quantify and calculate something like that, what all household work, volunteer work, produces in terms of revenue toward GDP?
SOPHIA: There are different types of calculations that can be done. If you’re looking for more information on this when you look things up, usually what it’s called is “satellite accounts” — there are “national accounts” and “satellite accounts,” which I refused to use as a term. It is used in the resolution but I put it in quotation marks because it gives this sense that the World is here, and then there are all the little satellites.
And so the satellite accounts have been measured in countries, through various evaluation methods. There are three types of ways to go about that. They’re quite complex all of them, but basically one is that you can take a replacement value for each of the types of activities.
ME: So like what a maid would be paid –
SOPHIA: A maid, a cook, a seamstress, all of those. Of course that would give you a very high value because you’re combining so many occupations, and certain occupations like chef would be paid more than a maid who’s doing some cooking on the side. You could take those different values, or else you could just take an overall value for the lowest-level maid type of activity or different levels of maid activities. The third evaluation method is giving a replacement value for the outputs of work. What many countries have been complaining about is that statisticians will give a value to this or that individually, not taking into account production that’s taking place at the same time — multitasking. How can you give a value to things that are being done at the same time? They have the same value as tasks that are being done separately.
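To make the replacement-cost idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the activities, hours, and wages are my own, not ILO data or an ILO method): the specialist variant prices each activity at the wage of the matching occupation, while the generalist variant prices all hours at a single housekeeper-type wage.

```python
# Toy replacement-cost valuation of unpaid household work.
# All names and numbers here are illustrative, not ILO data.

hours_per_week = {"cooking": 10, "cleaning": 8, "childcare": 20}

# Specialist variant: each activity priced at its occupation's wage.
specialist_wage = {"cooking": 15.0, "cleaning": 12.0, "childcare": 14.0}

def specialist_value(hours, wages):
    """Sum of hours * matching occupational wage, per activity."""
    return sum(hours[activity] * wages[activity] for activity in hours)

# Generalist variant: all hours priced at one housekeeper-type wage.
def generalist_value(hours, wage=12.0):
    return sum(hours.values()) * wage

print(specialist_value(hours_per_week, specialist_wage))  # 526.0
print(generalist_value(hours_per_week))                   # 456.0
```

As Sophia points out, the specialist variant tends to come out higher, since occupations like chef are paid more than a general housekeeper doing some cooking on the side.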
But people who are employed in occupations that involve supplying a service — an accountant helping you with your taxes, for example — countries know perfectly well what that equals in terms of money. This complaint that it’s hard to give a value for a service is baloney. It’s put there as an excuse because a) it’s not considered to be important work, b) since much of this work is being done within a household, it’s more difficult to send someone in and say “quantify that.”
There is an evaluation technique, a survey technique, for households called a time-use survey, and that’s when they go with a questionnaire divided into 15-minute portions of time, and they have you recording — or they record you — what you’re doing. This will happen over a 24-hour period, and they’ll sometimes do it on a weekday and a weekend. Time-use surveys are fascinating because they recognize, in developing countries for example, that so much production is going on but is not accounted for because it was in no way considered employment. It wasn’t paid for by somebody else. It was invisible.
In sub-Saharan Africa, because so many people at the household level, women and children, are illiterate, what they did was a very expensive means of carrying out the survey: they sent a person who stayed and lived with a family for a week, and they would observe and write it all down. All this started giving some really interesting results, and of course the poorer you are the more time you generally spend on this kind of work, because you don’t have the technology to support you.
ME: And you can’t just go out and buy it. You have to make it. Or else first grow it, then make it.
SOPHIA: Or feed it, then kill it, then pluck it, then cook it. Can you imagine? Instead of just going to the supermarket and buying a filet of chicken. Some kids here don’t even know that a filet of chicken comes from a real chicken.
So these types of time-use surveys were being developed, and those are what most countries that did satellite accounts used to figure out what people were involved in, and then how it all measured up.
You can no longer say it’s not possible to do it. Of course it’s possible to do it, and countries have to start thinking in terms of giving value, giving data, recognizing the status of this work. One element that I’m also really, really concerned with, is that recognition is not enough. From recognition we have to go toward what I term as — it’s not my own invented word, I heard it in the Latin American region — co-responsibility. Men and women in households and in society have to begin to take co-responsibility for all these functions, or for whatever stems from these functions. Until that happens it’s going to always be marginalized. One of the reasons why I think it’s so important is that in certain occupations considered in one context to be high-status and high-paying — take lawyers — as soon as women begin to “invade” that occupation, guess what? The status goes down and the pay goes down. It’s no longer such a desired occupation.
ME: You used the term “invisible labor” before — is that a term the ILO uses?
SOPHIA: No. We used to use “unemployment,” talking about all the people who did some kind of production but were still looking for work, but we’ve gone beyond that, trying to get rid of that, because that was vocabulary which was accepting the status quo.
ME: Right, because it’s saying somehow that you’re not really employed. You’re employed in the sense that you’re active, but not active in the way most people understand the word, as in employed for pay.
SOPHIA: Visible / invisible… In a way it’s not a bad idea [for a terminology], but it is actually visible. It’s visible all over the world.
ME: But it’s invisible in the accounting.
SOPHIA: Exactly. But then using that term gives the idea that if it’s invisible it’s because it can be invisible, it could be forgotten, because it could be marginalized, because it’s not important. And in human connections and communication, words mean something.
ME: Even the word “visible,” too, is a problem because it makes it sound like because it’s visible it’s valued.
SOPHIA: Exactly, right.
ME: So what is the term, then?
SOPHIA: We call everything “work.”
Just, “work.” It’s all work. In the resolution on work statistics we break it down into types of work. We kept the word employment because everybody knows it but we’ve changed the definition.
ME: And so the word “work” applies to all the sorts of work we’re talking about, but then also to different kinds of salaried work…
SOPHIA: Well here, I brought this (pulls out the report and starts flipping through it). There’s one chart … This [on page 17] is one chart that’s interesting. So, we say the whole population is this [indicates heading "Total Population"], then there’s the working age population, however that’s defined in the country, and that brings up the question of needing to recognize that children are working, producing, etc. Even if you don’t condone it, you still need to measure it.
Then the “Labor Force” is the employed and the unemployed, and we’ve broken down those categories into others. Time-related Underemployment — that means they don’t have enough work.
ME: Part-time and not making enough money to get by.
SOPHIA: Mm hmm. And then there’s the Potential Labor Force, people who are “seeking” but they’re not necessarily available. With the way the statistics of the labor force have been measured, it’s been very much based on the fact that men are available immediately, pretty much, because that’s how society works. Therefore they [statisticians] imposed the same sort of strict requirements on everyone. And of course, women have children usually. They can’t just leave their children from one day to the next to start a new job. They have to make provisions for their children. Of course men do, too, especially in developing countries. They have to find the money to buy the uniform, or they have to figure out how they’re going to get from home to where they’re going to work.
This issue of availability was not recognized in the old standards, because it was very much based on the idea of an industrialized society where everybody is “available” — you jump on the truck to go to work, you get a car, you’re available. Your wife was taking care of the children, or like in the USSR, the system was taking care of your children. But that did not take into consideration all these different categories of persons. So we’re saying there is a potential labor force that could easily go into these categories if the necessary infrastructure is there. Very few people now are outside of the labor force.
ME: I was just going to ask, actually, because looking at this — so under “Potential Labor Force,” and “seeking” and “not available” and “not seeking,” etc. etc. This to me means “unemployed” so what’s the difference between someone unemployed and not seeking work, and someone outside the labor force?
SOPHIA: Because “unemployed” is actually a very strict definition. You have to fulfill certain criteria, otherwise you’re not considered as being within unemployment.
A quick aside, the ILO’s definition of “unemployment”:
The unemployed comprise all persons above a specified age who during the reference period were:
- without work, that is, were not in paid employment or self-employment during the reference period;
- currently available for work, that is, were available for paid employment or self-employment during the reference period; and
- seeking work, that is, had taken specific steps in a specified recent period to seek paid employment or self-employment.
The specific steps may include registration at a public or private employment exchange; application to employers; checking at worksites, farms, factory gates, market or other assembly places; placing or answering newspaper advertisements; seeking assistance of friends or relatives; looking for land, building, machinery or equipment to establish own enterprise; arranging for financial resources; applying for permits and licences, etc.
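The strictness Sophia mentions comes from the definition being a conjunction: all three criteria have to hold at once for the reference period. A minimal sketch in Python (my own illustration, not ILO code):

```python
# Illustrative only: the ILO "unemployed" category requires all three
# criteria to hold during the reference period.

def is_unemployed(without_work: bool, available: bool, seeking: bool) -> bool:
    """Strict conjunction: fail any one criterion and you fall outside
    'unemployment' (though not necessarily outside the potential labor force)."""
    return without_work and available and seeking

# Without work and actively applying to employers, but not currently
# available (say, no childcare arranged yet) -> not counted as unemployed:
print(is_unemployed(True, False, True))  # False
```

This is exactly the availability trap discussed below: someone caring for children full-time who is seeking work but cannot start tomorrow fails the second criterion and drops out of the "unemployed" count.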
This then is how we came up with a way of redefining everything, a revised classification of persons. So you have your total population, like I said, with people engaged in a variety of productive activities, meaning any kind of production. Then we have people who are “exclusively in non-productive activities,” [See far right of the chart] and you almost find nobody there. People who are rich and don’t work and don’t do anything. Some people who are severely handicapped and can’t do anything. Old-age pensioners who don’t do anything anymore — but even then, they might knit a sweater or some sort of activity like that, right? So there are really very few people who are not doing anything, not engaged in the SNA [System of National Accounts -- the standards for how to compile economic statistics], neither seeking nor available for work, etc. All these other persons [indicating the other sections of the chart] who are doing something, they’re engaged either in so-called productive activities within the SNA — that’s the GDP calculation that is excluding all this so-called household production work — and so then we say they’re in employment, that means they’re working for pay or profit. Or they’re in own-use production work. So this new term that you were asking me what it’s called, we call it “own-use production work.”
People who are used to working within the labor force framework have to turn their minds around. I’ve found that the majority of people who were not used to the old system found this new proposal intuitively quite correct. You have to get over the resistance of holding on to the previous way.
So, own-use production work. And then there’s volunteer work. We put those two together.
ME: And why were those two put together?
SOPHIA: Well, volunteer work is not for your own use, it’s for the use of others. It could be contributing to the production of goods or services, which is usually what they do, but they’re not receiving any pay or profit for it, that’s the big distinction with volunteers.
And another thing, it used to be that if you were in employment and had one hour of work for the reference period [for the statistical data gathering] which was either a day or a week, then you could not be simultaneously in unemployment, because they had these priorities for measurement purposes. So we say, okay, somebody who has only one hour of work is not fully employed and therefore obviously is probably looking for more work at the same time. What you can see here with the new classifications for statistics is an innovation in that you can be in employment but you can also be doing your own production work, or volunteering. Then in this whole group of persons who are doing own-use production work and volunteering, they could also be looking for work on the side. Or you could be unemployed and also doing some sort of production. We’re trying to capture the manifold types of activities that humans do, and especially humans in a system which is not all employed from 9 to 5, because that’s disappearing even in the First World.
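One way to picture the innovation Sophia describes: instead of forcing each person into a single category, the new classification lets forms of work overlap. A sketch of that data model (my own modelling for illustration, not the resolution's actual schema):

```python
# Sketch: under the 2013 framework a person can be engaged in several
# forms of work at once, so modelling each person's work as a set
# makes the overlap explicit. (My own illustration, not an ILO schema.)

from dataclasses import dataclass, field

WORK_FORMS = {"employment", "own-use production work", "volunteer work"}

@dataclass
class Person:
    name: str
    work: set = field(default_factory=set)   # any subset of WORK_FORMS
    seeking_more_work: bool = False          # can hold alongside any of them

# One hour of paid work no longer hides the rest of what someone does:
p = Person("example", {"employment", "own-use production work"},
           seeking_more_work=True)
print(p.work <= WORK_FORMS)  # True
```

Under the old priority rules this person would have been recorded simply as "employed"; here the own-use production and the job search on the side stay visible in the same record.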
ME: I have a question, not at all tongue in cheek, I’m just curious. I’m looking at the headings saying productive activities and non-productive activities, and I’m interested in what falls under productive, what is considered a productive activity. For example, a politician works, but doesn’t actually produce anything. Or there are other examples. I was talking about this with my husband last night, thinking of trades and jobs that people do that don’t actually produce anything that we need. And how many different jobs are there in this world that are just that. Someone making a useless product, and then someone doing marketing for that useless product. But that person is still then producing something that is then quantified in GDP calculations, but a politician doesn’t produce anything. What’s considered productive then?
SOPHIA: Well, politicians provide a service. Production is goods and services, all activities that lead to, either directly or indirectly, the production of goods and services. That’s the SNA definition. But then what they did was they took this statement apart, saying that, actually, some goods are not included and some services are not included.
You’re interested in the sort of survival work, building your own house, etc. That kind of activity, even in the past, was considered part of SNA, was part of production, whereas cooking meals, preparing food, was not. Clearly they’re both productions, so many people have been asking for many, many years, why this distinction? How did they come up with that? Basically, little old men in the UK, in the USA, France, Germany, the big nations, thought that, well, building a house –
ME: is man’s work –
SOPHIA: Yes, and it’s big, it’s a structure. And the rest of the work was housewives’ work. It’s very much a simplistic representation of reality, and yet we [statisticians] have been turning around in circles trying to find things now to fit that stupid standard, which is entirely unrealistic. And people got very comfortable with that gymnastics, thought it was normal, so when we came along and tried to change it, they thought No!
Now, there are measurement issues, it’s not passed into law, but that’s the whole purpose of the resolution, and that’s the purpose of my colleagues who will continue on with the work.
ME: Is there any sort of itemized list for what counts as goods and services? You used the example of an elderly retired lady knitting something. Is knitting actually considered then a productive activity?
SOPHIA: Mm hmm.
SOPHIA: There are lists of what is production, if we look here together at the resolution text. So, own-use production. Of services, of goods. Goods could be the sweater, it’s for yourself. Services, that’s the cooking and the cleaning. Funnily enough in the definition of services, a meal is considered a service. It’s not a good.
ME: That surprises me, actually, because it is a food product.
SOPHIA: Yeah. That was done because it’s easier to exclude then. They put the house as good, but the food is not. The food is a service. Which is baloney. There are all these excuses.
119. It has been argued that an advantage of treating own-use production of goods and of services as a single form of work is that it will be less likely for household production to be omitted during data collection than is the case at present (Goldschmidt-Clermont, 2000). Collection of the information by activity clusters, as recommended, will also reduce the problem of having to establish a boundary between goods and services. For example, fetching firewood, the processing of food for preservation, making butter or cheese, husking rice, slaughtering animals and grinding grain are all considered as production of goods, while cooking a meal is a service on the grounds that the meal is consumed immediately. In practice, the dividing line between cooking and these other activities is often difficult to draw, especially where fresh food is prepared daily. Similarly, construction and improvement of one’s dwelling is considered as fixed capital formation and thus included within the SNA production boundary, whereas smaller repairs are viewed as services and hence excluded. Yet it is difficult to distinguish between repair, improvement and construction, particularly where dwellings are built of materials such as mud, palm, wood and other perishables (Anker, 1983). (Report II, p.28)
ME: A lot of this [examples of work given under the report's own-use production heading] is household sorts of tasks that women typically do. I was thinking before coming here about what I would consider “invisible” work, and I thought of graduate students and interns. Is that also considered in any of the new categories?
SOPHIA: For trainees, that was a big contention between countries — Australia, UK, others, their trainee and apprenticeship systems are very, very formalized and integrated, and they said there’s no way they would exclude those from their statistics because it would bring their employment rates down and their unemployment rates up. But people who are in training, they’re not paid. Interns are basically free workers. Unfortunately they got considered to be in employment, because supposedly what they’re being “paid” in is experience.
Trainees, a lot of times even in very informal systems, in sub-Saharan Africa for example, they get food, they are sometimes paid in a little bit of something. Volunteers for the International Red Cross, they do get something, a stipend sometimes. A lot of times that’s another problem with volunteer workers, that in the West what you might get is compensation for going and doing a particular task or project, and it’s sometimes higher than the salaries of government people in very poor countries. So what’s work, what’s volunteer work — the national context of wage levels etc. comes into play. So the idea is that volunteer pay should take into account the average local wages.
ME: How did you get interested in this?
SOPHIA: Well, I think I started to get into this because I’m a feminist from way back. For me, injustice against the poorest exists and is worrisome, but more so is the injustice done against women whether they’re rich or poor. And of course the worst is the poor women. If you look at a hierarchy anywhere in the world, whatever category you’re going to put down the line, it’s the women in that category who will be the least well-off, invariably. Women are more than half the world’s population, and so addressing this is fundamental. No matter what epoch you’re talking about, there has always been discrimination against women. I’ve gone to many parts of the world and given talks or done trainings, and I often like to take statistics with me that show how women are faring in the so-called developed world. People are shocked to see what’s happening in Scandinavia, Sweden, the UK, where they thought everything for women was fine. When you show the sort of basic statistics of employed, unemployed, household labor as it was called in the old days, it’s true that women are working less in employment for pay or profit, and that women who are in employment are more absent from work than men, and men work longer hours, so it seems like men are “carrying the burden of production,” and that women are having an easy time of it.
ME: But in a lot of cases men are only able to do that — work longer hours — because their partner is home taking care of things, children, etc.
SOPHIA: Exactly. In statistics you need to break down the data because otherwise statistics are meaningless. For example: household composition. Women in households with children below age seven have much higher absentee rates because — guess what — they take care of the kids when they’re sick, whereas the men continue working. Calculate what women are doing per day, their own-use production work — including caring for children, caring for the elderly — and their employment work if they have it. All of this counts. [...]
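Sophia's point about disaggregation can be shown with a toy example. The records below are entirely made up for illustration (the field names and rates are mine, not ILO data); the technique — grouping absentee rates by sex and household composition instead of reading one aggregate number — is what she's describing:

```python
# Illustrative only: made-up records showing why an aggregate statistic
# misleads until you break it down by sex and household composition.
from collections import defaultdict

records = [
    # (sex, has_child_under_7, was_absent_from_work)
    ("F", True, True), ("F", True, True), ("F", True, False),
    ("F", False, False), ("F", False, False),
    ("M", True, False), ("M", True, False), ("M", True, True),
    ("M", False, False), ("M", False, False),
]

def absentee_rate(rows):
    """Share of records marked absent."""
    return sum(1 for *_, absent in rows if absent) / len(rows)

# The aggregate rate hides the pattern:
print(f"overall: {absentee_rate(records):.0%}")

# Disaggregated, the pattern appears: among parents of young children,
# the (made-up) absences fall disproportionately on the women.
groups = defaultdict(list)
for sex, child, absent in records:
    groups[(sex, child)].append((sex, child, absent))
for key, rows in sorted(groups.items()):
    print(key, f"{absentee_rate(rows):.0%}")
```

Same data, two readings: the overall rate says nothing, while the grouped rates make the caregiving asymmetry visible — which is exactly why she insists the data must be broken down.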
We soon embarked on a long and very interesting side discussion about water sanitation as a fundamental feminist issue, especially as it concerns menstruation. It was fascinating but pretty well outside the scope of this blog, and this transcription work is pretty rough because there’s all sorts of background noise in the recording. (I need to get a decent sound recorder.) So do what she suggested I do, and read Gloria Steinem’s “If Men Could Menstruate,” and also this and this, for starters.
ME: I’d like to know what you think about something. There’s … I don’t know that I would call it a movement, because I’m not sure it’s big enough, but there’s an idea I’ve seen being floated around in some of my reading, called “radical housewifery” or “radical homemaking.” It’s the idea that this sort of work, keeping house, can be done as a radical economic act, which goes against what a lot of women think, that you have to be working outside the home in order to be a “feminist.” When my mother had me, it was 1981. She’s a feminist and she always has been. She was a teacher and worked up until she was eight months pregnant or so, and then she went on maternity leave, intending to go back to work. She said that when she saw me she changed her mind. It was a hard decision, not one that was easy financially, but she still felt like for stability and emotions and whatnot that she would stay home for an undetermined amount of time. And her very, very good friend and co-conspirator in so much was horrified, and would constantly send her job announcements.
SOPHIA: That’s exactly this kind of thinking that status can only come from activities outside the home.
ME: Right. So related to that, this whole idea of “radical homemaking” goes against that idea, says that action can come from the home. But the thing is, I don’t know that it’s critical enough of itself. This decision to be a one-income family, or a half- or no-income family and then we’ll try to produce everything we need. My problem is that I don’t think it takes into consideration that a lot of times a person’s only political voice is in her workplace, and so if you remove yourself from the position of being a woman in a male-dominated workforce, I think you lose your agency in a sense.
SOPHIA: Well, I think it’s unfortunate — and I’ve seen this with a lot of well-intentioned people, myself included — that in proposing something new we still tend to accept as a fait accompli the dominant situation as being the right one. I read something, someone in politics, an American woman, who published this thing saying “Yes We Can,” or “No We Can’t,” or something like that — you know this Yes We Can business, and she said that no, actually, we can’t have it all, we can’t do the high-powered job and raise the kids and whatever else. [I was at first thinking that she was talking about Anne-Marie Slaughter but Slaughter addresses a lot of the systemic failures we were discussing, so I'm at a loss now... Will look into this.] It seems to me that she wasn’t thinking critically about how the system functions already, and she was kind of accepting still that women have the dominant role in taking care of the children, and that comes down to a relationship of co-responsibility. Until we’ve really internalized that, it’s true, no matter what you do you’re going to marginalize yourself, you’re going to become poorer, you’re going to lose your voice, and you’re going to undermine your own confidence because you’re not being given status because you’re not considered as because because because, right? So … radical housewifery. House husbandry, that means something different, doesn’t it? Again, this is the status that is given to words, and we need to find new words. So this is radical own-use production work.
It’s all a question of power. We give power to all these things, who has the money, ideas and words we have about status — but it has to become a co-responsibility. Whatever contribution to GDP is being produced has to be equally recognized.
Thanks to stuff that went wrong with clay casserole breads #1 and #2, I’ve learned a thing or two and was feeling pretty good going into #3. It didn’t disappoint. Pat on the back in order. My bread quest will never be complete and I will never stop experimenting (and failing), but I think I’ve got a pretty decent formula and (very flexible) routine down now. This makes me feel happy and capable, and I believe this clay casserole thing was the clincher in the whole game. Bread #3 is basically my standard bread that I’ve been making for a while without the casserole, and with different hardware it comes out way, way better: soft and light on the inside with a thin, very crispy crust. I wish I’d gotten the casserole sooner, but then maybe this discovery wouldn’t be as gratifying.
The bread pictured above came out of the oven this morning. I made the sponge on Friday morning, but then didn’t get to making the dough that day, so I stuck it in the fridge and did it Saturday morning, the very idea of which would probably make a lot of bread people croak — 24 hour sponge fermentation??? In the fridge?? The horror! It’s going to be too sour! … But it wasn’t. (Tangent: when I made the sponge on Friday, since my starter was out anyway I decided to attempt pita bread for the first time in many years and it ROSE! I used the recipe linked to there, plus tried out a tip I got from an Armenian baker I interviewed last year about lavash — crank the oven heat all the way up, but set it to heat from the top of the oven only. This was the first time I tried doing that, and the first time I succeeded in getting pita bread to puff up large and have fillable pockets. It was exciting.)
Once I finally made the dough I let it rise from Saturday morning till later in the afternoon, and upon realizing that I was not going to get around to baking it the same day I decided to stick it in the refrigerator, again, to retard the fermentation. It stayed there through Sunday morning and afternoon. Sunday evening when I got back from picking apples at Utopiana, Alvaro and I were both starving and lazy, and behold, there was bread dough in the fridge, so we pulled it out and made some killer pizzas with half of it. I put the remaining dough back in the fridge, and this morning used it for its intended purpose, to make bread.
I feel very unscientific whenever I read things online about amateur bakers and their carefully measured bread, because the descriptions of the process are often centered around indecipherable charts of hydration ratios and a bunch of other stuff that I know is important to a good loaf of bread, but I don’t bother with that level of exactness. I like the idea of being precise and scientific about things, but I’m not a precise kind of person, and I’ve realized that although fastidiousness in record-keeping may be essential if you’re looking to bake identical loaves of bread, it’s really not important at all if you’re just looking to make good bread. I kept a notebook briefly when I first started getting serious with sourdough, as I was under the influence of the Tartine book at the time, but then I realized I was getting distracted by what the numbers in my notebook said and not paying attention to what my head said. So I stopped it. Now — and I’m by no means an expert, whatever that means, “expert” — I know when a dough is too wet or too dry and I don’t need a kitchen scale to tell me. I can also generally tell in advance what a loaf is going to look like when it comes out of the oven based on the way the dough is acting and how it feels before it goes in. This makes me feel very unscientific, like I said, what with all those people out there doing hard bread science, but I’m also puzzled by that style of bread baking. I don’t really get why people keep such close tabs on things, to the point of making charts, unless they just like charts. Bread is so forgiving once you kind of get a hold of the basic stuff, and get to know what dough needs to feel like if you want it to come out a certain way.
The bread above, for example, was basically submitted to my personal whims the entire weekend. If it were a child I would have given it lifelong psychological complexes with all my inconsistency, and it still came out well. Hmm… I think I’ll bake bread… nah, I don’t feel like it, so just hang out for a while, dough … Right, okay now I’m going to bake bread… Ooh but there’s a cumbia DJ at Pointe de la Jonction, never mind! Back in the fridge! … Oh hello bread, I forgot you were there. Hi. I’ll get to you when I get to you…. Actually I think I’ll make pizza with you… Oh wait, there’s actually a lot of dough, so I’ll only make pizza with some of you. … Okay, I’m back, into the oven you go, like I said I was going to do three days ago, before I changed my mind four times.
A few people in my life have asked me for my “bread recipe,” and I always tell them I don’t have one but that we can bake bread some time together and I’ll show them what I do. It always feels kind of snobby when I say this, like: oh darling, my method is so very complex that it can’t possibly be reduced to mere words. You must observe the gesture. … But it kind of is like that when it comes to making bread, except bread’s not complex. It’s very simple, the kind of thing that’s very hard to explain and understand.
Busily transcribing the interview I had this morning with a recently retired International Labour Organisation statistician. Her area was work that has not historically, culturally, statistically been considered “real” work, nor factored into GDPs, i.e., “invisible” economies of goods and services that the ILO as of last fall refers to as “own-use production.” Its recently adopted resolution on work statistics very openly declares own-use production to be considered work. With “own-use production,” we’re talking homesteading, housework, even, to use one of Sophia’s examples, knitting a sweater. With this resolution the International Conference of Labour Statisticians has redefined productive work in a literal sense — as not just production that leads to growth on paper, but also growth in communities, growth in families.
If you’re so inclined you can read the “Resolution concerning statistics of work, employment and labour underutilisation,” published in fall 2013, here. And I hope you’re inclined, because as for me, my mind was fairly well blown by it. It had never occurred to me to double-check what labor statisticians or the ILO considered knitting to be because, well, what else is it but a “hobby”? Hardcore crafters and craftivists, radical homemakers and farmwives, and urban homesteaders aside, the entire world thinks stuff like knitting is for leisure time and bored women, and therefore of no concern to the serious calculations involved in Real Economics. Naturally I assumed that’s what all the big, important international organizations thought, if they even thought about it.
The ILO adopts resolutions for best practices; they’re non-binding, so countries don’t have to ratify anything, though there is some international pressure involved in these things. And in general I don’t hold much faith in UN agencies, bloated and slow-moving as they often are. But labor statisticians calling for a redefinition of work to include, among other things, gardening, cooking, food preservation, home building projects, recycling, knitting, sewing, and caring for children and elderly parents? This is revolutionary. This makes me very happy. The world’s not going to change overnight, but I will very gladly embrace this development.
Our talk ended with a discussion on bread — Sophia’s an avid bread baker who grinds her own grains and experiments wildly with all things flour and yeast. For example: as we were wrapping up, she said, “Oh, I have to show you the bread I just made. Milena said you’re into bread.” (Milena’s her daughter, a friend and former co-worker of mine.) As she was flicking back through the photos on her phone, she told me about how her son had some friends over the other day to hang out and drink a few beers, and several half-empty cans were still in evidence the next day. “It drives me crazy,” she said. “I cannot stand wasting anything,” so she gathered up the cans and looked in her bread books to see how she might be able to repurpose flat beer. Finally settling on a recipe for rye bread that specifically called for flat beer, she mixed the lone soldiers with some buckwheat flour and let it ferment for four days, then mixed this starter with some whole wheat flour, baked it, and came out with two absolutely beautiful loaves. How many facets of awesome did you count in that story?
The rest of our discussion will be up here soon.
This may look like a bread failure:
But I am going to tell you why it is anything but a bread failure.
You see, I think I’ve finally left the realm of the nervous beginner who obsesses about following dictates and worries about making mistakes, gets frustrated at the slightest imperfection in a final product, is impatient for the day when mastery will be reached. I don’t generally like making such bold declarations, but in this case I don’t think I’m overstating things. I really do think I’ve stopped worrying about my bread “failing.” (Maybe because it happens so often and so I’m used to it? Ha.) I’ve realized that even when a loaf comes out of the oven looking absolutely nothing like the pretty loaves of bread in all my cookbooks, it is almost always perfectly edible, and often tastes very good despite appearances. Like the loaf of bread pictured above, for example.
After my first experience baking with the clay cooker, I was excited to try again as soon as possible, figuring that I’d pinpointed three key things to change in order to improve my results (make a smaller loaf so it doesn’t stick to the sides of the cooker, bake at a much higher oven temperature, and take off the top of the cooker earlier on in the baking so the crust can harden more). It’s not often that we can so clearly identify areas of improvement like that, which in itself I think is a sign of progress. The first clay cooker loaf was eaten within three days, and part-way through day two I got another starter going. I didn’t realize I was running so low on flour until then, but the next day when it came time to make the dough I didn’t have time to run out and get more of my go-to flour so I just used what we had, which was about a 1/2 or 2/3 cup of white whole wheat and then enough buckwheat flour to make a dough. It was more buckwheat than wheat, which I knew would result in something weird because buckwheat (which is not actually wheat; it’s related to rhubarb) is gluten free, meaning it’s not going to rise much, and it also absorbs a hell of a lot more water than your typical wheat flour, meaning the dough was going to be sopping wet. However, that’s all I had on hand so I went with it — with glee and abandon no less. This was mad science at its finest.
Here’s the dough after rising overnight:
Thick, gray soup. Impossible to shape into anything remotely resembling a loaf of bread so I had to pour it into the clay cooker like cake batter.
I preheated the clay cooker in the oven at 425F/218C, which is quite a bit hotter than the temperature I normally use to bake bread.
Into the oven it went, for about 15-20 minutes of eager anticipation.
Tick tick tick… I was dying to see what was going on in there. In my dreams the super wet dough was going to create a soft, billowy mass dotted with perfectly placed pockets of air, and I was going to take a photo of it to post all over the internet saying Look At Me and my beautiful bread!!
Then the covered baking time was up and I lifted off the lid to discover this:
I left it to bake for another half an hour or so and then removed it from the oven.
That, my friends, is a very, very flat loaf of bread. (But at least it didn’t stick to the pot.)
I let it cool while I was doing other things, and come lunch time I decided to cut into it lengthwise and try to make a sandwich out of it. This is when I discovered that the interior was actually really nice looking.
More importantly, it was delicious. And, most importantly, this loaf of bread that some might call a failure confirmed that I had been right to crank up the oven heat and take the lid off sooner in the baking. I’ve already got another starter going and will make a dough tomorrow (with my regular flour).
So, in the end, this bread is definitely not winning any ribbons at the state fair, but I’d say that out of all of the several hundred loaves of bread that I’ve baked, this is one of the loaves that has taught me the most. Experimentation and failure for the win.