The City in History (Modern Library Nonfiction #76)

(This is the twenty-fourth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Battle Cry of Freedom.)

I’ve been a city man all my life. Certainly all of my adult life. At the age of twenty, I escaped from the dour doldrums of suburban Sacramento — the kind of hideous Flintstones-style recurring backdrop that seems to encourage broken dreams, angry tears, and rampant abuse behind model home replica doors — for the bright foggy beauty and the joyful pastels of San Francisco.

That gorgeous place from the not so distant past — with the co-op movie theatres playing weirdass indie flicks you couldn’t find on video or teevee, the cafes pattering with political idealism and the streets rattling with the chatty pugnacious jingle of strange conceptual punks, the crumbling encyclopedic bookstores and the boldly strange dive bars of the Tenderloin, and the wonderful mariachi players serenading Valencia Street taquerias for a quick buck, a Mexicoke, and a smile — was exactly the urban realm I needed at the time. Only real souls committed to an increasingly rarefied inclusiveness like Michelle Tea and William T. Vollmann knew how to capture these meat-and-potatoes freak-friendly details in their novels. What I didn’t know, as San Francisco became an unaffordable playground invaded by elitist and not especially perspicacious techbro affluents, was that this coastal metropolis was no longer a place for weirdos like me. I was outpriced and outmatched, like so many who bolted to Oakland, Los Angeles, and elsewhere. It was an all-too-common tale of gentrification and migration, of a city permanently regurgitating its most promising inhabitants and falling victim to an influx of wealth that forever altered its essence. Like any foolish romantic, I fell in love with someone who was absolutely wrong for me and became seduced by the Brooklyn brownstones, the skyscrapers spiring along the rivers, and the giddy pace of a megacity demanding that all of its inhabitants make something of themselves. I’ve been in New York City now for fourteen years — most of my thirties and all of my forties. I hope to continue to live here. But like anything in life, it’s largely the luck of the draw, hoping that the law of averages will work out in your favor. Especially in this age of mass unemployment and pandemic uncertainties, when anybody who doesn’t make more than $200,000 a year is left in the cold and declared the enemy.

I mention these bona fides in advance of my thoughts on the great Lewis Mumford to give you a sense of why his amazing book, The City in History, took me much longer to read than I anticipated. The problem with an encyclopedic smartypants like Mumford is that he’ll drop a casual reference that is supremely interesting if you are even remotely curious. One paragraph will send you down an Internet rabbit hole. The next thing you know, you’ve just spent hours of your life trying to find any information on the ancient Greek artisans who hustled their goods in the agora and why slavery was simply accepted as a part of city life for centuries. An email correspondent, learning that I was taking a deep dive into Mumford, urged me to plunge into the four volumes kick-started by Technics and Civilization. And I have to say, given the many months I spent not so much reading The City in History as taking in the many references orbiting its scholarship, I will probably have to wait until perhaps my seventies — should I live that long — for such an enormous undertaking. I could easily see myself as an old bachelor on a beach — filling in crossword puzzles, tendering stories about my misspent youth to any sympathetic ear, respectfully flirting with any lingering divorcée with the decency to not see me as invisible, and carrying along the four Mumford volumes with me (along with whatever will then pass for a tablet to look up all the references) in a satchel.

This is my roundabout way of saying that Lewis Mumford’s The City in History is a wonderfully robust and often grumbly tome from a dude who spent most of his years considering how cities thrive through technological and architectural development. One of the book’s charms is seeing Mumford gradually becoming more pissed off as he gets closer to the modern age. It’s almost as if he resents what the city transformed into in the twentieth century. For example, in a weird aside, Mumford complains about the increased number of windows in residential buildings after the seventeenth century, bemoaning the lack of privacy with a touch of principle rarely remembered by people who grew up with nothing but the Internet’s exhibitionistic cadences. He also has a healthy aversion to the “often disruptive and self-defeating” nature of constant growth. It is, after all, possible for a city or a small town to develop too much. Once cities ditched their walls, there were no longer any physical boundaries to how far any teeming area could spread, even as it arguably became lesser the further it rolled along. (See, for example, the anarchic sprawl of Texas today. Everyone from the likes of the Manhattan Institute’s Michael Hendrix to James Howard Kunstler has spoken, in varying degrees of horror, about this endless expansion.) On this point, Mumford pushes back against the myth of the medieval town as a place of static boredom. He points to religious edifices somehow transforming these clusters where, for the first time in history, “the majority of the inhabitants of a city were free men.” Even when mercantile centers dried up as trade died, Mumford points to the limitless evolution of the countryside. Feudalism gave way to a stabler and more readily available food supply and to new forms of home-spun industry that made many of these smaller villages special. Textile industries flourished in northern Italy and not only resulted in innovations such as the spinning wheel, but also in some healthy revolutionary pushback against tyrants — such as the weavers rebelling against the ruling elite in 1370-1371. In short, Mumford argues that a reasonably confined city was capable of nearly anything.

But what of the modern metropolis? The cities that called to people like me as a young man? Mumford’s view was that the enormity of a place like Paris or Rome or London or New York City wasn’t merely the result of technological progress. As he argues:

…the metropolitan phase became universal only when the technical means of congestion had become adequate — and their use profitable to those who manufactured or employed them. The modern metropolis is, rather, an outstanding example of a peculiar cultural lag within the realm of technics itself: namely, the continuation by highly advanced technical means of the obsolete forms and ends of a socially retarded civilization.

Well, that doesn’t sound too nice. So the punks who I jammed with in Mission District warrens and the scrappy filmmakers piecing together stories and the bizarre theatre we were putting on while eating ramen and Red Vines were cultural atavists? Gee, thanks, Lewis! Would Mumford apply this same disparaging tone to the CBGB punk crowd and artists who flourished in the East Village and arguably altered the trajectory of popular music? Or, for that matter, the 1990s hip-hop artists who flourished in Bed-Stuy and Compton? This is where Mumford and I part ways. Who are any of us to dictate what constitutes cultural lag? In my experience, obsolete forms tend to square dance with current mediums and that’s usually how the beat rolls on. Small wonder that Jane Jacobs and Mumford would get involved in a philosophical brawl that lasted a good four decades.

It’s frustrating that, for all the right criticism Mumford offers, he can be a bit of a dowdy square. He’s so good at showing us how the office building, as we still know it today, originated in Florence thanks to Giorgio Vasari. It turns out that this amazing Italian Renaissance man wasn’t just committed to corridors. He designed an interior with an open-floor loggia — those reception areas that can now be found in every damned bureaucratic entity. We now have someone to blame for them! Mumford offers us little details — such as the tendency of early cities to repave streets over the layers of trash that had been thrown out over the previous twenty years. This resulted in developments such as doorways becoming lower and lower — often submerged beneath the grade entirely — as history carried on. There are very useful asides in Mumford’s book on the history of multistory buildings. We learn how Roman baths and gymnasiums did make efforts to accommodate the rabble, despite the rampant exploitation of humans. Calvino was only scratching the surface. As long as cities have been around, humans have created new structures and new innovations. For all we know, the Coronavirus pandemic could very well lead to some urban advancement that humankind had hitherto never considered.

Because of all this, I can’t square Mumford’s elitism with the beautiful idealism that he lays down here:

The final mission of the city is to further man’s conscious participation in the cosmic and the historic process. Through its own complex and enduring structure, the city vastly augments man’s ability to interpret these processes and take an active, formative part in them, so that every phase of the drama it stages shall have, to the highest degree possible, the illumination of consciousness, the stamp of purpose, the color of love. That magnification of all the dimensions of life, through emotional communion, rational communication, technological mastery, and, above all, dramatic representation, has been the supreme office of the city in history. And it remains the chief reason for the city’s continued existence.

Who determines the active and formative development of the city? Do we leave it to anarchy? Do we acknowledge the numerous forces duking it out over who determines the topography? I can certainly get behind Mumford railing against mercantilism. But who establishes the ideal? One of the most underrated volumes contending with such a struggle between social community and the kind of “high-minded” conservative finger-wagging that Mumford too often espouses is Samuel R. Delany’s excellent book, Times Square Red, Times Square Blue, a brilliant portrait of the undeniable “color of love” practiced in the Times Square adult movie theatres through the mid-1990s — until Mayor Giuliani declared war on what he deemed unseemly. In a sidebar, Delany, buttressing Jane Jacobs, observes that the problem with this sort of idealism is that it assumes two conditions: (1) that cities are fundamentally repugnant places and that we must therefore hide the poor and the underprivileged and (2) that the city is defined by the big and the monumental.

The sheer amount of suffering undergone by the impoverished is something that Mumford, to his credit, does broach — particularly the unsanitary conditions that those in London and New York lived in as these cities expanded. (For more on the working stiffs and those who struggled, especially in New York, I highly recommend Luc Sante’s excellent book Low Life.) But while Mumford is willing to go all in on the question of bigness, he’s a little too detached and diffident on the issue of how the have-nots contribute to urban growth, although he does note how “the proletariat had their unpremeditated revenge” on the haves as New York increasingly crammed people like sardines into airless cloisters. And, as such, I found myself pulling out my Jane Jacobs books, rereading passages, and saying, with my best Mortal Kombat announcer voice, “Finish him!”

But maybe I’m being a little too hard on Mumford. The guy wasn’t a fan of architect Leon Battista Alberti’s great rush for suburban development, with this funny aside: “one must ask how much he left for the early twentieth-century architect to invent.” Mumford had it in for Le Corbusier and his tower-centric approach to urban planning (which is perhaps best observed in Chandigarh, India — a place where Le Corbusier was given free rein), but he was also a huge fan of Ebenezer Howard and his “Garden City” movement, whereby Howard suggested that some combination of city and country represented the best living conditions. Even if you side with Jane Jacobs, as I do, on the whole Garden City question, believing that there can be some real beauty in staggering urban density, you can’t help but smile at Mumford’s prickliness:

For the successor of the paleotechnic town has created instruments and conditions potentially far more lethal than those which wiped out so many lives in the town of Donora, Pennsylvania, through a concentration of toxic gases, or that which in December 1952 killed in one week an estimated five thousand extra of London’s inhabitants.

Oh, Mumford! With endearingly bleak observations like this, why couldn’t you be more on the side of the people?

Next Up: Paul Fussell’s The Great War and Modern Memory!

Battle Cry of Freedom (Modern Library Nonfiction #77)

(This is the twenty-third entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Why We Can’t Wait.)

In his 1966 essay “The White Man’s Guilt,” James Baldwin — never a man to mince words or to refrain from expressing searing clarity — declared that white Americans were incapable of facing the deep wounds suppurating in the national fabric because of their refusal to acknowledge their complicity in abusive history. Pointing to the repugnant privilege that, even today, hinders many white people from altering their lives, their attitudes, and the baleful bigotry summoned by their nascent advantages, much less their relationships to people of color, Baldwin noted:

For history, as nearly no one seems to know, is not merely something to be read. And it does not refer merely, or even principally, to the past. On the contrary, the great force of history comes from the fact that we carry it within us, are unconsciously controlled by it in many ways, and history is literally present in all that we do. It could scarcely be otherwise, since it is to history that we owe our frames of reference, our identities, and our aspirations.

Fifty-four years after Baldwin, America now finds itself mired in its most seminal (and long-delayed) civil rights movement in decades, awakened from its somnambulistic malaise through the neck-stomping snap of systemic racism casually and ignobly practiced by crooked cops who are afforded impunity rather than significant consequences. The institution of slavery has been replaced by the indignities of racial profiling, income disparity, wanton brutality, constant belittlement, and a crass cabal of Karens who are more than eager to rat out people of color so that they can scarf down their soy milk lattes and avocado toast, rarely deviating from the hideous cues that a culture — one that prioritizes discrimination first and equality last — rewards with all the perfunctory mechanics of a slot machine jackpot.

Thus, one must approach James McPherson’s mighty and incredibly impressive Civil War volume with mindfulness and assiduity. It is not, as Baldwin says, a book that can merely be read — even though it is something of a miracle that McPherson has packed as much detail and as many considerations as he has within more than 900 pages. McPherson’s volume is an invaluable start for anyone hoping to move beyond mere reading and to consider seriously how the palpable legacy of white supremacy and belittlement still plagues us in the present. Why does the Confederate flag still fly? Why do imperialist statues — especially monuments that celebrate a failed and racist breakaway coalition of upstart states rightly starved and humiliated and destroyed by Grant and Sherman — still stand? Battle Cry of Freedom beckons us to pay careful attention to the unjust and bestial influences that erupted before the war and that flickered afterwards. It is thankfully not just a compilation of battle summaries — although it does do much to mark the moments in which the North was on the run and geography and weather and lack of supplies often stood in its way. The book pays welcome scrutiny to the underlying environment that inspired the South to secede and required a newly inaugurated Lincoln to call for 75,000 volunteers a little more than a month after he had been sworn in as President and just after the South Carolina militia had attacked Fort Sumter.

* * *

It was technological innovation in the 1840s and the 1850s — the new machines putting out watches and furniture and bolts and damn near anything into the market at a rapid clip previously unseen — that helped sow the seeds of labor unrest. To use the new tools, a worker had to go to a factory rather than operate out of his home. To turn the most profit possible and sustain his venal wealth, the aspiring robber baron had to exploit the worker at subhuman wages. The South was more willing to enslave people. A barbaric racist of that era ranting in a saloon could, much like one of Trump’s acolytes today, point to the shifts in the agricultural labor force from 1800 to 1860. In the North, the share of labor devoted to agriculture fell from 70% to 40% over those decades, while in the South the rate remained steady at 80%. But this, of course, was an artificial win built on the backs of Black lives.

Increasing territory in the West was being annexed to the United States and, with it, crusaders were feeling bolder about their causes. David Wilmot, a freshman Congressional Representative, saw the Mexican War as an opportunity to lay down a proviso on August 8, 1846. “[N]either slavery nor involuntary servitude shall ever exist in any part of said territory” were the words that Wilmot added to an appropriations bill amendment. Like any politician, Wilmot was interested in settling scores. The Wilmot Proviso was as much the result of long pent-up frustration among a cluster of Northern Democrats as it was a blow for abolition; many of its backers cared more about holding onto power than pushing abolition forward. The proviso kept being reintroduced and the Democratic Party of the time — much of it composed of racists from the South — began to splinter.

Northern Democrats shifted their support from the Wilmot Proviso to an idea known as popular sovereignty, which placed the decision on whether to sustain or abolish slavery into the hands of settlers moving into the new territories. But Wilmot’s more universal abolition approach still had the enthusiastic support of northern Whigs. The Whigs, for those who may not recall, were essentially middle-class conservatives living it large. They represented the alternative to Democrats before the Republican Party was created in 1854. The Whigs emerged from the ashes of the Nullification Crisis of 1832 — which you may recall me getting into when I was tackling Herbert Croly a few years ago. Yes, Andrew Jackson was responsible for (a) destroying the national bank, thus creating an economically volatile environment and (b) creating enough fury for Henry Clay and company to form an anti-Jackson opposition party. What’s most interesting here is that opposing Jackson also meant opposing one of his pet causes: slavery. And, mind you, these were pro-business conservatives who wanted to live the good life. This is a bit like day trading bros dolled up in Brooks Brothers suits suddenly announcing that they want universal healthcare. Politics may make strange bedfellows, but sometimes a searing laser directed at an enemy who has jilted you in the boudoir creates an entirely unexpected bloc.

Many of the “liberals” of that era, especially in the South, were very much in favor of keeping slavery going. (This historical fact has regrettably caused many Republicans to chirp “Party of Lincoln!” in an attempt to excuse the more fascistic and racist overtures that these same smug burghers wallow in today.) Much like Black Lives Matter today and the Occupy Wall Street movement nine years ago, a significant plurality of the Whigs, who resented the fact that their slave-owning presidential candidate Zachary Taylor refused to take a position on the Wilmot Proviso, were able to create a broad coalition at the Free Soil convention of 1848. Slavery then became one of the 1848 presidential election’s major issues.

In Battle Cry, McPherson nimbly points to how all of these developments led to a great deal of political unrest that made the Civil War inevitable. Prominent Republican William H. Seward (later Lincoln’s Secretary of State) came out swinging against slavery, claiming that compromise on the issue was impossible. “You cannot roll back the tide of social progress,” he said. The 1854 Kansas-Nebraska Act (authored by Stephen Douglas) repealed the Missouri Compromise, which in turn led to “Bleeding Kansas” — a series of armed and violent struggles over the legality of slavery that carried on for the next seven years. (Curiously, McPherson downplays Daniel Webster’s 1850 turncoat “Seventh of March” speech, which signaled Webster’s willingness to enforce the Fugitive Slave Act and forever altered his base and political career.) And while all this was happening, cotton prices in the South were rising and, as the faction of Southern unionists died off, the Southern states increasingly considered secession. The maps of 1860 revealed the inescapable problem.

* * *

The Whigs were crumbling. Enter Lincoln, speaking eloquently on a Peoria stage on October 16, 1854, and representing the future of the newly minted Republican Party:

When the white man governs himself that is self-government; but when he governs himself, and also governs another man, that is more than self-government — that is despotism. If the negro is a man, why then my ancient faith teaches me that “all men are created equal;” and that there can be no moral right in connection with one man’s making a slave of another.

Enter the Know Nothings, a third party filling a niche left by the eroding Whigs and the increasingly splintered Democratic Party. The Know Nothings were arguably the Proud Boys of their time. They ushered in a wave of nationalism and xenophobia that was thoughtfully considered by the Smithsonian‘s Lorraine Boissoneault. What killed the Know Nothings was their failure to take a stand on slavery. You couldn’t afford to stay silent on the issue when the likes of Dred Scott and John Brown were in the newspapers. The Know Nothings further scattered political difference to the winds, giving Lincoln the opportunity to unite numerous strands under the new Republican Party and win the Presidency during the 1860 election, despite not being on the ballot in ten Southern states.

With Lincoln’s win, seven slave states seceded from the union, and the Confederacy was born. Historians have been arguing for years over the precise reasons for this disunion. If you’re a bit of a wonk like me, I highly recommend a 2011 panel in which three historians offer entirely different takeaways. McPherson, to his credit, allows the events to unfold and refrains from too much editorializing, although throughout the book he does speak from the perspective of the Union.

* * *

As I noted when I tackled John Keegan’s The Face of Battle, one of my failings as an all-encompassing dilettante resides with military history, which I find about as pleasurable to read as sprawling naked, sans hat or suntan lotion, upon some burning metal bed on a Brooklyn rooftop during a hot August afternoon — watching tar congeal over my epidermis until I transform into some ugly onyx crust while various spectators, saddled with boredom and the need to make a quick buck, film me with their phones and later email me demands to pay up in Bitcoin, lest my mindless frolicking be publicly uploaded to the Internet and distributed to every pornographic website from here to Helsinki.

That’s definitely laying it on thicker than you need to hear. But it is essential that you understand just how much military history rankles me.

Anyway, despite my great reluctance to don a tricorne of any sort, McPherson’s descriptions of battles (along with the accompanying illustrations) did somehow jolt me out of my aversion and make me care. Little details — such as P.G.T. Beauregard designing a new Confederate battle flag after troops could not distinguish the Confederate “stars and bars” banner from the Union flag in the fog of battle — helped to clarify the specific innovations brought about by the Civil War. It had also never occurred to me how much of the history of ironclad vessels began with the Civil War, thanks in part to the eccentric marine engineer John Ericsson, who designed the famed USS Monitor as a counterpoint to the formidable Confederate vessel Virginia, which had been created to hit the Union blockade at Roanoke Island. What was especially amazing about Ericsson’s ship was that it was built and launched rapidly — without testing. After two hours of fighting, the Monitor, its engines barely functioning, finally breached the Virginia’s hull with a 175-pound shot. For whatever reason, McPherson’s vivid description of this sea battle reminded me of the Mutara Nebula battle at the end of Star Trek II: The Wrath of Khan.

But even for all of McPherson’s synthesizing legerdemain, the one serious thing I have to ding him on is his failure to describe the horrors of slavery in any form. Even William L. Shirer in The Rise and Fall of the Third Reich devoted significant passages to depicting what was happening in the Holocaust death camps. Despite my high regard for McPherson’s ability to find just the right events to highlight in the Civil War timeline, and his brilliantly subtle way of depicting the shifting fortunes of the North and the South, can one really accept a volume about the Civil War without a description of slavery? McPherson devotes more time to covering Andersonville’s brutal statistics (prisoner mortality was 29% and so forth) before closing his paragraph with this sentence:

The treatment of prisoners during the Civil War was something that neither side could be proud of.

But what of the treatment of Black people? Why does this not merit so much as a paragraph? McPherson is so good at details — such as emphasizing that Grant’s pleas to have all prisoners, white and Black, exchanged in the cartel actually came a year after negotiations had stopped. He’s good enough to show us how southern historians have perceived events (often questionably). Why then would he shy away from the conditions of slavery?

The other major flaw: Why would McPherson skim over the Battle of Gettysburg in just under twenty pages? This was, after all, the decisive battle of the war. McPherson seems to devote more time, for example, to the Confederate raids in 1862. And while all this is useful to understanding the War, it’s still inexplicable to me.

But these are significant gripes with a book that was published in 1988 and that is otherwise a masterpiece. Still, I’m not the only one out here kvetching about this problem. The time has come for a new historian — ideally someone who isn’t a white male — to step up to the challenge and outdo both Ken Burns and James McPherson (and Shelby Foote, whom I’ll be getting to when we hit MLNF #15 in perhaps a decade or so) and fully convey the evils and brutality of slavery and why this war both altered American life and exacerbated the problems we are still facing today.

Next Up: Lewis Mumford’s The City in History!

Why We Can’t Wait (Modern Library Nonfiction #78)

(This is the twenty-second entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Rise of Theodore Roosevelt.)

It was a warm day in April when Dr. Martin Luther King was arrested. It was the thirteenth and the most important arrest of his life. King, wearing denim work pants and a gray fatigue shirt, was manacled along with fifty others that afternoon, joining close to a thousand more who had bravely submitted their bodies over many weeks to make a vital point about racial inequality and the unquestionable inhumanity of segregation.

The brave people of Birmingham had tried so many times before. They had attempted peaceful negotiation with a city that had closed sixty public parks rather than uphold the federal desegregation law. They had talked with businesses that had debased Black people by denying them restaurant service and asking them to walk through doors labeled COLORED. Some of these atavistic signs had been removed, only for the placards to be returned to the windows once the businesses believed that their hollow gestures had served their purpose. And so it became necessary to push harder — peacefully, but harder. The Birmingham police unleashed attack dogs on children and doused peaceful protesters with high-pressure water hoses and seemed hell-bent on debasing and arresting the growing throngs who stood up and said, without raising a fist and always believing in hope and often singing songs, “Enough. No more.”

There were many local leaders who claimed that they stood for the righteous, but who turned against King. White leaders in Birmingham believed — not unlike pro-segregation Governor George Wallace just three months earlier — that King’s nonviolent protests against segregation would incite a torrent of violence. But the violence never came from King’s well-trained camp; it actually emerged from the savage police force upholding an unjust law. King had been very careful with his activists, asking them to sign a ten-point Commitment Card that included these two vital points:

6. OBSERVE with both friend and foe the ordinary rules of courtesy.

8. REFRAIN from the violence of fist, tongue, or heart.

Two days before King’s arrest, Bull Connor, the racist Birmingham Commissioner of Public Safety and a man so vile and heartless that he’d once egged on Klansmen to beat Freedom Riders to a pulp for fifteen minutes as the police stood by and did not intervene, had issued an injunction against the protests. He raised the bail bond from $200 to $1,500 for those who were arrested. (That’s roughly $12,500 in 2019 dollars. When you consider the lower pay and the denied economic opportunities for Birmingham Blacks, you can very well imagine what a cruel and needless punishment this was for many protesters who lived paycheck to paycheck.)

And so on Good Friday, it became necessary for King, along with his invaluable fellow leaders Ralph Abernathy and Fred Shuttlesworth, to walk directly to Birmingham Jail and sing “We Shall Overcome.” King took a very big risk in doing so. But he needed to set an example for civil disobedience. He needed to show that he was not immune to the sacrifices of this very important fight. The bondsman who had provided bail for the demonstrators told King that he was out just as King was pondering the campaign’s nearly depleted funds. In jail, King would not be able to use his contacts and raise the money that would keep his campaign going. Despite all this, and this is probably one of the key takeaways from this remarkable episode in political history, King was dedicated to practicing what he preached. As he put it:

How could my failure now to submit to arrest be explained to the local community? What would be the verdict of the country about a man who had encouraged hundreds of people to make a stunning sacrifice and then excused himself?

Many who watched this noble march, the details of which are documented in S. Jonathan Bass’s excellent book Blessed Are the Peacemakers, dressed in their Sunday best out of respect for King’s efforts. Police crept along with the marchers before Connor gave the final order. Shuttlesworth had left earlier. King, Abernathy, and their fellow protesters were soon surrounded by paddy wagons and motorcycles and a three-wheel motorcart. They dropped to their knees in peaceful prayer. The head of the patrol squeezed the back of King’s belt and escorted him into a police car. The police gripped the back of Abernathy’s shirt and steered him into a van.

King was placed in an isolation cell. Thankfully, he did not suffer physical brutality, but the atmosphere was dank enough to diminish a weaker man’s hope. As he wrote, “You will never know the meaning of utter darkness until you have lain in such a dungeon, knowing that sunlight is streaming overhead and still seeing only darkness below.” Jail officials refused a private meeting between King and his attorney. Wyatt Tee Walker, King’s chief of staff, sent a telegram to President Kennedy. The police did not permit King to speak to anyone for at least twenty-four hours.

As his confidantes gradually gained permission to speak with him, King became aware of a statement published by eight white clergymen in Birmingham. This octet not only urged the black community to withdraw support for the demonstrations, but risibly suggested that King’s campaign was “unwise and untimely” and that the matter could be settled by the courts. They completely missed the point of what King was determined to accomplish.

King began drafting a response, scribbling in the margins of a newspaper. Abernathy asked King if the police had given him anything to write on. “No,” King replied, “I’m using toilet paper.” Within a week, he had a proper notepad. King’s “Letter from Birmingham Jail,” contained in his incredibly inspiring book Why We Can’t Wait, is one of the most powerful statements ever written about civil rights. It nimbly argues for the need to take direct action rather than wait for injustice to be rectified. It remains an essential text for anyone who professes to champion humanity and dignity.

* * *

King’s “Letter” against the eight clergymen could just as easily apply to many “well-meaning” liberals today. He expertly fillets the white clergy for their lack of concern, pointing to “the superficial kind of social analysis that deals merely with effects and does not grapple with underlying causes.” He points out that direct action is, in and of itself, a form of negotiation. The only way that an issue becomes lodged in the national conversation is when it becomes dramatized. King advocates a “constructive, nonviolent tension that is necessary for growth” — something that seems increasingly difficult for people on social media to understand as they block viewpoints that they vaguely disagree with and cower behind filter bubbles. He is also adamantly, and rightly, committed to not allowing anyone’s timetable to get in the way of fighting a national cancer that had then ignobly endured for 340 years. He distinguishes between the just and the unjust law, pointing out that “one has a moral responsibility to disobey unjust laws.” But he is very careful and very clear about his definitions:

An unjust law is a code that a numerical or power majority group compels a minority group to obey but does not make binding on itself. This is difference made legal. By the same token, a just law is a code that a majority compels a minority to follow and that it is willing to follow itself. This is sameness made legal.

This is a cogent philosophy applicable to many ills beyond racism. This is radicalism in all of its beauty. This is precisely what made Martin Luther King one of the greatest Americans who ever lived. For me, Martin Luther King remains a true hero, a model for justice, humility, peace, moral responsibility, organizational acumen, progress, and doing what’s right. But it also made King dangerous enough for James Earl Ray, a staunch Wallace supporter, to assassinate him on April 4, 1968. (Incidentally, King’s family supported Ray’s later efforts to prove his innocence.)

* * *

Why We Can’t Wait‘s scope isn’t limited to Birmingham. The book doesn’t hesitate to cover a vast historical trajectory that somehow stumps for action in 1963 and in 2019. It reminds us that much of what King was fighting for must remain at the forefront of today’s progressive politics, but also must involve a government that acts on behalf of the people: “There is a right and a wrong side in this conflict and the government does not belong in the middle.” Unfortunately, the government has doggedly sided against human rights and against the majestic democracy of voting. While Jim Crow has thankfully been abolished, the recent battle to restore the Voting Rights Act of 1965, gutted by the Supreme Court in 2013, shows that systemic racism remains very much alive and that the courts in which the eight white Birmingham clergymen professed such faith and fealty are stacked against African-Americans. (A 2018 Harvard study discovered that counties freed from federal oversight saw a dramatic drop in minority voter turnout.)

Much as the end of physical slavery inspired racists to conjure up segregation as a new method of diminishing African-Americans, so too do we see such cavalier and dehumanizing “innovations” in present day racism. Police shootings and hate crimes are all driven by the same repugnant violence that King devoted his life to defeating.

The economic parallels between 1963 and 2019 are also distressingly acute. In Why We Can’t Wait, King noted that there were “two and one-half times as many jobless Negroes as whites in 1963, and their median income was half that of the white man.” Fifty-six years later, the Bureau of Labor Statistics informs us that the African American unemployment rate is still nearly double the white rate, even in a flush economic time, with the U.S. Census Bureau reporting that the median household income for African-Americans in 2017 was $40,258, compared to $68,145 for whites. In other words, a black family now makes only 59% of the median income earned by a white family.

If these statistics are supposed to represent “progress,” then it’s clear that we’re still making the mistake of waiting. These are appalling and unacceptable baby steps towards the very necessary racial equality that King called for. White Americans continue to ignore these statistics and the putatively liberal politicians who profess to stand for fairness continue to demonstrate how tone-deaf they are to feral wrongs that affect real lives. As Ashley Williams learned in February 2016, white Democrats continue to dismiss anyone who challenges them on their disgraceful legacy of incarcerating people of color. The protester is “rude,” “not appropriate,” or is, in a particularly loaded gerund, “trespassing.” “Maybe you can listen to what I have to say” was Hillary Clinton’s response to Williams, to which one rightfully replies in the name of moral justice, “Hillary, maybe you’re the one here who needs to listen.”

Even Kamala Harris, now running for President, has tried to paint herself as a “progressive prosecutor,” when her record reveals clear support for measures that actively harm the lives of black people. In 2015, Harris opposed a bill that demanded greater probing into police officer shootings. That same year, she refused to support body cams, only to volte-face with egregious opportunism just ten days before announcing her candidacy. In the case of George Gage, Harris held back key exculpatory evidence that might have freed a man who did not have a criminal record. Gage was forced to represent himself in court and is now serving a 70-year sentence. In upholding these savage inequities, I don’t think it’s a stretch to out Kamala Harris as a disingenuous fraud. Like many Democrats who pay mere lip service to policies that uproot lives, she is not a true friend to African Americans, much less humanity. It was hardly a surprise when Black Lives Matter’s Johnetta Elzie declared that she was “not excited” about Harris’s candidacy back in January. After rereading King and being reminded of the evils of casual complicity, I can honestly say that, as someone who lives in a neighborhood where the police dole out regular injustices to African-Americans, I’m not incredibly thrilled about Harris either.

But what we do have in this present age is the ability to mobilize and fight, to march in the streets until our nation’s gravest ills become ubiquitously publicized, something that can no longer be ignored. What we have today is the power to vote and to not settle for any candidate who refuses to heed the realities that are presently eating our nation away from the inside. If such efforts fail or the futility of protesting makes one despondent, one can still turn to King for inspiration. King sees the upside in a failure, galvanizing the reader without ever sounding like a Pollyanna. Pointing to the 1962 sit-ins in Albany, Georgia, King observes that, while restaurants remained segregated after months of protest, the activism did result in more African-Americans voting and Georgia at long last electing “the first governor [who] pledged to respect and enforce the law equally.”

It’s sometimes difficult to summon hope when the political climate presently seems so intransigent, but I was surprised to find myself incredibly optimistic and fired up after rereading Why We Can’t Wait for the first time in more than two decades. This remarkable book from a rightfully towering figure answers every argument that milquetoasts produce against radicalism. No, we can’t wait. We shouldn’t wait. We must act today.

Next Up: James M. McPherson’s Battle Cry of Freedom!

The Rise of Theodore Roosevelt (Modern Library Nonfiction #79)

(This is the twenty-first entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Studies in Iconology.)

One of many blistering tangerines contained within Mark Twain’s juicy three-volume Autobiography involves his observations on Theodore Roosevelt: “We have never had a President before who was destitute of self-respect and of respect for his high office; we have had no President before who was not a gentleman; we have had no President before who was intended for a butcher, a dive-keeper or a bully, and missed his mission by compulsion of circumstances over which he had no control.”

He could just as easily have been discussing the current doddering charlatan now forcing many otherwise respectable citizens into recuperative nights of heavy drinking and fussy hookups with a bespoke apocalyptic theme, but Twain’s sentiments do say quite a good deal about the cyclical American affinity for peculiar outsiders who resonate with a populist base. As I write these words, Bernie Sanders has just decided to enter the 2020 Presidential race, raising nearly $6 million in 24 hours and angering those who perceive his call for robust social democracy to be unrealistic, along with truth-telling comedians who are “sick of old white dudes.” Should Sanders run as an independent, the 2020 presidential race could very well be a replay of Roosevelt’s Bull Moose Party run in 1912.

Character ultimately distinguishes a Chauncey Gardiner couch potato from an outlier who makes tangible waves. And it is nearly impossible to argue that Teddy Roosevelt, while bombastic in his prose, often ridiculous in his obsessions, and pretty damn nuts when it came to the Rough Riders business in Cuba, did not possess it. Edmund Morris’s incredibly compelling biography, while subtly acknowledging Teddy’s often feral and contradictory impulses, suggests not only that Roosevelt was the man that America wanted and perhaps needed, but reminds us that Roosevelt also had the good fortune of being in the right place at the right time. Had not Vice President Garret Hobart dropped dead because of a bum ticker on November 21, 1899, and had not a sour New York Republican boss named Tom Platt been so eager to run Teddy out of Albany, there is a good chance that Roosevelt might have ended up as a serviceable two-term Governor of New York, perhaps a brasher form of Nelson Rockefeller or an Eliot Spitzer who knew how to control his zipper. Had not an anarchist plugged President McKinley twice at the Temple of Music, it is quite possible that Roosevelt’s innovative trust busting and his work on food safety and national parks, to say nothing of his crazed obsession with military might and giving the United States a new role as international police force, would have been delayed or scuttled altogether.

What Roosevelt had, aside from remarkable luck, was a relentless energy that often exhausts the 21st century reader nearly as much as it fatigued those within Teddy’s orbit. Here is a daily timetable of Teddy’s activities when he was running for Vice President, which Morris quotes late in the book:

7:00 A.M. Breakfast 
7:30 A.M. A speech
8:00 A.M. Reading a historical work
9:00 A.M. A speech
10:00 A.M. Dictating letters
11:00 A.M. Discussing Montana mines
11:30 A.M. A speech
12:00 Reading an ornithological work
12:30 P.M. A speech
1:00 P.M. Lunch
1:30 P.M. A speech
2:30 P.M. Reading Sir Walter Scott
3:00 P.M. Answering telegrams
3:45 P.M. A speech
4:00 P.M. Meeting the press
4:30 P.M. Reading
5:00 P.M. A speech
6:00 P.M. Reading
7:00 P.M. Supper
8-10 P.M. Speaking
11:00 P.M. Reading alone in his car
12:00 To bed

That Roosevelt was able to do so much in an epoch before instant messages, social media, vast armies of personal assistants, and Outlook reminders says a great deal about how he ascended so rapidly to great heights. He could dictate an entire book in three months, while also spending his days climbing mountains and riding dozens of miles on horseback (much to the chagrin of his exhausted colts). Morris suggests that much of this energy was forged from the asthma he suffered as a child. Standing initially in the shadow of his younger brother Elliott (whose later mental collapse he callously attempted to cover up to preserve his reputation), Teddy spent nearly his entire life doing, perhaps sharing Steve Jobs’s “reality distortion field” in the wholesale denial of his limitations:

In between rows and rides, Theodore would burn off his excess energy by running at speed through the woods, boxing and wrestling with Elliott, hiking, hunting, and swimming. His diary constantly exults in physical achievement, and never betrays fear that he might be overtaxing his strength. When forced to record an attack of cholera morbus in early August, he precedes it with the phrase, “Funnily enough….”

Morris is thankfully sparing about whether such superhuman energy (which some psychological experts have suggested to be the result of undiagnosed bipolar disorder) constitutes genius, reserving the word for Roosevelt only in relation to his incredible knack for maintaining relations with the press — seen most prominently in his fulsome campaign speeches and the way that he courted journalistic reformer Jacob Riis during his days as New York Police Commissioner and invited Riis to accompany him on his nighttime sweeps through various beats, where Roosevelt micromanaged slumbering cops and any other layabout he could find. The more fascinating question is how such an exuberant young autodidact, a voracious reader with preternatural recall who eagerly conducted dissections around the house when not running and rowing his way despite ailing lungs, came to be involved in American politics.

Some of this had to do with his hypergraphia, his need to inhabit the world, his indefatigable drive to do everything and anything. Some of it had to do with deciding to attend Columbia Law School so he could forge a professional career with his new wife Alice Hathaway Lee, who had quite the appetite for social functions (and whose inner life, sadly, is only superficially examined in Morris’s book). But much of it had to do with Roosevelt regularly attending Morton Hall, the Republican headquarters for his local district. Despite being heckled for his unusual threads and side-whiskers, Roosevelt kept showing up until he was accepted as a member. The Roosevelt family disapproved. Teddy reacted in anger. And from that moment forward, Morris writes, Roosevelt desired political power for the rest of his life. Part of this had to do with the need for family revenge. Theodore Roosevelt, Sr. suffered a swift decline in health (and quickly died) after Roscoe Conkling and New York State Republicans set out to ruin him over a customs collector position.

These early seeds of payback and uncompromising individualism grew Roosevelt into a fiery oleander who garnered a rep as a fierce and feisty New York State Assemblyman: the volcanic fuel source that was to sustain him until mortality and dowdiness sadly caught up with him during the First World War. But Roosevelt, like Lyndon B. Johnson later with the Civil Rights Act (documented in an incredibly gripping chapter in Robert A. Caro’s Master of the Senate), did have a masterful way of persuading people to side with him, often through his energy and speeches rather than creepy lapel-grabbing. As New York Police Commissioner, Roosevelt upheld the unpopular blue laws and, for a time, managed to get both the grumbling bibulous public and the irascible tavern keepers on his side. Still, Roosevelt’s pugnacity and tenacity were probably more indicative of the manner in which he fought his battles. He took advantage of any political opportunity — such as making vital decisions while serving as Acting Secretary of the Navy without consulting his superior John Davis Long. But he did have a sense of honor, seen in his refusal to take out his enemy Andrew D. Parker when given a scandalous lead during a bitter battle in New York City (the episode was helpfully documented by Riis) and, as a New York State Assemblyman, in voting with Democrats on March 7, 1883 to sustain then-Governor Grover Cleveland’s veto of the Five-Cent Bill once the measure was deemed unconstitutional. Perhaps these often impulsive instincts, punctuated by an ability to consider the consequences of any action even as it was being carried out, are what made him, at times, a remarkable leader. Morris documents one episode during Roosevelt’s stint as Assistant Secretary of the Navy in which he was trying to build up an American navy and swiftly snapped up a Brazilian vessel without a letter. When the contract was drafted for the ship, dealer Charles R. Flint noted, “It was one of the most concise and at the same time one of the cleverest contracts I have ever seen.”

Morris is to be praised for writing about such a rambunctious figure with class, care, and panache. Seriously, this dude doesn’t get enough props for busting out all the biographical stops. If you want to know more about Theodore Roosevelt, Morris’s trilogy is definitely the one you should read. Even so, there are a few moments in this biography in which Morris veers modestly into extremes that strain his otherwise eloquent fairness. He quotes from “a modern historian” who asks, “Who in office was more radical in 1899?” One zips to the endnotes, only to find that the “historian” in question was none other than the partisan John Allen Gable, who was once considered the foremost authority on Teddy Roosevelt. Morris also observes that “ninety-nine percent of the millions of words he thus poured out are sterile, banal, and so droningly repetitive as to defeat the most dedicated researcher,” and while one opens a bountiful heart to the historian prepared to sift through the collected works of a possible madman, the juicy bits that Morris quotes are entertaining and compelling. Also, to be fair, a man driven to dictate a book-length historical biography in a month is going to have some runts in the litter.

But these are extremely modest complaints for an otherwise magnificent biography. Edmund Morris writes with a nimble focus. His research is detailed, rigorous, and always on point, and he has a clear enthusiasm for his subject. Much of Morris’s fall from grace has to do with the regrettable volume Dutch, in which Morris abandoned his exacting acumen and inserted a version of himself into a biography of Reagan. This feckless boundary-pushing even extended into the endnotes, in which Morris inserted references to imaginary people. He completely overlooked vital periods in Reagan’s life and political career, such as the Robert Bork episode. Given the $3 million advance and the unfettered access that Morris had to Reagan, there was little excuse for this. Yet despite returning valiantly to Roosevelt in two subsequent volumes (without the weirdass fictitious asides), Morris has been given the Wittgenstein treatment (“Whereof one cannot speak, thereof one must be silent”) by his peers and his colleagues. And I don’t understand why. Morris, much like Kristen Roupenian quite recently, seems to have been needlessly punished for being successful and not living up to a ridiculous set of expectations. But The Rise of Theodore Roosevelt, which rightfully earned the Pulitzer Prize, makes the case on its own merits that Morris is worthy of our time, our consideration, and our forgiveness, and that the great Theodore Roosevelt himself is still a worthwhile figure for contemporary study.

Next Up: Martin Luther King’s Why We Can’t Wait!

A Farewell to Arms (Modern Library #74)

(This is the twenty-seventh entry in The Modern Library Reading Challenge, an ambitious project to read the entire Modern Library from #100 to #1. Previous entry: Scoop.)

You likely know the basics: An American goes to Italy and enlists as a “tenente.” He drives a battlefield ambulance before his nation enters World War I. He gets wounded. He meets a nurse at a hospital. He falls in love. He feels free as he recovers. He feels trapped as he returns to the front. He gets disillusioned. He flees. He finds her again. Bad things happen. But A Farewell to Arms is so much more than this. It is a heartbreaking love story. It is a remarkably subtle indictment of war. It shows how people bury their romantic longings behind duty and how there’s a greater bravery in fulfilling what you owe to your heart. It argues for life and love. Its final paragraph is devastating. It zooms along with masterly prose studded with buried treasure. It is one of the greatest novels of the early 20th century. This statement is not hyperbole.

It is now quite fashionable to bash Hemingway rather than praise him, as the flip Paul Levy recently did in his oh so hip and not very bright “hot take”: “The Hemingway corpus is full of artistic failure.” Well, sure it is. I’ve read it all three times at different periods in my life and I don’t think any honest reader would deny that. When I was an obnoxious punk in my twenties, I resisted Hem big time, feeling that he could not teach me to be a man in the way that James Baldwin and F. Scott Fitzgerald had, yet I somehow held onto his books, sensing that I could be colossally wrong. (I was.) Even today, I have to acknowledge that To Have and Have Not is an embarrassment. The Garden of Eden is an interesting but unconsummated train wreck. For Whom the Bell Tolls has its moments, but the Old English verbs and the lack of subtlety can be risible. I’ve never quite been able to leap into The Old Man and the Sea, but that says more about me than Hem. The upshot is that there are quite a few clunkers in Hem’s collected works and some of the Nick Adams tales ain’t all that, but one could make this claim about any author. In the end, when you have a masterpiece like A Farewell to Arms that never grows tedious no matter how many times you reread it, who in the hell cares about the misses? There’s no profit in trotting out a shallow statement when the crown jewels shine bright in your face.

The other way that people ding Hem these days is by singling out his macho posturing or peering at his pages through the prism of unbridled masculine hubris. The naysayers dismiss Lady Brett Ashley in The Sun Also Rises as an archetype without recognizing her enigma or the way she aptly epitomized the Lost Generation. They don’t acknowledge how Hem had to prostrate himself before Beryl Markham in a letter to Maxwell Perkins and that he did get on (for a time) with Martha Gellhorn, who neither suffered fools nor caved to condescension.

Yet there is certainly something to Hemingway’s women problem, especially as seen in the correspondence between Fitzgerald and Hemingway. In June 1929, F. Scott Fitzgerald sent Hem a letter and observed how, in his early work, “you were really listening to women — here you’re only listening to yourself, to your own mind beating out facily a sort of sense that isn’t really interesting.” (Hemingway’s reply: “Kiss my ass.”)

Scott’s warning remains a very shrewd assessment of what’s so fascinating and frustrating about Hemingway. I’d argue that one of the best ways to ken Hem is to recognize that he was a wildly accomplished giant when he placed his own ego last and that any transgressions that today’s readers detect only emerged when Hem became overly absorbed in himself. And on this point, one can find a strange sympathy for the man, thanks in part to Andrew Farah’s recent biography, Hemingway’s Brain, which points to Ernest’s many head injuries (which included nine concussions) and concludes that he suffered from CTE, the brain disease seen in professional football players after too many years of violent tackles. This theory, which takes into account the decline of Hemingway’s handwriting in his latter years, would also offer an explanation for the wildly disparate writing quality and thus invalidates Mr. Levy’s foolish pronouncement.

* * *

The world breaks every one and afterward many are strong at the broken places. But those that will not break it kills. It kills the very good and the very gentle and the very brave impartially. If you are none of these you can be sure it will kill you too but there will be no special hurry.

A Farewell to Arms thankfully places us shortly after the rising sun of Hem’s career and, like its predecessor, the book contains razor-sharp prose, keen observations (ranging from Umberto Notani’s infamous The Black Pig to trains packed with soldiers to the repugnant wartime indignity of a hopped-up tyrant fiercely questioning a man who is fated to be shot), and a beautiful epitomization of the famous “iceberg theory” that Hemingway posited in Death in the Afternoon:

If a writer of prose knows enough of what he is writing about he may omit things that he knows and the reader, if the writer is writing truly enough, will have a feeling of those things as strongly as though the writer had stated them. The dignity of movement of an ice-berg is due to only one-eighth of it being above water. A writer who omits things because he does not know them only makes hollow places in his writing.

Much ink has been spilled over Hemingway’s declarative sentences, which are beautifully honed in this masterpiece. (Hem wrote 47 versions of the ending.) But I’d like to single out “was,” the most frequently used word in this novel. On a surface level, “was” is the most expedient way to hurl us into Frederic’s world: a simple verb of action and hard deets, but one that likewise deflects interior thought. It’s easy to dis Hem as a man’s man summing up life and the earth and the grit and all else that makes us want to ape him even though there can be only one, but the key to seeing the beauty of “was” is knowing that this book is all about pursuing a lost and deeply moving romantic vision, one kept carefully hidden from the beginning. Style advances the perspective and keeps us curious and lets us in, and “was” is the way Hem gets us there.

Hemingway uses language with extraordinary command to clue us in on the distinct possibility that this story is in some sense a dream — indeed, a dream involving death based on what Hem was never able to make with the nurse Agnes von Kurowsky while holed up in a ward. There’s the makeshift hospital office, with its “many marble busts on painted wooden pillars,” which is further compared to a cemetery. In the novel’s first part, there are very few adverbs — save “winefully” early on and “evidently” and “directly” in the same sentence as guns rupture Frederic’s existence. The first rare simile (“seeing it all ahead like moves in a chess game”) occurs when Frederic first tries to kiss Catherine and is greeted with a slap (which Catherine apologizes for). This is a far cry indeed from what The Daily Beast‘s Allen Barra recently claimed, without citing a single example, as “flowery and overwritten.” A Farewell to Arms basks in the same beautiful realm between the real and the ethereal that The Great Gatsby does, albeit in a different landscape altogether, but it offers enough ambiguity to speculate about the characters while encouraging numerous rereads.

Language also carries the deep resonances of what people mean to each other. Catherine cannot stand a triple-wounded vet named Ettore and repeats “dreadful” twice and “bore” four times when she vents to Frederic. The words “She won’t die” are also repeated in one harrowing paragraph near the end. (Indeed, if you see a word or a phrase repeated in Hemingway’s fiction, there’s a good chance that something bad will happen.) Shortly after Frederic is moved to the freshly built hospital in Milan (itself a marvelous metaphor for the fresh start of Frederic’s blossoming love for Catherine), he takes to Dr. Valentini, who speaks in a series of short sentences over the course of a paragraph (a small sample: “A fine blonde like she is. That’s fine. That’s all right. What a lovely girl.”) and who Frederic later calls “grand.” The syntax, chopped and sheared and housed within manageable units, represents a telegraph from the human heart like no other.

Frederic acknowledges that he lies to Catherine when he tells her that she’s the first woman he’s loved. Now it’s tempting to roll your eyes over the “I’ll be a good girl” business that often comes from Catherine, but it’s also a safe bet to speculate that Frederic is likewise lying about what Catherine has actually told him, much as Hem himself has fudged the full extent of his “affair” with Agnes von Kurowsky through fiction. (“Now, Ernest Hemingway has a case on me, or thinks he has,” wrote von Kurowsky in her diary on August 25, 1918. “He is a dear boy & so cute about it.”)

An enduring romance is often built on a pack of lies. We often fail to recognize the full totality of who a lover was until we are well outside of the relationship. As for friendship, I’d like to argue that Miss Gage is a fascinating side character who underscores this point. She’s someone who ribs Frederic about not fully understanding what friendship is. Later, when Frederic returns to the front lines, Rinaldi tells him, “I don’t want to be your friend. I am your friend.” And if Frederic can’t recognize friendship, does he really know how to read the room when Cupid shows up with a puckish smile? Hem’s subtle acknowledgment of these basic truths allows us to trust and become invested in Frederic’s voice. And I’d like to think that even Hem’s opponents could get behind such idyllic imagery as Frederic and Catherine “putting thoughts in the other one’s head while we were in different rooms” or agreeing to sneak off to Switzerland together or even the funny “winter sport” business with customs. These are endearing and beautiful romantic moments that certainly show that Hem is far more than a repugnant hulk.

Love is a high-stakes game, but it’s always a game worth playing. If you beat the odds, the payout is incalculable. Small wonder that the happy couple ends up throwing their lire into a rigged horse race. Indeed, Frederic’s early days with Catherine are a game like bridge, where “you had to pretend you were playing for money or playing for some stakes.” For all of Frederic’s apparent nonchalance about the stakes, he does not reveal his name for a while — on its first mention, Frederic only partially spills his name as he is drinking. He is also more taken with the allure of being alone — as seen later in a Donnean nod when he says that “[w]hat made [Ireland] pretty was that it sounded like Island.” His loneliness is further cemented when Miss Ferguson says that Catherine cannot see him.

Is this the loneliness of war? We learn later that Frederic came to Rome to be an architect, although this is likely a lie, given that it is repeated a second time to a customs officer. But it does suggest that Frederic cannot build his own life without another. Perhaps this is the solitude that comes from the relentless pursuit of manly vigor (boxing, bullfighting, hunting) that Hemingway was to explore throughout his life? There is one clue late in the book when Hemingway writes, “The war seemed as far away as the football game of someone else’s college,” and another midway through when Frederic wonders if major league baseball will be shut down should America enter the war. (Fun fact: There was indeed a World War I deadline put into place, but the two leagues squeezed in numerous doubleheaders to ensure that the season could play out.) If the First World War arose in part because humanity was involved in a vicious game, then Hemingway seems to be suggesting that further games rooted in play and peace must be promulgated to restore the human condition. Frederic cynically quips to the 94-year-old Count Greffi, “No, that is the great fallacy; the wisdom of old men. They do not grow wise. They grow careful.” But if being careful is the true measure of existence, why then do we celebrate valor that often emerges from reckless circumstances? Indeed, Hemingway sends up the very nature of heroism when Frederic wakes up in the hospital and is greeted by Rinaldi, who presses him to confess the specific act he committed to earn his medal. “No,” replies Frederic. “I was blown up while we were eating cheese.”

In an age when razor blade ads urge us to question what manhood should represent, there’s something to be said for studying what’s contained within masculinity’s ostensible ur-texts and how careful men are in saying nothing but everything. A Farewell to Arms is a far more sophisticated and deeply beautiful novel when you start examining its sentences and questioning its motivations. Caught in a mire between love and war, Frederic opts for the laconic rather than the prolix. And in doing so, he tells us far more about what it means to love and lose than most authors can convey in a lifetime.

Next Up: Nathanael West’s The Day of the Locust!

Studies in Iconology (Modern Library Nonfiction #80)

(This is the twentieth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Face of Battle.)

Titian’s Sacred and Profane Love (pictured above) is one of my favorite paintings of the 16th century, in large part because its unquestionable beauty is matched by its bountiful and alluring enigma. We see two versions of love at opposing ends of a fountain — one nearly naked without apology, but still partially clad in a windswept dark salmon pink robe and holding an urn of smoke as she languorously (and rebelliously?) leans on the edge of a fountain; meanwhile the other Love sits in a flowing white gown on the other end, decidedly more dignified, with concealed legs that are somehow stronger and more illustrious than her counterpart, and disguising a bowl that, much like the Kiss Me Deadly box or the Pulp Fiction suitcase, could contain anything.

We know that the Two Loves are meant to coexist because Titian is sly enough to imbue his masterpiece with a sartorial yin-yang. Profane Love matches Sacred with a coiled white cloth twisting around her waist and slipping down her left leg, while Sacred has been tinctured by Profane’s pink with the flowing sleeve on her right arm and the small slipper on her left foot. Meanwhile, Cupid serves as an oblivious and possibly mercenary middleman, his arm and his eyes deeply immersed in the water and seemingly unconcerned with the Two Loves. We see that the backdrops behind both Loves are promisingly bucolic, with happy rabbits suggesting prolific promiscuity and studly horsemen riding their steeds with forelegs in the air, undoubtedly presaging the stertorous activity to commence sometime around the third date.

Sacred’s backdrop involves a castle situated on higher ground, whereas Profane’s is a wider valley with a village, a tableau that gives one more freedom to roam. The equine motif carries further on Sacred’s side with a horse prancing from Sacred to Profane in the marble etching just in front of the fountain, while Profane’s side features equally ripe rapacity, a near Fifty Shades of Grey moment where a muscled Adonis lusts over a plump bottom, hopefully with consensual limits and safewords agreed upon in advance. Titian’s telling takeaway is that you have to accept both the sublime and the salacious when you’re in love: the noble respect and vibrant valor that you unfurl upon your better half with such gestures as smoothing a strand of hair from the face along with the ribald hunger for someone who is simultaneously desirable and who could very well inspire you to stock up on entirely unanticipated items that produce rather pleasurable vibrations.

There are few works of art that are so dedicated to such a dichotomous depiction of something we all long for. And Titian’s painting endures five centuries later because this Italian master was so committed to minute details that, rather incredibly, remain quite universal about the human condition.

But what the hell does it all mean? We can peer into the canvas for hours, becoming intoxicated by Titian’s fascinating ambiguities. But might there be more helpful semiotics to better grasp what’s going on? Until I read Panofsky’s Studies in Iconology, I truly had no clue that Titian had been influenced by Bembo’s Asolani or that the Two Loves were a riff on Cesare Ripa’s notion of Eternal Bliss and Transient Bliss, which was one of many efforts by the Neoplatonic movement to wrestle with a human state that occupied two modes of shared existence. Panofsky also helpfully points out that Cupid’s stirring of the fountain water was a representation of love as “a principle of cosmic ‘mixture,’ act[ing] as an intermediary between heaven and earth” and that the fountain can also be looked upon as a revived sarcophagus, meaning that we are also looking at life and love springing from a coffin. And this history added context for me to expand my own quasi-smartypants, recklessly dilettantish, and exuberantly instinctive appreciation of Titian. In investigating iconology, I recalled my 2016 journey into The Golden Bough (ML NF #90), in which Frazer helpfully pointed to the symbolic commonality of myths and rituals throughout multiple cultures and across human history, and, as I examined how various symbolic figures morphed over time, I became quite obsessed with Father Time’s many likenesses (quite usefully unpacked by Waggish‘s David Auerbach).

Any art history student inevitably brushes up against the wise and influential yet somewhat convoluted views of Erwin Panofsky. Depending upon the degree to which the prof resembles Josef Mengele in his teaching style, there is usually a pedagogical hazing in which the student is presented with “iconology” and “iconography.” The student winces at both words, nearly identical in look and sound, and wonders if the distinction might be better understood after several bong hits and unwise dives into late night snacks, followed by desperate texts to fellow young scholars that usually culminate in more debauchery that strays ever further from the text. Well, I’m going to do my best to explicate the difference right now.

The best way to nail down what iconography entails is to think of a painting purely in terms of its visuals and what each of these elements means. Obvious examples of iconography in action are the considerable classroom time devoted to interpreting the green light at the end of The Great Gatsby or the endless possibilities contained within the Mona Lisa‘s smile. It is, in short, being that vociferous museum enthusiast pointing at bowls and halos buried in oil and doing his best to impress with his alternately entertaining and infuriating interpretations. All this is, of course, fair game. But Panofsky is calling for us to think bigger and do better.

Enter iconology, which is more specifically concerned with the context of this symbolism and the precise technical circumstances and historical influences that created it. Let me illustrate the differences between iconography and iconology using Captain James T. Kirk from Star Trek.

Here are the details everyone knows about Kirk. He is married to his ship. He is a swashbuckling adventurer who gets into numerous fights and is frequently seen in a torn shirt. He is also a nomadic philanderer, known to swipe right and hook up with nearly every alien he encounters. (In the episode “Wink of an Eye,” there is a moment that somehow avoided the censors in which Kirk is seen putting on his boots while Deela brushes her hair.) This is the iconography of Kirk that everyone recognizes.

But when we begin to examine the origins of these underlying iconographic qualities, we begin to see that there is a great deal more than a role popularized by William Shatner through booming vocal delivery, spastic gestures, and an unusual Canadian hubris. When Gene Roddenberry created Star Trek, he perceived Captain Kirk as “Horatio Hornblower in Space.” We know that C.S. Forester, author of the Hornblower novels, was inspired by Admiral Lord Nelson and a number of heroic British naval officers who fought during the Napoleonic Wars. According to Bryan Perrett’s The Real Hornblower, Forester read three volumes of The Naval Chronicle over and over. But Forester eventually hit upon a trope that he identified as the Man Alone — a solitary individual who relies exclusively on his own resources to solve problems, who carries out his swashbuckling by himself, and who is wedded to this predicament.

Perhaps because the free love movement of the 1960s made the expression of sexuality more open, Captain Kirk was both a Man Alone and a prolific philanderer. But Kirk was fundamentally married to his ship, the Enterprise. In an essay collected in Star Trek as Myth, John Shelton Lawrence ties this all into a classic American monomyth, suggesting that Kirk also represented

…sexual renunciation, a norm that reflects some distinctly religious aversions to intimacy. The protagonist in some mythical sagas must renounce previous sexual ties for the sake of their trials. They must avoid entanglements and temptations that inevitably arise from satyrs, sirens, or Loreleis in the course of their travels…The protagonist may encounter sexual temptation symbolizing ‘that pushing, self-protective, malodorous, carnivorous, lecherous fever which is the very nature of the organic cell,’ as Campbell points out. Yet the ‘ultimate adventure’ is the ‘mystical marriage…of the triumphant hero-soul with the Queen Goddess’ of knowledge.

All of a sudden, Captain Kirk has become a lot more interesting! And moments such as Kirk eating the apple in Star Trek II: The Wrath of Khan suddenly make more sense beyond the belabored Project Genesis metaphor. We now see how Roddenberry’s idea of a nomad philanderer and Forester’s notion of the Man Alone actually take us to a common theme of marriage with the Queen Goddess of the World. One could very well dive into the Kirk/Hornblower archetype at length. But thanks to iconology, we now have enough information here to launch a thoughtful discussion — ideally with each of the participants offering vivacious impersonations of William Shatner — about why the “ultimate adventure” continues to crop up in various cultures and how Star Trek itself was a prominent popularizer of this idea.

Now that we know what iconology is, we can use it — much as Panofsky does in Studies in Iconology — to understand why Piero di Cosimo was wilder and more imaginative than many of his peers. (And for more on this neglected painter, who was so original that he even inspired a poem from Auden, I recommend Peter Schjeldahl’s 2015 New Yorker essay.) Panofsky points out how Piero’s The Finding of Vulcan on Lemnos (pictured above) differs in the way that it portrays the Hylas myth, whereby Hylas went down to the river Ascanius to fetch some water and was ensnared by the naiads who fell in love with his beauty. (I’ve juxtaposed John William Waterhouse’s Hylas and the Nymphs with Piero so that you can see the differences. For my money, Piero edges out Waterhouse’s blunter version of the tale. But I also chose the Waterhouse painting to protest the Manchester Art Gallery’s passive-aggressive censorship from last year. You can click on the above image to see a larger version of both paintings.) For one thing, Piero’s painting features no vase or vessel. There is also no water or river. The naiads are not seductive charmers at all, but more in the Mean Girls camp. And Hylas himself is quite helpless. (The naiad patting Hylas on the head is almost condescending, which adds a macabre wit to this landlocked riff.) Piero is almost the #metoo version of Hylas to Waterhouse’s more straightforward patriarchal approach. This is largely because Piero not only had a beautifully warped imagination but was also relying, like many Renaissance painters, upon post-classical commentaries rather than the direct source of the myths themselves. And we are able to see how a slight shift in an artist’s inspiration can produce a sui generis work of art.

Panofsky is on less firm footing when he attempts to apply iconology to sculptures and architecture. His attempts to ramrod Michelangelo into the Neoplatonic school were unpersuasive to me. In analyzing the rough outlines of a monkey just behind two of Michelangelo’s Slaves (the “dying” and the “rebellious” ones) in the Louvre, Panofsky rather simplistically ropes the two slaves into a subhuman class and then attempts to suggest that Ficino’s concept of the Lower Soul — which is a quite sophisticated concept — represents the interpretive smoking gun. This demonstrates the double-edged sword of iconology. It may provide you with a highly specific framework with which to reconsider a great work of art, but it can be just as clumsily mistaken for the absolute truth as any lumbering ideology.

Then again, unless you’re an insufferable narcissist who needs to be constantly reminded how “right” you are, it’s never any fun to discuss art and ideas with people who you completely agree with. Panofsky’s impact on art analysis reminds us that iconology is one method of identifying the nitty-gritty and arguing about it profusely and jocularly for hours, if not decades or centuries.

Next Up: Edmund Morris’s The Rise of Theodore Roosevelt!

Scoop (Modern Library #75)

(This is the twenty-sixth entry in The Modern Library Reading Challenge, an ambitious project to read the entire Modern Library from #100 to #1. Previous entry: The Prime of Miss Jean Brodie.)

When I last dived into Evelyn Waugh’s exquisite comic fiction for this crazy project nearly six years ago, I wrote a sour essay in which I permitted my hostility towards Waugh’s pugnacious life and his reactionary politics to overshadow my appreciation for his art. Perhaps the way I read fiction has changed. Or perhaps the idea of completely discounting a writer’s achievements with the histrionic tone of an upbraiding Pollyanna who doesn’t possess a scintilla of self-awareness now fills me with a dread I usually associate with wincing at a tax bill or standing in a needlessly long line for a pizza slice. Whatever the case, I allowed myself to zero in on Brideshead Revisited‘s weaker elements (namely, the deplorable gay stereotype Anthony Blanche) without possessing the decency to praise that novel’s excellent prose in any way. This was decidedly uncharitable of me. For Waugh was, for all of his faults, a master stylist. That I was also bold enough to rank Wodehouse over Waugh was likewise problematic (although I would still rather read Pip, I have never been able to get into the Sword of Honour trilogy, and I still feel that Waugh was more or less finished as an author after The Loved One; incidentally, Waugh himself called Wodehouse “the Master”). At the time, the eminently reasonable Cynthia Haven offered what I now deem to be appropriate pushback, observing that I brought a lot of “post-modern baggage” into my reading. My “take” on that novel’s Catholic dialogue was, I now realize after diving into Waugh again, driven by a cocky yahooism that is perhaps better deployed while knocking back pints in a sports bar and claiming that you’re a big fan of the team everybody else is cheering for. Never mind that the names of the players are only lodged in your memory by the blinding Chyron reminders and the bellowing cries of histrionic announcers that work together to perfect a sense-deadening television experience.

Anyway, I’ll leave cloud cuckoos like Dave Eggers to remain dishonest and pretend they never despised great novels. I’d rather be candid about where I may have strayed in my literary judgement and how I have tried to reckon with it. In a literary climate of “No haters” (and thus no chances), we are apparently no longer allowed to (a) voice dissenting opinions or (b) take the time to reassess our youthful follies and better appreciate a novel that rubbed us the wrong way on the first read. Wrestling with fiction should involve expressing our hesitations and confessing our evolving sensibilities and perceiving what a problematic author did right. And so here we are. It has taken many months to get here, but it does take time to articulate a personal contradiction.

So here goes: As much as I appreciate Scoop‘s considerable merits (particularly the fine and often hilarious satire when the book takes place on Waugh’s home turf), I cannot find it within me to endorse this novel’s abysmally tone-deaf observations on a fictitious Abyssinia — here, Ishmaelia. There are unsophisticated thoughts cloaked beneath the light fluidity of Waugh’s exacting pen that many of his acolytes — including The Observer‘s Robert McCrum and NPR’s Alexander Nazaryan — refuse to acknowledge. There’s no other way to say this, but Waugh is more nimble with his gifts when he bakes his pies with an anglophonic upper crust. And that ugly truth should give any reader or admirer great pause. (Even Selina Hastings, one of his biographers, was forced to concede this. And McCrum, to his credit, does at least write that “Scoop derives less inspiration from Ethiopia,” although this is a bit like stating that Paul Manafort merely muttered a little white lie.) Waugh’s limitations in Scoop are not as scabrous as Black Mischief — a novel so packed with racism that it’s almost the literary equivalent to Louis C.K.’s recent attempts at a comeback. But his “insights” into Africa are still very bad, despite all the other rich wit contained within the book. Waugh cannot see anyone who does not share his lily-white complexion as human. His creatively bankrupt view of Africans as bloodthirsty cannibals or “crapulous black servants” or “a natty young Negro smoking from a long cigarette holder” carries over from Black Mischief. “A pious old darky named Mr. Samuel Smiles Jackson” is installed as President. I was rankled by the constant cries of “Boy!” from the assorted journos, late risers who complain about not getting swift servitude with a smile. (“Six bloody black servants and no breakfast,” sneers the entitled Corker at one point.) Even the potentially interesting politics behind Ishmaelia’s upheaval are coarse and general, with the arrival of Dr. Benito at a press conference described in one paragraph with a contrast of “blacks” and “whites” that shows the force and timing of a man determined to be vituperative, but without substantive subtlety. One of the book’s jokes involves a nonexistent city on the nation’s map identified as “Laku,” which is Ishmaelite for “I don’t know.” And while it does allow for a decent setup in which numerous journalists expend lavish resources to find Laku for their stories, I suspect that this is really Waugh confessing he doesn’t know and can’t know because he doesn’t want to.

Still, in approaching Scoop, I was determined to give this book more care than what I doled out to Brideshead. Not only did I spend a few months rereading all of Waugh’s novels up through Brideshead, finding them considerably richer than I did on my first two canon reads, but I also dived into the Selina Hastings and Martin Stannard biographies, along with numerous other texts pertaining to Scoop. And one cannot completely invalidate Waugh’s talent:

“Why, once Jakes went out to cover a revolution in one of the Balkan capitals. He overslept in a carriage, woke up at the wrong station, didn’t know any different, got out, went straight to a hotel, and cabled off a thousand-word story about barricades in the streets, flaming churches, machine guns answering the rattle of his typewriter as he wrote, a dead child, like a broken doll, spreadeagled in the deserted roadway below his window — you know. Well, they were pretty surprised at his office, getting a story like that from the wrong country, but they trusted Jakes and splashed it in six national newspapers. That day every special in Europe got orders to rush to the new revolution.”

This is pitch-perfect Waugh. Sadly, the wanton laziness of journalists and willful opportunism of newspaper publishers remain very applicable eighty-one years after Scoop‘s publication. In 2015, a Hardin County newspaper misreported that the local sheriff had said that “those who go into the law enforcement profession typically do it because they have a desire to shoot minorities.” And this was before The New York Times became an apologist outlet for Nazis (the original title of that linked article was “In America’s Heartland, the Nazi Sympathizer Next Door”) and didn’t even bother to fact-check an infamous climate change denial article from Bret Stephens published on April 28, 2017.

So Scoop does deserve our attention in an age devoted to “alternative facts” and a vulgar leader who routinely squeezes savage whoppers through his soulless teeth. Waugh uses a familiar but extremely effective series of misunderstandings to kickstart his often razor-sharp sendup, whereby a hot writer by the name of John Courteney Boot is considered to be the ideal candidate to cover a war in Ishmaelia for The Daily Beast (not to be confused with the present Daily Beast founded by Tina Brown, who took the name from Waugh — and, while we’re on the subject of contemporary parallels, Scoop also features a character by the name of Nannie Bloggs, quite fitting in an epoch populated with dozens of nanny blogs). John Boot is confused with William Boot, a bucolic man who writes a nature column known as Lush Places and believes himself to be in trouble with the top brass for substituting “beaver” with “great crested grebe” in a recent installment. He is sent to cover a war that nobody understands.

The novel is funny and thrilling in its first one hundred pages, with Waugh deftly balancing his keen eye for decor (he did study architecture) with these goofy mixups. Rather tellingly, however, Waugh does spend a lot of time with William Boot in transit to Ishmaelia, almost as if Waugh is reluctant to get to the country and write about the adventure. And it is within the regions of East Africa that Waugh is on less firm footing, especially when he strays from the journalists. Stannard has helpfully observed that, of all Waugh’s pre-war novels, Scoop was the most heavily edited and that it was the “political” sections with which Waugh had “structural problems.” But Scoop‘s problems really amount to tonal ones. Where Erskine Caldwell’s Tobacco Road (ML #91) brilliantly holds up a mirror to expose the audience’s assumptions about people (with the novel’s Broadway adaptation inspiring a tremendously interesting Ralph Ellison essay called “An Extravagance of Laughter,” which many of today’s self-righteous vigilantes should read), Scoop seems more content to revel in its atavistic prejudices.

In 2003, Christopher Hitchens gently bemoaned the “rank crudity” of Waugh’s childish names for side characters. And I think he was right to pinpoint Waugh’s declining powers of invention. For all of Scoop‘s blazing panoramas and descriptive sheen (the prose committed to the Megalopolitan offices is brilliant), the ultimate weakness of the book is that Waugh seems incapable of imbuing Ishmaelia with the same inventive life that he devotes to England. When one looks at the travel writing that came before this, even the high points of Waugh in Abyssinia are the sections where he bitches about his boredom.

Waugh’s writing was often fueled by a vicious need for revenge and an inability to let things go. Take the case of Charles Crutwell, the Hertford dean who praised Waugh on his writing and awarded him an Oxford scholarship as a young man. Waugh proceeded to be incredibly lazy about his studies, deciding that he had earned this financial reward, that he no longer needed to exert himself in any way, and that he would spend his time boozing it up and getting tight with his mates. Crutwell told Waugh that he needed to take his research more seriously. He could have had Waugh expelled, but he didn’t. And for this, Crutwell became the target of Waugh’s savage barbs throughout much of his early writing and many of his novels. In Decline and Fall, you’ll find Toby Crutwell as an insane burglar turned MP. In Vile Bodies, a “Captain Crutwell” is the snobby member of the Committee of the Ladies’ Conservative Association at Chesham Bois. There’s a Crutwell in Black Mischief and A Handful of Dust. Waugh’s story “Mr. Loveday’s Little Outing” was originally titled “Mr. Crutwell’s Little Outing.” And in one of Scoop‘s supererogatory chapters, William Boot meets a General Crutwell who has had numerous landmarks named after him. Keep in mind that this is sixteen years after the events in Hertford. You want to take Waugh aside, buy him a beer, and say, “Bro, walk away.”

Now I have to confess that this type of brutal targeted satire was catnip for me at a certain impressionable age that lingered embarrassingly long into my late thirties. The very kind George Saunders tried to get me to understand this twelve years ago during an episode of my old literary podcast, The Bat Segundo Show, in which we were discussing the way Sacha Baron Cohen singled out people with total malice. Cohen’s recent television series Who Is America? certainly upheld Saunders’s point. Of course, I stubbornly pushed back. Because ridicule is a hell of a drug. Just ask anyone with a Twitter account. But I now understand, especially after contending with Waugh again, that effective satire needs to be more concerned with exposing and virulently denouncing those in actual power, railing against the tyrannical institutions that diminish individual lives, and, of course, exposing the follies of human behavior. Waugh does this to a large extent in Scoop, and his observations about newspapermen running up large tabs on their expense accounts and manipulating the competition are both funny and beautiful. But he also appears to have been operating from an inferiority complex: an intense need for victory against his perceived oppressors, and something that, truth be told, represents a minor but nevertheless troubling trait I recognize in myself, one that has caused much of my own writing and communications with people to be vehemently misunderstood, if not outright distorted into libelous and untrue allegations. When your motivation to write involves the expression of childish snubs and pedantic rage without a corresponding set of virtues, it is, from my standpoint, failed satire. And I don’t know about you, but my feeling is that, if you’re still holding a grudge against someone after five or six years, then the issue is no longer about the person who wronged you, but about a petty and enduring narcissism on behalf of the grudgeholder.
What precisely do these many Crutwells add to Waugh’s writing? Not much, to tell you the truth.

We do know that, when Waugh covered Abyssinia, he wrote in a letter to Penelope Betjeman, “I am a very bad journalist, well only a shit could be good on this particular job.” So perhaps there was a part of Waugh that needed to construct a biting novel from his own toxic combination of arrogance and self-loathing.

But Waugh’s biggest flaw as a writer, however great his talent, was his inability to summon empathy or a humanistic vision throughout his work, even if it is there in spurts in Brideshead and perhaps best realized in his finest novel, A Handful of Dust. When William Boot falls in love with Kätchen, a poorly realized character at best, Waugh has no interest in portraying Boot’s feelings as anything more than those of a dopey cipher who deserves our contempt: “For twenty-three years he had remained celibate and heart-whole; landbound. Now for the first time he was far from sure, submerged among deep waters, below wind and tide, where huge trees raised their spongy flowers and monstrous things without fur or feather, wing or foot, passed silently in submarine twilight. A lush place.” It is one thing to present Boot clumsily setting up an unnecessary canoe or showing the way he gets hoodwinked over a heavy package of stones or not understanding basic journalism jargon and to let Boot’s bumbling behavior (or, for that matter, the apposite metaphor of a three-legged dog barking in a barrel just outside Kätchen’s home) speak for itself. It is quite another thing to stack the deck against your protagonist with a passage like this, however eloquent the condemnation. What Waugh had not learned from Wodehouse was that there was a way of both recognizing the ineptitude of a dunderhead while also humanizing his feelings. You can lay down as many barbs as you like in art, but, at a certain point, if you’re any good, the artistic expression itself has to evolve beyond mere virtuosic style. This, in my view, is the main reason why Waugh crumbled and why I think his standing should be reassessed. The vindictiveness in Black Mischief, however crucially transgressive at the time, still represented a failure of creative powers.
All Waugh had left at the end was a bitter nostalgia for a lost Britannia and a fear of modernity, which amounted to little more than an old man pining for the good old days by the time Waugh got to his wildly overrated Sword of Honour trilogy (and by the time Louis C.K. returned on stage with his first full set littered with racism, transphobia, and scorn for the young generation). If Waugh had learned to see the marvel of a changing world and if he had embraced human progress rather than fleeing from it, he might have produced more substantive work. But, hey, here I am talking about the guy nearly a century later, largely because he’s on a list. Still, even today, young conservative men have adopted the tweedy analog look of a “better time.” So maybe the joke’s on me. Thankfully the next Waugh novel I have to write about, A Handful of Dust (ML #34), is a legitimate masterpiece. So I will try to give Waugh a more generous hearing when we get there in a few years. For now, I’m trying to shake off his seductive spite as well as the few remaining dregs of my own.

Next Up: Ernest Hemingway’s A Farewell to Arms!

The Face of Battle (Modern Library Nonfiction #81)

(This is the twentieth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Strange Death of Liberal England.)

Thy fanes, thy temples to the surface bow,
Commingling slowly with heroic earth,
Broke by the share of every rustic plough:
So perish monuments of mortal birth,
So perish all in turn, save well-recorded worth;

— Lord Byron, Childe Harold’s Pilgrimage

I must confess from the outset that the study of armed human conflict, with its near Spartan fixation on tactics and statistics, has long filled me with malaise. It is among the least sexy subjects that a polymath of any type can devote her attentions to, akin to cracking open a thick, limb-crushing tax code volume written in a way that obliterates all joy and finding a deranged pleasure within this mind-numbingly dull amalgam of numbers and turgid prose. As Margaret Atwood once quipped in a poem, the military historian says, “I don’t ask why, because it is mostly the same.” And when the song remains the same, why would anyone other than a ketamine fiend dance to it?

I’ve long pictured the military historian as some aging jingoistic white male whose idea of a good time involves blasting John Philip Sousa from a set of speakers that should be devoted to happening hip-hop: a lonely and humorless parasite who moves cast-iron figures across a threadbare map in some dusty basement, possibly talking to himself in a gruff tone that uncannily mimics Rod Steiger’s inebriated cadences. He seems overly enamored of the dry details of ordnance, mirthless arrows, and terrain circles. Perhaps he fritters away his time in some homebuilt shack far off the main artery of Interstate 76, ready to reproduce well-studied holes with his Smith & Wesson should any nagging progressive come to take away his tattered Confederate flag or any other paleolithic memorabilia that rattles his martial disposition. But let’s say that such a man is committed to peace. Then you’re left with his soporific drone as he dodders on about some long dead general’s left flank attack in the most unpalatable ramble imaginable. He prioritizes a detached tabulative breakdown over the more palpable and poignant truths that motivate men. He doesn’t seem to care about how a soldier experiences trauma or summons bravery in impossible conditions, or how these battles permanently alter nations and lives. The military historian is, in Napoleonic short, a buzz killer despite his buzz cut. Indeed, military history is so embarrassing to read and advocate that, only a few weeks ago, I was forced to hide what I was reading when a woman started flirting with me at a bar. (I sheepishly avoided revealing the title to her for fifteen minutes. Nevertheless, she persisted. And upon seeing The Face of Battle, the woman in question rightfully headed for the hills, even after I offered to buy her a drink.)

There are quite a few military history books on the Modern Library list. So I’m more or less fucked. It is not that war itself does not interest me. Human beings have been fighting each other since the beginning of time and only a soulless anti-intellectual fool resolutely committed to the vulgar act of amusing himself to death would fail to feel anything pertaining to this flaw in the human makeup. The podcaster Dan Carlin, who specializes in military history, is one of the few people who I can listen to in this medium for many hours and remain completely enthralled. But that is only because Carlin is incredibly skilled at showing how the paradigm shifts of war influence our everyday lives. Christopher Nolan’s Dunkirk was a remarkable film that hurled its audience into the dizzying depths of war, but this is merely a vicarious sensory experience. I can get behind Paul Fussell’s The Great War and Modern Memory (ML NF #75) because of that book’s cogent observations on how war influenced literary culture. Neil Sheehan’s A Bright Shining Lie (ML NF #84) remains a journalistic masterpiece that I very much admire — in large part because of its razor-sharp commitment to human psychology, which in turn allows us to understand the miasmic madness of making tactical decisions (see that book’s incredible “Battle of Ap Bac” chapter). But I’d hesitate to categorize either of these two brilliant volumes within the exacting genre of unadulterated military history. I’ve always had the sense that there’s an underlying bellicosity, if not an outright advocacy of warfare, with books that are exclusively dolled up in camo.

So upon reading The Face of Battle, it was something of a relief to see that John Keegan was up front from the get-go about what military history fails to do, and why accounts of battles are so problematic. He begins the book saying that he has never seen or been in a battle. And this is a hell of a way to open up a book that professes to give us the lowdown on what war is all about. It is a genuinely humble statement from someone who has made a career out of being an expert. He openly points to military history’s major weakness: “the failure to demonstrate connection between thought and action.” “What of feeling?” I thought as I read this sentence. According to Keegan, historians need to keep their emotions on a leash. And the technical example he cites — the British Official History of the First World War — is an uninspiring passage indeed. So what is the historian to do? Quote from the letters of soldiers. But then Keegan writes, “The almost universal illiteracy, however, of the common soldier of any country before the nineteenth century makes it a technique difficult to employ.” Ugh. Keegan!

From Ilya Berkovich’s Motivation in War: The Experience of Common Soldiers in Old-Regime Europe:

Considering the social origins of most eighteenth-century soldiers, one might think that literate soldiers were uncommon. However, literacy among the lower classes in old-regime Europe was becoming less exceptional. It is estimated that up to 40 per cent of the labouring poor in Britain were literate. Between 1600 and 1790, the portion of French bridegrooms signing their parish records doubled to about half of the total male population. Interestingly, the corresponding figures in northern and eastern frontier regions, which provided most French recruits, were much higher, with some areas coming close to universal literacy. Literacy rates in the Holy Roman Empire fluctuated widely, yet it is telling that over 40 per cent of the day labourers in mid-century Coblenz were able to sign their names. In rural East Prussia, one of the poorest regions in Germany, comparable figures were reached in 1800, although this was still a fourfold increase compared to only half-a-century before….

And so on. Fascinating possibilities for scholarship! It seems to me that someone here did not want to roll up his sleeves and get his hands dirty.

You see the problems I was having with this book. On one hand, Keegan wants to rail against the limitations of military history (and he should! you go, girl!). On the other hand, he upholds the very rigid ideas that stand against the execution of military history in a satisfying, fact-based, and reasonably emotional way that allows voluble chowderheads like me an entry point.

But that’s not the main focus of this book. Keegan settles upon three separate events — the Battle of Agincourt on October 25, 1415, the Battle of Waterloo on June 18, 1815, and the first day of battle on the Somme (July 1, 1916) — to seek comparisons, commonalities, and various parallels that we might use to understand military mechanics. He is duly reportorial in each instance, but overly fond of taxonomy rather than tangibility. Still, there are moments when Keegan’s bureaucratic obsessiveness is actually interesting — such as his examination of British archers and infantry running up against French cavalry during Agincourt. After all, if a horse is charging its way into a man, either the horse is going to run away, men are going to be knocked down, or there’s going to be a “ripple effect” causing open pockets on each side of the horse. So it’s actually quite extraordinary to consider how the French got their asses kicked with such a clear advantage. Well, the British did this with stakes, which impaled the horses. And the threat of this obstacle caused the French to retreat with their backs to the British, resulting in archers lobbing arrows into their vertebrae.

Keegan informs us that “the force of unavoidable circumstances” sealed the fate of the French and allowed Henry V to win at Agincourt. When Keegan gets to Waterloo, we see a similar approach adopted by Napoleon near the end. Large crowds of French infantry rushed towards the British line, landing within mere yards. The two armies exchanged fire and the French, at a loss of what to do, turned around and fled. This was not an altogether smart strategy, given the depleting reserves that Napoleon had at his disposal. But it does eloquently demonstrate that battles tend to crumble once one side is locked into an unavoidable choice. The rush of men on both sides into the trenches at the Somme in 1916, of course, not only escalated this to an unprecedented scale of atrocity, but essentially laid down the flagstones for the 20th century’s practice of mutually assured destruction.

These are vital ideas to understand. Still, I’m not going to lie. Keegan was, in many ways, dull and soporific — even for a patient reader like me. I learned more about Henry V’s campaign by reading Juliet Barker’s excellent volume Agincourt, which not only unpacked the incredible logistics of invading northwestern France with engrossing aplomb but also juxtaposed this campaign against history and many vital realities about 15th century life. And a deep dive into various World War I volumes (I especially recommend Richard Aldington’s surprisingly ribald novel, Death of a Hero) unveiled a lot of unanticipated sonic transcriptions that inspired me to draft an audio drama script that I hope to produce in a few years. Keegan is certainly helpful in a dry intellectual manner — the equivalent of being served a dull dish of desiccated biscuits when you haven’t eaten anything for days; I mean, there’s a certain point at which you’ll gorge on anything — but he’s not the man who inspired me about battle. Hell, when one of the most boring and pretentious New Yorker contributors of all time espouses Keegan’s “matchlessly vivid pen,” you know there’s a reason to hide beneath your blanket. Keegan is undoubtedly on this list because nobody before him had quite unpacked war from the bottom-up approach rather than the general’s top-down viewpoint. But like most military historians, he didn’t have enough of a heart for my tastes. There’s a way to present a detailed fact-driven truth without being such a detached fussbucket about it. And we shall explore and exuberantly praise such virtuosic historians in future Modern Library installments!

Next Up: Erwin Panofsky’s Studies in Iconology!

The Strange Death of Liberal England (Modern Library Nonfiction #82)

(This is the nineteenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Vermeer.)

It was a picnic-perfect summer in 1914. The rich flaunted their wealth with all the subtlety of rats leaping onto a pristine wedding dress. The newspapers steered their coverage away from serious events to pursue lurid items about sports and celebrity gossip. A comic double act by the name of Collins & Harlan recorded an absurd ditty called “Aba Daba Honeymoon,” which Thomas Pynchon was to describe fifty years later as “the nadir of all American expression.” Few human souls twirling their canes and parasols in these conditions of unbridled frivolity could have anticipated that an archduke’s assassination in late June would plunge Europe into a gruesome war that would leave twenty million dead, permanently altering notions of honor, bloodshed, and noblesse oblige.

But even a few years before the July Crisis, there were strong signs in England that something was amiss. Politicians demonstrated a cataclysmic failure to read or address the natural trajectory of human progress. Women justly demanded the right to vote and were very willing to starve themselves in prison and burn down many buildings for it. Workers fought violently for fair wages, often locked into stalemates with greedy mining companies. They were intoxicated by a new militant brand of syndicalism from France then popularized by Georges Sorel. The atmosphere was one of increasing upheaval and escalated incoherence, even among the most noble-minded revolutionaries. The influx of gold from Africa inspired both lavish spending and an inflated currency. The liberals in power were supposed to stand up for the working stiffs who couldn’t quite meet the rising prices for boots and food and clothes with their take home pay. And much like today’s Democratic Party in the States, these tepid Parliamentary wafflers past their Fabian prime revealed a commitment to ineptitude over nuts-and-bolts pragmatism. They allowed the Tories to play them like rubes losing easy games of three-card monte. Amidst such madness, England became a place of oblivious tension not dissimilar to the nonstop nonsense that currently plagues both sides of the Atlantic. With the middle and upper classes keeping their heads in the clouds and their spirits saturated in moonbeam dreams and a bubble gum aura, is it any wonder that people were willing to incite war and violence for the most impulsive reasons?

George Dangerfield’s The Strange Death of Liberal England examines this crazed period between 1910 and 1914 with an exacting and quite entertaining poetic eye. Dangerfield, an erudite journalist who parlayed his zingy word-slinging into a teaching career, is somewhat neglected today, but his remarkable knack for knowing when to suggest and when to stick with the facts is worthy of careful study, a summation of the beautifully mordant touch he brought as a historian. He describes, for example, the “dismal, rattling sound” of Liberalism refusing to adjust to the times, and eloquently sends up the out-of-touch movement in a manner that might also apply to today’s neoliberals, who stubbornly refuse to consider the lives and needs of the working class even as they profess to know what’s best for them:

[I]t was just as if some unfortunate miracle had been performed upon its contents, turning them into nothing more than bits of old iron, fragments of intimate crockery, and other relics of a domestic past. What could be the matter? Liberalism was still embodied in a large political party; it enjoyed the support of philosophy and religion; it was intelligible, and it was English. But it was also slow; and it so far transcended politics and economics as to impose itself upon behaviour as well. For a nation which wanted to revive a sluggish blood by running very fast and in any direction, Liberalism was clearly an inconvenient burden.

Dangerfield knew when to let other people hang themselves by their own words. The infamous Margot Asquith, the starry-eyed socialite married to the Prime Minister who led England into World War I, is quoted at length from her letters to Robert Smillie, the brave union organizer who fought on behalf of the Miners’ Federation of Great Britain. Asquith, so fundamentally clueless about diplomacy, could not understand why meeting Smillie might be a bad idea given the tense negotiations.

I did feel that Dangerfield was unduly harsh on Sylvia Pankhurst, one of the key organizers behind the suffragette movement. His wry fixation upon Pankhurst’s indomitable commitment — what he styles “the fantastic Eden of militant exaltation” — to starvation and brutality from the police, all in the brave and honorable fight for women, may very well be a product of the 1930s boys’ club mentality, but it seems slightly cheap given how otherwise astute Dangerfield is in heightening just the right personality flaws among other key figures of the time. The Pankhurst family was certainly eccentric, but surely they were deserving of more than just cheap quips, such as the volley Dangerfield lobs as Christabel announces the Pankhurst withdrawal from the WSPU (“She made this long-expected remark quite casually — she might almost have been talking to the little Pomeranian dog which she was nursing.”).

Still, Dangerfield was the master of the interregnum history. His later volume, The Era of Good Feelings, examined the period between Jefferson and Jackson and is almost as good as The Strange Death. One reads the book and sees the model for Christopher Hitchens’s biting erudite style. (The book was a favorite of Hitch’s and frequently cited in his essays.)

But it is clear that Dangerfield’s heart and his mischievous vivacity resided with his homeland rather than the nation he emigrated to later in life. In all of his work, especially the material dwelling on the United Kingdom, Dangerfield knew precisely what years to hit, the pivotal moments that encapsulated specific actions that triggered political movements. As he chronicles the repercussions of the June 14, 1911 strike in Southampton, he is careful to remark upon how “it is impossible not to be surprised at the little physical violence that was done — only a few men killed, in Wales in 1912, and two or three in Dublin in 1913; in England itself not a death. Is this the effect of revolutionary methods, and, if so, do the methods deserve the word?” He then carries on speculating about the pros and cons of peaceful revolution and ties this into the “spiritual death and rebirth” of English character. And we see that Dangerfield isn’t just a smartypants funnyman, but a subtle philosopher who leaves human possibilities open to the reader. He is a welcome reminder that seeing the real doesn’t necessarily emerge when you lock eyes on an alluring Twitch stream or a hypnotic Instagram feed. It comes when you take the time to step away, to focus on the events that are truly important, and to ruminate upon the incredible progress that human beings still remain quite capable of making.

Next Up: John Keegan’s The Face of Battle!

Vermeer (Modern Library Nonfiction #83)

(This is the eighteenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: A Bright Shining Lie.)

Johannes Vermeer was the Steph Curry of 17th century painters: a dazzling mack daddy who spent lengthy periods of his choppy forty-three-year life layering lapis lazuli and ultramarine and madder lake onto some of the most beautiful paintings ever created in human history. To ask how he perfected the glowing pour of his domestic scenes through painstaking brush strokes is to court trouble. Did he do so through mirrors and lenses? Does the Hockney-Falco theory have any real bearing on appreciating his work? Vermeer famously left no record of how he achieved his elegant handwrought touch, which has led many to become obsessed with the question, even taking the trouble (as Tim Jenison, subject of the controversial Penn and Teller documentary, did) to learn Dutch, which is a maddening language by all reasonable standards.

The great mystery of how this genius mastered light purely by eye and through no apparent line work, all two centuries before the camera’s invention, has been taken up by such feverishly committed investigators as Philip Steadman, an architect who meticulously measured Vermeer’s interiors and constructed a one-sixth scale model of his room to uphold the theory that Vermeer used a camera obscura. For now, our attentions are with Lawrence Gowing, a self-taught art historian whose Vermeer obsession resulted in a highly useful and slyly passionate book, a short but smart volume bizarrely downplayed in The New York Times‘s Gowing obituary, but a title that the Modern Library judges were at least munificent enough to rank above the likes of Robert Caro eight years after Gowing kicked the proverbial bucket of paint.

Gowing frames Vermeer’s achievements by observing that this painter, unlike his 17th century Dutch peers Gabriël Metsu and Jan Steen, eschewed line and overt modelling work. Vermeer’s purity as an artist emerged with his curious pursuit of diffuse light at all costs. He remained quite impartial about how light spilled into his scenes. As Gowing notes, even a detail such as The Lacemaker‘s cushion tassels (pictured left) “have an enticing and baffling bluntness of focus.” In an age when anyone can instantly snap a picture to memorialize how light drifts into a room, this revolutionary approach cannot be overstated, especially because Vermeer was confident enough in his aesthetic to push against the mercantile herd even as he served as head of the Guild of Saint Luke. In the seventeenth century, painters wanted to be noticed. They were, after all, artists with constantly grumbling bellies. So they tended to emphasize particular objects, even if it meant exaggerating the look, in an attempt to stand out. They might approach a patron and say, “Ha ha! I am Hendrik Van de Berg, the greatest painter of Maastricht! I have fifty thousand followers on…well, just imagine a world, preposterous as this may sound, in which short text messages determine your stature among peers and, yup, that would be me! Art King of Maastricht! Anyway, that nifty apple in the far right corner may look a little unnatural, but, dude, I think we can both agree that it really pops! And it will look good in your study while your starving servant polishes your boots and dreams of something to eat! Oh, I know you can’t pay your servants and that you are, in fact, fond of flogging them. But I am an artist and surely you can pay me! I’ll even throw in a complimentary whipping if you buy my work! Think of it as a patron reward!” Vermeer, by contrast, willfully blurred the apple.
Vermeer’s peers in his hometown of Delft understood what he was doing, but the cost of being an artist was, alas, premature death due to exacerbated financial stress.

Gowing’s gushing critical distinctions are a welcome reminder that it’s sometimes more important to know why art stands out rather than how it is created. The “No haters” crowd, fed on the soothing alfalfa sprouts of director’s commentaries and lengthy pop culture oral histories, would rather view Vermeer as a magician or a technical wizard than an artist. If Vermeer did use a camera obscura, he was certainly not the only Dutch painter doing so at the time. Gowing emphasizes that Vermeer’s style went above and beyond merely accumulating details. What should concern us is why he was so committed to the optical. What counts is Vermeer’s commitment to the visual experience: commonplace scenes that are somehow both radiant and persuasive depictions of reality. Gowing helpfully points out that Vermeer’s investigations of life were never direct. The paintings were often established at an oblique angle. He singles out Vermeer’s “inhuman fineness of temper,” a tranquility that is quite extraordinary given that Vermeer was working with ten kids running around and the financial turmoil he had to endure.

Gowing is also very good at only drawing upon Vermeer’s biography when it is pertinent. Vermeer’s detachment and his slow output certainly hinges upon disappointments and setbacks he contended with during the last years of his life. Still, one only needs to look at Vermeer’s paintings to feel their somewhat passive but stirring view of humanity. Gowing distinguishes Vermeer from other painters by observing that “with the passivity characteristic of his thought, he accepted this part of his nature as the basis of the expressive content of his style.” Somehow Vermeer could inject his view on humanity purely through style. And somehow in this stylistic transformation, what seems passive is actually carefully rendered depth. Despite confining his paintings to two rooms, Gowing finds enough common qualities within these limitations for us to get a sense of what Vermeer was up to:

In only three of the twenty-six interiors that we have is the space between painter and sitter at all uninterrupted. In five of the others passage is considerably encumbered, in eight more the heavy objects interposed amount to something like a barrier and in the remaining ten they are veritable fortifications. It is hard to think that this preference tells us nothing about the painter’s nature. In it the whole of his dilemma is conveyed.

The book’s second part is more akin to descriptive liner notes for a must-have box set and doesn’t quite match the first part’s perspicacity. But Gowing does provide several useful antecedents (such as Jan Van Bronkhorst’s The Procuress) that allow us to track Vermeer’s development as an artist. Again, because Vermeer didn’t leave much behind on his life or methods, it has been left for us to speculate on how he cultivated his exquisite style. But Gowing is too sharp a critic to be seduced by gossip and thankfully confines his findings to other paintings, showing us several paths leading us to Utrecht Caravaggism and trompe l’oeil.

I must warn you, however, that Gowing’s Vermeer, despite its ostensibly breezy length, will likely have you losing many hours studying Vermeer. What Gowing could not have foreseen is that his ruminations would be even more vital in a climate where some otherwise smart people believe that an ire-inducing and ill-considered think piece cobbled together in an hour constitutes serious thought.

Next Up: George Dangerfield’s The Strange Death of Liberal England!

The Prime of Miss Jean Brodie (Modern Library #76)

(This is the twenty-fifth entry in the The Modern Library Reading Challenge, an ambitious project to read the entire Modern Library from #100 to #1. Previous entry: Finnegans Wake.)

We are two days away from the great Muriel Spark’s 100th birthday. Yet, despite New Directions’s valiant reissue of her remarkable work only a few years ago (along with a quiet event planned on Thursday at the 92nd Street Y, which stands incommensurately like a shaking child in the vast shadow of Edinburgh’s impressive celebratory blowout), we are no closer to literary people universally singing her praises on this side of the Atlantic than we are in stopping men from wearing black socks to bed. And that’s a shame. Because Muriel Spark was truly one of the most innovative writers of the 20th century. She was a bold and economical stylist who packed far more attentive detail and character speculation into one paragraph than most contemporary writers wrangle into a chapter, and she did so with high style, grace, and ferocious wit. The Prime of Miss Jean Brodie, her most enduring and popular novel (and, through a magical twist of fate, the next volume in the Modern Library Reading Challenge), certainly sees Spark’s great gifts on full display, but it is also a book that demands constant and even obsessive study.

I have read Brodie four times within the last two years. It is very possible I will read it four more times within the next two. I am inclined to press this richly entertaining book, no more than a hundred pages, into the hands of anyone who purports to take literature seriously, but who has somehow ignored Spark to hold up some bland offering from one of those “Most Anticipated” lists published at The Millions that nobody will remember or quote from in a decade.

Brodie is both a portrait of an exuberant teacher determined to educate a carefully selected group of girls so that they may be better equipped when “in their prime” and an incredible tableau of 1930s Edinburgh, such as the “wind-swept hockey fields which lay like the graves of the martyrs exposed to the weather in an outer suburb.” Miss Brodie may or may not be a tyrant. (She is fond of Mussolini and Italian culture.) One can read the book anew and come away with an entirely different opinion of the title character. The novel tantalizes us with flash-forwards (which can also be found in many of Spark’s later novels, such as The Driver’s Seat and Territorial Rights, which are also well worth your time) revealing the fates of the schoolgirls in adult life, leaving us with impressions of how formative life and education unknowingly influence later years. One reads little snippets of the six girls under Miss Brodie’s tutelage from the present and the future — Rose “pulling threads from the girdle of her gym tunic” in class or Jenny not experiencing any sexual awe “until suddenly one day when she was nearly forty, an actress of moderate reputation married to a theatrical manager” — and asks how much Miss Brodie is responsible for corrupting fate, with Spark slyly implicating us as we become more curious.

Muriel Spark wrote this masterpiece in less than a month. This is especially amazing because, much like the magnetic properties contained within the glowing amber necklace Miss Brodie wears when off-screen romance inspires a new step in her exacting stride, this short novel reads as if an exquisite jeweler had painstakingly ensured that not a single element could ever fall out of alignment. And Spark sculpts many glistening carats along the way: the fictitious letters that two girls write after imagining Miss Brodie’s love life, the creepy, one-armed artist Teddy Lloyd who also teaches at the school and disguises his true pedophilic nature through the sham panacea of Catholicism and family life, and the lingering question of which schoolgirl betrays Miss Brodie and causes her to lose her job. The novel presents us with many hints and details that hide in plain sight, but that all contribute to an atmosphere in which the girls end up coming up with explanations (often fictitious and sometimes apostate) for what is both seen and not seen. Miss Brodie’s careful lessons, which include a field trip into a rougher part of Edinburgh and often involve knowing the roots of words to better understand them, are perhaps being applied in dangerous ways. And in an age where people judge people who they haven’t met based on what they think they know from a social media profile, The Prime of Miss Jean Brodie remains potent and necessary reading.

Spark’s lecture “The Desegregation of Art,” delivered before a crowd of New York literati on May 26, 1970, offers useful insights into the ambitious gauntlet she felt obliged to throw down as an artist and gives us a sense of what is very much at stake in Brodie. She firmly believed that literature existed to infiltrate and fertilize the mind and denounced any fiction that stood in the way of this lofty artistic goal. If that meant tossing out socially conscious art that was not “achieving its end or illuminating our lives any more,” then this was the price to pay for better art that reflected the depths and thorny hurdles of life. She insisted that “ridicule is the only honourable weapon we have left” and believed that addressing wrongs emerged not so much from instant outrage, but through “a more deliberate cunning, a more derisive undermining of what is wrong. I would like to see less emotion and more intelligence in these efforts to impress our minds and hearts.” Much as Spark detested being a victim in her life, she believed that art reveling in victimhood turned readers into oppressors.

So we are left with Brodie as a remarkable volume that fertilizes our minds even as it challenges our own interpretations. Spark’s honorable ridicule in Brodie may very well lie with the way she shrewdly sends up how people are perceived for their failings based on superficial shorthand. And this extends even to the hypnotic allure of Miss Brodie’s own teaching. At one point, Miss Brodie observes that “John Stuart Mill used to rise at dawn to learn Greek at the age of five” and that the teacher herself learned from this lesson. Mill is a particularly funny choice, given that this philosopher was known for utilitarianism and that we are seemingly experiencing a short “utilitarian” novel when we read Brodie. But, of course, we aren’t. For one wants to reread it yet again.

The intrepid literary adventurer plunging forward on a bold bender for real-life inspiration is often viewed with contempt by any practitioner transforming bits of his life into analeptic artistic truth withstanding the test of time. The adventurer shakily balances the author’s complete works like vertiginous trays stacked tall enough to scrape plaster flakes off the ceiling as the letters and the collected marginalia and the autobiographical tidbits are swirled into an overflowing flute by a jittery finger serving as a makeshift cocktail straw. If not written off as a slightly smarter TMZ reporter who has somehow retained the ability to read despite being barraged daily by Harvey Levin’s soul-destroying smile, such an apparent gossipmonger, even if she is cogent enough to know that fictional characters rarely spring from a singular source, is still tarnished as that rakish yenta who reads fiction for the wrong reasons.

As I have ventured further into this years-long Modern Library project, I’ve come around to the daring idea that, for certain sui generis authors (and Muriel Spark is certainly one of them), one may indeed find deeper appreciation in the way they forge art from the people surrounding them. It isn’t so much the schema of who matches up with whom that should concern us, but rather the fascinating way in which characters defy an easily identifiable origin, turning into a form of fictionalized life that feels just as real on the page as any spellbinding life experience. There is a fundamental difference between the novelist who runs out of raw biographical material mid-career, her limited inventive faculties and inherent disconnection with humanity dishearteningly revealed with mediocre and unconvincing and blandly repetitive offerings in late career (see, for example, the wildly overrated Joyce Carol Oates, surely one of the great living literary embarrassments in the early 21st century), and the novelist who seizes the reins of an indefatigable spirit that runs quite giddily to the very end.

For someone like Muriel Spark, who was fiercely protective of her privacy and her public image, this is not necessarily a slam-dunk proposition even when many of the real life details match up. The formidable literary biographer Martin Stannard secured Spark’s reluctant blessing to get his hands dirty on details occluded in Spark’s remarkably opaque autobiography, Curriculum Vitae. Stannard, like many before him, pegged Christina Kay, the schoolteacher who taught Spark at the age of twelve, as the predominant inspiration for “the real Miss Jean Brodie.” Both Kay and Brodie insisted that their girls were the “crème de la crème.” Miss Kay also took Spark and her fellow students on great cultural adventures into Edinburgh. Both were keen on Italy and shared a rather clueless interest in Mussolini. (As late as 1979, Spark would insist that Miss Brodie was not a fascist and that Brodie’s admiration for Il Duce had more to do with Benito’s powerful masculinity, as it was perceived in 1930, which leads one to ponder the 53 percent of white women who voted for Trump in 2016. Some weaknesses in human perception regrettably endure, despite the best history lessons.)

But much as the great Iris Murdoch regularly transcended reality to achieve jaw-droppingly marvelous art, which she defined as that which “invigorates without consoling,” one finds a similarly spellbinding spirit within Spark’s equally incredible novels. Once you read The Girls of Slender Means, The Prime of Miss Jean Brodie, Memento Mori, The Driver’s Seat, or A Far Cry from Kensington, if you have even the faintest desire to know how art works, you may find yourself obsessing over just how she was able to put so much into her novels. Ian Rankin, writer of the rightfully well-regarded Rebus novels, found himself precisely in this very position, reading The Prime of Miss Jean Brodie again and again over the course of thirty years and always finding new details, even wondering if the titular character was the hero or the villain. (Some of Rankin’s work on Spark when he was pursuing a Ph.D. is available online behind a paywall.)

And if you read Brodie, you may very well join us on this pleasantly fanatical quest. We are told at the end, with one of the characters hiding from the truth of how her life has been altered, “There was a Miss Jean Brodie in her prime.” And that seemingly innocent notion, in Spark’s nimble hands, is the white whale that turns any reader into Ahab.

Next Up: Evelyn Waugh’s Scoop!

A Bright Shining Lie (Modern Library Nonfiction #84)

(This is the seventeenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: West with the Night.)

Young scrappy soldiers came to walk the villages and the jungles and the rice paddies from all hopeful parts of America, itching to step into boots that matched the size and the bravery of their heroic fathers. They hungered to prove their manhood and their patriotism even as their spirits dwindled and their moral core dissipated as it became common knowledge that Vietnam was an unwinnable war. They came home in dishonor and disgrace, losers who had sacrificed their bodies and minds and souls in the name of failed American exceptionalism, and they were left to rot by their government and sometimes by their fellow citizens.

Much as the shellshocked men in World War I returned to their native soil facing similar indifference to their trauma and their pain, as memorably chronicled in Richard Aldington’s brutally mordant novel Death of a Hero, the men who served in Vietnam learned that the best years of their lives had been little more than a cruel joke, even when they defended napalm-soaking sorties that burned vast horrifying holes into villages and hospitals and fields and homes and schools that happened to be situated near a hopped up Huey often manned by a pilot who was losing his mind. Their collective shellshock was as commonplace as heartbreak and many dozen times more devastating. The Vietnam vets, who were all very brave and worthy of the same valor afforded the Greatest Generation (but never received their due), suffered PTSD and traumatic injuries and severe psychological damage. Every day of their lives after the war was a new battle against painful inner turmoil that spread to their families and their friends and their loved ones, stretching well beyond the poisoned polyester of the flapping American flag itself. It seemed that nobody wanted to hear their stories, much less any news about the one million civilians and Viet Cong soldiers who were slaughtered above the 17th parallel or the estimated 741,000 who died below it or the 312,000 people who died by direct order of various governments or the 273,000 Cambodians and the 62,000 Laotians.

They all died, and none of them needed to, because the conflict had escalated through the foggy hubris of war and the dogged jingoism of three U.S. Presidents and the exacting Pentagon number crunchers who believed they could will their analytical acumen into a guaranteed victory even when the truth was fudged and altered and far too frequently ignored and contemned. For all the Pentagon’s professed understanding, the imperious powers that be could not comprehend that the massive influx of American supplies would be plundered and reused by resourceful Viet Cong soldiers with a very long memory of history who learned how to take out Bell UH-1 helicopters and M-113 armored personnel carriers from the ground. They carried out the strategic hamlet program without providing basic needs to the very villagers who were supposed to be their allies. Most disastrously, the American interventionists severely underestimated the damage that the Ngo Dinh Diem regime was doing to South Vietnamese loyalty, culminating in the Buddhist Crisis of 1963, which persecuted religion in a manner shockingly similar to ongoing present-day American indignities against Muslims.

Somewhere between 1.5 million and 2.5 million people died in the Vietnam War. That’s close to the entire population of Chicago or the total population of Jamaica. It is the entire population of Nebraska. It is the combined population of Wyoming, Vermont, Washington D.C., and Alaska. It is the combined population of Iceland, Fiji, and Cyprus. It is a staggering and heartbreaking sum by any stretch of the imagination that should cause any human being to stop in his tracks and ponder how so much bloodshed could happen. Those who would blithely dismiss the study of all this as a priapic man’s game to keep close tabs on some completely insignificant item of celebrity gossip usually cannot comprehend the full scale of such unfathomable devastation and our duty to closely examine history so that such a bewildering bloodbath never happens again. And yet, even with the strong reception of Ken Burns’s recent documentary, the Vietnam War remains one of those subjects that Americans do not want to talk about, even when it epitomizes the toxic mix of Yankee Doodle Vanity, bureaucratic shortsightedness, savage masculinity, unchecked hypocrisy, credibility gaps, imperialist dishonesty, and cartoonish escalation of resources — all pernicious checkboxes that still mark American policy today.

We wouldn’t know of this American complicity without the invaluable work of reporters like Neil Sheehan and David Halberstam, who were raw and young and brash and sometimes foolhardy in their dispatches. It was undoubtedly their dogged free-wheeling approach, a fierce pursuit of journalistic truth that is unthinkable to such useless and unfathomably gullible New York Times company men as Richard Fausset and Peter Baker today, which caused Americans to ask questions about the war and eventually led Daniel Ellsberg to release the Pentagon Papers (which Sheehan himself would later acquire for the New York Times in 1971). The quest for understanding, especially in the conflict’s early years, proved just as intoxicating to these sleep-deprived and overworked journos as it did to the soldiers who kept coming back for further tours of duty. All wondered why common sense had been so rashly and cheaply surrendered.

Sheehan and Halberstam followed in the footsteps of such famous war reporters as Francois Sully, Homer Bigart, Malcolm Browne, and Horst Faas. (William Prochnau’s book Once Upon a Distant War is an excellent and vivacious account of this period, although not without its minor liberties. A 1988 Neil Sheehan profile that Prochnau wrote for The Washington Post, offering some useful carryover material for his book, is also available online.) The two men arrived in Vietnam separately in 1962. They had both attended Harvard, but had arrived at the hallowed university through altogether different routes. Sheehan came from a working-class Irish background and lucked out with a scholarship. By the time Sheehan arrived in Saigon, he was a reformed alcoholic and a tortured man who had learned the fine art of carving extra hours out of any day, a talent he had honed while running a dairy farm as a kid. Sheehan worked for the penny-pinching UPI wire service and, much as a contemporary journalist is expected to write, shoot and cut video, and preserve his crisp telegenic form if he wishes to hold onto his job, he was often responsible for logistics extending well beyond the writing and transmission of copy.

Halberstam was a tall and lanky man from a middle-class Jewish background, and decidedly brasher than Sheehan. His trenchant reporting of civil rights struggles in the South attracted the notice of The New York Times’s James Reston. Halberstam was a formidable if slipshod workhorse, banging out thousands of words per day that often had to be shoehorned into coherent shape by the exasperated Times team. But Halberstam’s reporting in the Congo was strong and gallant enough to land him in Saigon.

Sheehan and Halberstam would become friends and roommates, working very long days and often falling asleep at their typewriters. They chased any source that led them to demystify the war, but they were both seduced by a man named John Paul Vann, who became the subject of Sheehan’s journalistic masterpiece, A Bright Shining Lie. Halberstam would write two books from his Vietnam experience: The Making of a Quagmire, a short and useful 1965 volume that faded into obscurity within a decade, and The Best and the Brightest, a juicy and detailed top-down account of bureaucratic blunder that Stephen Bannon even pushed onto every member of the Trump transition team in February 2017 (as reported by the New York Times’s Marc Tracy). But Neil Sheehan, who carried on with a quieter and more methodical approach than Halberstam’s gigantic and flagrant “us vs. them” style, rightly decided that more time and considerable rumination and careful reporting were the way in. He wisely believed that John Vann was the key to understanding American involvement and the mentality behind it. The book would consume sixteen years of Sheehan’s life. And for all the anguish that Sheehan suffered through that long and painful period, we are incredibly lucky to have it.

John Vann was a wildly energetic colonel from Norfolk, Virginia, who could survive on four hours of sleep and sometimes none at all. He had built a military career on the “Vann luck.” He would willfully fly aircraft through a suicidal fusillade of fire and drive down dangerous roads that were known to be mined and patrolled by the Viet Cong. He would miraculously survive. Like Robert McNamara, he was very certain of how to win the war. But unlike McNamara, Vann did not rely on problematic data, but rather the know-how that comes from knowing people and the pragmatic logistics that he picked up from his experience in the battlefield, often talking with and distributing candy to the South Vietnamese citizens suffering under the Diem regime. It was through such gestures that Vann avoided a few attempts on his life. Vann was savvy enough to court the trust and admiration of reporters like Sheehan and Halberstam pining for a few dependable truth bombs, to the point where the reporters pooled their resources to buy him an engraved cigarette box when Vann left Vietnam the first time. But Vann would find a way back a few years later as an Agency for International Development official. He portrayed himself as a scrappy underdog whose candid bluster had prevented him from advancing to general, whose near twenty years of service and bravery and experience had simply not been heeded. But the truth of his checkered life, carefully concealed from many who knew him, told the real story.

Sheehan is both sensitive and meticulous in telling Vann’s tale. We cannot help but admire Vann’s dogged work ethic and charisma in the book’s first section, as we see Vann attempting to bring the ARVN (the Army of the Republic of Vietnam, the South Vietnamese army known to recklessly attack insurgents under Diem) together with the then comparatively diminutive American presence in an attempt to win the war. Vann hoped to train the ARVN to better fight against the guerillas, but faced indifference from Huỳnh Văn Cao, an ARVN colonel to whom Vann was appointed an adviser. Cao often liked to don the bluster of a general. We see Vann being kind to the common soldiers, whether peasants or seasoned regulars, but we also see Vann as an egomaniac willing to overstep his rank to get results. One of Vann’s guides to negotiating the tricky turmoil of Vietnam was a 1958 novel called The Ugly American, which depicted American diplomats in a fictitious nation named Sarkhan who proved incredibly arrogant towards the culture, customs, and language of the people. The book would inspire Kennedy so much that he sent copies of the book to every American Senator. (The Peace Corps would later become a Kennedy campaign talking point turned into a reality.) Vann would take an altogether different lesson from the book in attempting to turn Cao to his side by appealing to his ego and by flattering him. But in practice, Vann’s benign puppeteering as military command could result in disaster, such as a July 20, 1962 battle in the lower delta, in which Cao resisted Vann’s efforts to load helicopters with a second reserve to prevent Viet Cong soldiers from escaping by flatly declining the request. Such stalling allowed the Viet Cong more opportunities to pluck American ordnance, transforming .50 caliber machine guns into antiaircraft weapons through tireless ingenuity.

This communicative combativeness between the Americans and the ARVN would reach its nadir with the Battle of Ap Bac, which is one of the most gripping sections of Sheehan’s book. Vann would watch helplessly from an L-19 Bird Dog surveilling the battlefield as the ARVN delayed sending troops, not knowing that the Viet Cong had intercepted radio transmissions using stolen American equipment. This allowed the Viet Cong to strike hard and accurately against task forces that were effectively separated and caught adrift, leaving them open to attack. The American Hueys disregarded Vann’s orders and were hit by the Viet Cong. Vann, whose domineering tone could be off-putting, was unable to send M-113 carriers across the canals to save the remaining soldiers and reinforce the territory. Vann, increasingly desperate and flustered by the ARVN’s recalcitrance in advancing, approached Captain Ly Tong Ba, the ARVN man holding up support who said that he refused to take Americans, and ordered Robert Bays to “shoot that rotten, cowardly son of a bitch right now and move out.” The battle became the Viet Cong’s first major victory.

By presenting the facts in this manner, Sheehan leaves us with many lingering questions. Was Vann a somewhat more informed version of American interventionist arrogance? Was American might, in Vann’s obdurate form, needed to atone for serious deficiencies from Diem and the ARVN? Even if the ARVN had permitted the Americans to have more of a commanding hand, would not the Viet Cong have eventually secured a victory comparable to Ap Bac? Even at this stage in the book, Vann remains strangely heroic and we can sympathize with his frustration. But in allowing us to vicariously identify with Vann, Sheehan slyly implicates the reader in the desire to win by any means necessary.

And then Sheehan does something rather amazing in his portrait of Vann. In a section entitled “Taking on the System,” he broadens the scope to the soldiers and the command contending with Vann’s aggressiveness (while likewise exposing the hubris of civilian leadership under McNamara, along with the bomb-happy pacification strategy of Victor Krulak and the foolhardy optimism of MACV commander Paul Harkins). And we begin to see that the Vietnam quagmire, like any intense battle for victory and power, was absolutely influenced by strong and truculent personalities, which young reporters like Halberstam and Sheehan were rightfully challenging. Unable to get the top dogs to understand through meetings and communiques, Vann began to weaponize the press against Harkins’s reality distortion field — this as the Diem regime’s increasing persecution of the Buddhists revealed the vast fissures cracking into South Vietnamese unity. Sheehan begins to insert both Halberstam and himself more into the narrative. With Vann now retired from the Army, we are rightly left to wonder if he was indeed as indispensable as many believed him to be.

But then Sheehan backtracks to Vann’s past. And we begin to see that he had been living a lie. He pulled himself from an impoverished Virginian upbringing, where he was an illegitimate child raised by a wanton alcoholic mother, and married a respectable woman named Mary Jane. But while stationed as an Army officer, he cultivated a taste for underage girls and hushed up both his numerous affairs and the allegations, even persuading Mary Jane to lie for him during a court-martial for statutory rape and adultery while also training himself to pass a lie detector test. While stationed in Vietnam the second time, Vann could not control his sexual appetite. He carried on numerous affairs, devoting his attentions quite ardently to two mistresses who were half his age, one of whom had his child, and keeping the two women largely in the dark about each other for a sustained period. His predatory behavior presents itself as a lie even more unsettling than the Harkins-style prevarications that resulted in needless deaths.

In the end, the “Vann luck” could not hold out. His death in 1972, at least as portrayed by Sheehan, is almost anticlimactic: the result of a helicopter crashing into a series of trees. As Vietnam changed and the American presence grew with unmitigated enormity, Vann’s apparent know-how could not penetrate as an AID commander, even though Sheehan depicts Vann having many adventures.

A Bright Shining Lie isn’t just an epic history of Vietnam. It also reveals the type of conflicted and deeply flawed American personality that has traditionally been allowed to rise to the top, influencing key American decisions, for better or worse. I read the book twice in the last year and, particularly in relation to Vann’s obstinacy and his abuse of women, I could not help but see Donald Trump as a more cartoonish version of Vann’s gruff and adamantine bluster. But the present landscape, as I write these words near the end of 2017, a year that has carried on with an endless concatenation of prominent names revealed as creeps and abusers of power, is now shifting to one where a masculine, wanton, and ultimatum-oriented approach to command is no longer being tolerated. And yet, even after war has devastated a nation through such a temperament, it is possible for those who are ravaged by violence to be forgiving. In 1989, Sheehan returned to Vietnam for two profiles published in The New Yorker (these are collected in the volume After the War Was Over). In his trip to North Vietnam, Sheehan is baffled by the farmers and the villagers showing no bad blood to Americans:

I encountered this lack of animosity everywhere we went in the North and kept asking for an explanation. The first offered was that the Vietnamese had never regarded the entire American people as their enemy. The American government — “the imperialists” — had been the enemy; other Americans, particularly the antiwar protesters, had been on the Vietnamese side. This did not seem explanation enough for people like the farmer on the road to Lang Son. He had suffered dearly at the hands of Americans who had not been an abstract “imperialist” entity. One afternoon in a village near Haiphong, when Susan and I were with Tran Le Tien, our other guide-interpreter, we were received with kindness by a family who lost a son in the South. On the way back to Hanoi I said to Tien that there had to be more to this attitude than good Americans versus bad Americans. “It’s the wars with China,” Tien said. I decided he was right.

In other words, the enemy in war is the one that has most recently caused the greatest devastation. While the North Vietnamese’s forgiving character is quite remarkable in light of the casualties, perhaps it’s also incumbent upon all nations to be on the lookout for the character flaws in failed men who lead us into failed wars so that nothing like this ever has to happen again. Men do not have all the answers they often claim to possess, even when they look great on paper.

Next Up: Lawrence Gowing’s Vermeer!

Finnegans Wake (Modern Library #77)

(This is the twenty-fourth entry in The Modern Library Reading Challenge, an ambitious project to read the entire Modern Library from #100 to #1. Previous entry: Kim.)

It has been five years since I last tendered any heartfelt words about 20th century fiction for the Modern Library Reading Challenge. An infernal yet magnificent Irish genius is to blame for the delay. Five years is frankly too damned long, especially if I hope to complete this massive and somewhat insane project before I croak my own answer to Joyce’s “Does nobody understand?” Frank Delaney’s recent passing has made me keenly cognizant that being a wallflower is not an option when any of us could fall off the wall. (The poor man never got to finish Re: Joyce, his wonderful podcast on Ulysses.) So here we go.

What I have wondered during this Joyce-populated reading period is whether one should even attempt to match Jimmy Jimmy Jo Jo Bop’s unquestionable erudition, for this is the kooky bodkin he has wielded before readers. A Wake expert once told me that fencing with this book is comparable to being diagnosed with a disease. A good friend, as deeply moved by Ulysses as I am, told me that he never bothered with Finnegans Wake. I asked why. He said that he refused to play James Joyce’s game. I replied, “Yes, but in the name of Annah the Allmaziful, the Everliving, the Bringer of Plurabilities, you are missing out on some marvelous puns and portmanteaus and the limitless richness of an obscurant dreamscape!” But I do see my pal’s point. Where Ulysses provides us with an invitational beauty to be treasured and reconsidered at nearly any time in life, Finnegans Wake is the loutish intoxicating charmer for the young, the book declaring itself the cleverest in the room, the novel above all novels that says, “Well, if you really love literature…”

In attempting to come to terms with the Wake, I certainly don’t wish to align myself with such execrable anti-intellectual oafs as Dan Kois, who see the joyful act of great art mesmerizing a daredevil reader as something akin to eating cultural vegetables. I have enjoyed longass offerings from Marguerite Young, Samuel R. Delany, Laurence Sterne, Umberto Eco, Leo Tolstoy, Marcel Proust, Gertrude Stein, Mark Z. Danielewski (see The Familiar, the fifth of whose twenty-seven projected volumes will be released in October), and William T. Vollmann, but none of this could prepare me for the Wake. Finnegans Wake is worth the cerebral sweat if you are willing to sign up for the gym package, which involves knowing a little German, Gaelic, and French, familiarizing yourself with Vedic commentary, reading up on Giambattista Vico and Irish history, and doing your best to both encourage and resist the urge to plunge further. It is certainly difficult to argue against the Wake‘s enchanting use of language. But if cleverness — even from a bedazzling and often sprightly brainiac such as the Wake — involves adjusting one’s mind and heart entirely to that of the author, there is unquestionably a form of literary tyranny involved. On the other hand, the Wake, unlike any other book I have ever read, does test the limits of what we’re willing to know and how we can live with not knowing. It took me an embarrassingly long time to realize that reading the Wake aloud and letting much of the esoterica wash over you is the best way to approach it and to love it. The only sane option is to accept that you will never know all the answers, that Joyce is smarter than you, and to enjoy the experience.

The book baffled me, delighted me, and often drove me mad. I am not sure that I want to read it again, although who’s kidding whom? I probably will. Finnegans Wake often felt like some bright and charming friend with benefits who texts you at 2AM, asking if you’re down to hook up, only to make you its bottom and leave you cooking breakfast the next morning as your sexy lover basks in languor in your bed, singing pitch-perfect melodic ballads and cracking the smartest jokes in German. You sometimes wonder if you’re receiving any pleasure in a consummation that was supposed to be fun and spontaneous. Did I catch a case of the ten thunderclap words sprinkled throughout the book (Adam Harvey has kindly made YouTube videos on how to pronounce these) or merely the clap? These carnal metaphors for a book that essentially builds a dreamy narrative from an episode of sexual humiliation are no accident. Like Tinder, Finnegans Wake is a young man’s game. I would recommend attempting it before the age of forty, when there is still the time and the hunger to unravel the arcane wisecracking. Perhaps my mistake was reading this book on both sides of forty, with one foot steeped in bountiful possibility and the other more aware of mortality and the grave. My earlier plunges were largely felicitous. My subsequent belly flops were coated with the minor sting of missing out on something vital in the real world. And given the choice between staying home with the Wake or having a fun night out, it was a fairly easy decision. Many unreportable evenings later, I still believe I made the right choice. But how could any sensible reader not be wowed and enamored by Joyce’s uncompromising commitment to a difficult aesthetic?

All told, I worked my way through this intoxicating and frustrating melange in its full inimitable entirety twice, returning to the beginning of the Earwicker saga and then rereading other bits out of sequence, such as the mirthful and genuinely pleasurable showdown between Shaun and Shem in Book I, Chapter 7, which is among my favorite parts of the book. I can certainly follow the primary points of this “commodious vicus of recirculation,” even if the music of words usually triumphs over narrative coherence, which is often sandbagged altogether by later events such as Shaun’s ever-shifting identity. While I have largely enjoyed my journey, there were several points in which I cursed out Joyce for leading me down another rabbit hole. (The Dubliners’ low-key musical version of “The Ballad of Persse O’Reilly”? My weeks-long obsession with the Wellington Monument near the south of Dublin’s Phoenix Park? My futile attempts to learn Gaelic on Duolingo? My concern with ellipses and a surprising preoccupation with how reels of film turned upon encountering “the lazily eye of his lupis” and the diagram above? My efforts to reconcile Butt and Taff with Mutt and Jute and follow the batty Irish-American connections — extending to a few visiting American characters and the dual Dublin in Laurens County, Georgia, which Joyce cites?) It has left me to ponder in all this time if Finnegans Wake and its “futurist onehorse balletbattle pictures” were entirely worth understanding. It has left me feeling very sorry indeed for Joyce’s very patient benefactor, Harriet Shaw Weaver. The phrase “tough sledding” is an understatement.

Still, you can’t help but sympathize with a man who, buttressed by the wealth and the literary notoriety that came after Ulysses, saw his “Work in Progress” (early selections of the Wake published in journals) abandoned by many of his prominent supporters as he was going blind. Stanislaus Joyce had already become suspicious of Ulysses’s famously difficult “Oxen of the Sun” chapter and proceeded to condemn his brother further for the bits of the Wake that had appeared in the transatlantic review and would later tell Jim to his face that his “book of the night” was impenetrable. His benefactor Harriet Shaw Weaver went along with Joyce’s new direction for a while, with Joyce providing her with a pre-Campbell skeleton key on January 27, 1925, but later that year, some printers refused to set the type for these new excerpts. And two years later, Weaver would condemn the “Wholesale Safety Pun Factory” that Joyce had wrought. Ezra Pound, the putative paragon of poetic innovation, turned on Joyce, badmouthing this “circumambient peripherization.” H.G. Wells called it a dead end. (Did Rebecca West put a burr in Herbert’s ear?) In the face of declining love, Joyce’s remaining admirers published Our Exagmination Round His Factification for Incamination of Work in Progress, featuring the likes of Samuel Beckett, Frank Budgen, and William Carlos Williams defending Joyce’s new direction. Beckett would write:

Here form is content, content is form. You complain that this stuff is not written in English. It is not written at all. It is not to be read — or rather it is not only to be read. It is to be looked at and listened to. His writing is not about something: it is that something itself….When the sense is sleep, the words go to sleep….When the sense is dancing, the words dance.

It was believed that Joyce himself wrote one of the two letters of protest featured in this small volume. Certainly the voice in the letter is unmistakable:

You must not stink I am attempting to ridicul (de sac!) you or to be smart, but I am so disturd by my inhumility to onthorstand most of the impsolocations constrained in your work…

Joyce wanted to have it both ways. He both longed for recognition and was contemptuous of anyone who didn’t recognize his genius. The remarkably vanilla-minded Arnold Bennett, a troublesome gnat whom I wrote about earlier and about whom only boorish bores like Philip Hensher now have wet dreams, redoubled the troubling conventionalism that he had expressed toward Ulysses and continued to attack Joyce in the press, which inspired Joyce to send him up as Jute.

In reading the Wake, I have often wondered if I have understood anything at all, but I cannot abide D.H. Lawrence’s characterization of Joyce as “too terribly would-be and done-on purpose, utterly without spontaneity or real life.” For Joyce does not bore me. He merely maddens me with his demands on my time. I ken the puns in many tongues and can divine much of the history blurring into alluring verbs. Joyce’s wildly arrogant but nonetheless remarkable goal was to keep the professors arguing over enigmas and riddles for centuries. And with the Internet, he has succeeded. Finnegans Wiki is a vital companion when you first start reading and hope to know everything, until you realize that you never will. What is more important here is to feel the book, to take in its miasmic rushes and quell the urge to order mimosas when your noggin explodes from too much “folkenfather of familyans.”

In my early days of reading the Wake, I kept up a Tumblr of my notes. I filled a five-subject notebook with crazed and often indecipherable scribblings. And then I realized that to carry on like this was futile. It would be akin to resolving every unsolved mystery about life. The Wake contains almost as many tributaries.

Finnegans Wake is not a book to be read. It is a book to be lived, ideally with fellow travelers. So even if you have a very rich and active life, there’s no getting around the need to make time for it. Fortunately, it has inspired any number of marvelous online offerings. The incredible project Waywords and Meansigns has performed three different musical versions of the Wake. Listening to these interpretations helped lift my spirits when I wondered if I should give up entirely (the bluesy interpretation of the pearlagraph episode near the beginning of Book II came at a time when I was about to throw the book against the wall for the seventh time). I attended a meeting of The Finnegans Wake Society of New York, which not only led me to this invaluable annotative resource, but allowed me to understand that even the smartest and most literary people imaginable could not entirely make head or tail of Joyce and that any and all interpretive suggestions were fair game.

If Joyce wrote Ulysses for people to reconstruct Dublin brick by brick after the apocalypse, then Finnegans Wake was written to reconstruct the whole of human existence, albeit a realm teetering somewhere between reality and dreams. There are crazed Russian generals and discordance and recursiveness and twins and families and lust and religion and bawdiness and drinking and blasphemy, but, much like Molly Bloom’s beautifully baring “Penelope” monologue, the Wake ends with the singular motive voice of a woman:

First we feel. Then we fall. And let her rain now if she likes. Gently or strongly as she likes. Anyway let her rain for my time is come. I done me best when I was let. Thinking always if I go all goes. A hundred cares, a tithe of troubles and is there one who understands me? One in a thousand of years of the nights? All me life I have been lived among them but now they are becoming lothed to me. And I am lothing their little warm tricks. And lothing their mean cosy turns. And all the greedy gushes out through their small souls. And all the lazy leaks down over their brash bodies. How small it’s all! And me letting on to meself always. And lilting on all the time. I thought you were all glittering with the noblest of carriage. You’re only a bumpkin. I thought you the great in all things, in guilt and in glory. You’re but a puny. Home! My people were not their sort out beyond there so far as I can. For all the bold and bad and bleary they are blamed, the seahags. No! Nor for all our wild dances in all their wild din.

And then we read “A way a lone a last a loved a long the,” and feel and fall some more, and turn back to the beginning to finish the aborted sentence. And every time we run through the loop, there is laughter, marvel, something we missed, something that aggravates us, and something that makes the rest of literature feel irrelevant.

Next Up: Muriel Spark’s The Prime of Miss Jean Brodie!

West with the Night (Modern Library Nonfiction #85)

(This is the sixteenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: This Boy’s Life.)

She remains a bold and inspiring figure, a galvanizing tonic shimmering into the empty glass of a bleak political clime. She was bright and uncompromising and had piercingly beautiful eyes. She was a stratospheric human spire who stood tall and tough and resolute above a patriarchal sargasso. Three decades after her death, she really should be better known. Her name is Beryl Markham and this extraordinary woman has occupied my time and attentions for many months. She has even haunted my dreams. Forget merely persisting, which implies a life where one settles for the weaker hand. Beryl Markham existed, plowing through nearly every challenge presented to her with an exquisite equipoise as coolly resilient as the Black Lives Matter activist fearlessly approaching thuggish cops in a fluttering dress. I have now read her memoir West with the Night three times. There is a pretty good chance I will pore over its poetic commitment to fate and feats again before the year is up. If you are seeking ways to be braver, West with the Night is your guidebook.

She grew up in Kenya, became an expert horse trainer, and befriended the hunters of her adopted nation, where she smoothly communed with dangerous animals. For Markham, the wilderness was something to be welcomed rather than dreaded. Her natural panorama provided “silences that can speak” that were pregnant with natural wonder even while being sliced up by the cutting whirl of a propeller blade. But Markham believed in being present well before mindfulness became a widely adopted panacea. She cultivated a resilient and uncanny prescience as her instincts drove her to live with beasts and brethren of all types. It was a presence mastered through constant motion. “Never turn your back and never believe that an hour you remember is a better hour because it is dead,” wrote Markham when considering how to leave a place where one has lived and loved. This sentiment may no longer be possible in an era where one’s every word and move is monitored, exhumed by the easily outraged and the unadventurous for even the faintest malfeasance, but it is still worth holding close to one’s heart.

In her adult life, Markham carried on many scandalous affairs with prominent men (including Denys Finch Hatton, whom Markham wooed away from Karen Blixen, the Danish author best known for Out of Africa (to be chronicled in MLNF #58)) and fell almost by accident into a life commanding planes, often scouting landscapes from above for safari hunts. Yet Markham saw the butcherous brio for game as an act of impudence, even as she viewed elephant hunting as no “more brutal than ninety per cent of all other human activities.” This may seem a pessimistic observation, although Markham’s memoir doesn’t feel sour because it always considers the world holistically. At one point, Markham writes, “Nothing is more common than birth: a million creatures are born in the time it takes to turn this page, and another million die.” And this grander vantage point, which would certainly be arrived at by someone who viewed the earth so frequently from the sky, somehow renders Markham’s more brusque views pragmatic. She preferred the company of men to women, perhaps because her own mother abandoned her at a very young age. Yet I suspect that this fierce lifelong grudge was likely aligned with Markham’s drive to succeed with a carefully honed and almost effortlessly superhuman strength.

Markham endured pain and fear and discomfort without complaint, even when she was attacked by a lion, and somehow remained casual about her vivacious life, even after she became the first woman to fly solo across the Atlantic from east to west, without a radio and in a buckling plane, soldiering on through brutal winds and the jeers of those who believed she could not make the journey. But she did. Because her habitually adventurous temperament, which always recognized the importance of pushing forward with your gut, would not stop her. And if all this were not enough, Markham wrote a masterpiece so powerful that even the macho egotist Ernest Hemingway was forced to prostrate himself to editor Maxwell Perkins in a letter: “She has written so well, and marvelously well, that I was completely ashamed of myself as a writer.” (Alas, this did not stop Hemingway from undermining her in the same paragraph as “a high-grade bitch” and “very unpleasant” with his typically sexist belittlement, a passage conveniently elided from most citations. Still, there’s something immensely satisfying in knowing that the bloated and overly imitated impostor, who plundered Martha Gellhorn’s column inches in Collier’s because he couldn’t handle his own wife being a far superior journalist, could get knocked off his perch by a woman who simply lived.)

In considering the human relationship to animals, Markham writes, “You cannot discredit truth merely because legend has grown out of it.” She details the beauty of elephants going out of their way to hide their dead, dragging corpses well outside the gaze of ape-descended midgets and other predators. And there is something about Markham’s majestic perspective that causes one to reject popular legends, creating alternative stories about the earth that are rooted in the more reliable soil of intuitive and compassionate experience. For Markham, imagination arrived through adventure rather than dreams. She declares that she “seldom dreamed a dream worth dreaming again, or at least none worth recording,” yet the fatigue of flying does cause her to perceive a rock as “a crumpled plane or a mass of twisted metal.”

Yet this considerable literary accomplishment (to say nothing of Markham’s significant aviation achievements) has been sullied by allegations of plagiarism. It was a scandal that caused even The Rumpus’s Megan Mayhew Bergman to lose faith in Markham’s bravery. Raoul Schumacher, Markham’s third husband, was an alcoholic and a largely mediocre ghost writer who, much like Derek Stanford to Muriel Spark, could not seem to countenance that his life and work would never measure up to the woman he was with. Fragile male ego is a most curious phenomenon that one often finds when plunging into the lives of great women: not only are these women attracted to dissolute losers who usually fail to produce any noteworthy work of their own, but these men attempt to make up for their failings by installing or inventing themselves as collaborators, later claiming to be the indispensable muse or the true author all along, a claim advantageously announced only after a great woman has secured her success. Biographers and critics who write about these incidents years later often accept the male stories (one rarely encounters this in reverse), even when the details contain the distinct whiff of a football field mired in bullshit.

I was not satisfied with the superficial acceptance of these rumors by Wikipedia, Robert O’Brien, and Michiko Kakutani. So I took it upon myself to read two Markham biographies (Mary S. Lovell’s Straight on Till Morning and Errol Trzebinski’s The Lives of Beryl Markham), where I hoped that the sourcing would offer a more reliable explanation.

I discovered that Trzebinski was largely conjectural, distressingly close to the infamous Kitty Kelley with her scabrous insinuations (accusations of illiteracy, suggestions that Markham could not pronounce words), and that Lovell was by far the more doggedly reliable and diligent source. Trzebinski also waited until many of the players were dead before publishing her biography, which is rather convenient timing, given that she relies heavily on conversations she had with them for sources.

The problem with Schumacher’s claim is that one can’t easily resolve the issue by going to a handwritten manuscript. West with the Night’s manuscript was typed, dictated to Schumacher by Markham (see the above photo). The only photograph I have found (from the Lovell biography) shows Markham offering clear handwritten edits. So there is little physical evidence to suggest that Schumacher was the secret pilot. We have only his word for it and that of the friends he told, who include Scott O’Dell. Trzebinski, who is the main promulgator of these rumors, is slipshod with her sources, relying only upon a nebulous “Fox/Markham/Schumacher data” cluster (with references to “int. the late Scott O’Dell/James Fox, New York, April 1987” and “15/5/87” — presumably the same material drawn upon for James Fox’s “The Beryl Markham Mystery,” which appeared in the March 1987 issue of Vanity Fair, as well as a Scott O’Dell letter that was also published in the magazine) that fails to cite anything specific and relies on hearsay. When one factors in an implausible story that Trzebinski spread about her own son’s death that the capable detectives at Scotland Yard were unable to corroborate, along with Trzebinski’s insistence on camera in the 1986 documentary World Without Walls that only a woman could have written West with the Night, one gets the sense that Trzebinski is the more unreliable and gossipy biographer. And Lovell offers definitive evidence that casts doubt on Trzebinski’s notion that Markham was something of a starry-eyed cipher:

But this proof of editing by Raoul, which some see as evidence that Beryl might not have been the sole author of the book, surely proved only that he acted as editor. Indeed his editing may have been responsible for the minor errors such as the title arap appearing as Arab. Together with the Americanization of Beryl’s Anglicized spelling, such changes could well have been standard editorial conversions (by either Raoul or Lee Barker – Houghton Mifflin’s commissioning editor) for a work aimed primarily at an American readership.

The incorrect spelling of Swahili words has an obvious explanation. In all cases they were written as Beryl pronounced them. She had learned the language as a child from her African friends but had probably never given much thought to the spelling. Neither Raoul nor anyone at Houghton Mifflin would have known either way.

In his letter to Vanity Fair, and in two subsequent telephone conversations with me, Scott O’Dell claimed that after he introduced Beryl and Raoul “they disappeared and surfaced four months later,” when Raoul told him that Beryl had written a memoir and asked what they should do with it. This is at odds with the surviving correspondence and other archived material which proves that the book was in production from early 1941 to January 1942, and that almost from the start Beryl was in contact with Lee Barker of Houghton Mifflin.

When Raoul told his friend that it was he who had written the book, could the explanation not be that he was embittered by his own inability to write without Beryl’s inspiration? That he exaggerated his editorial assistance into authorship to cover his own lack of words as a writer?

From the series of letters between Beryl and Houghton Mifflin, it is clear that Beryl had sent regular batches of work to the publishers before Raoul came into the picture. As explained earlier, Dr. Warren Austin lived in the Bahamas from 1942 to 1944, was physician to HRH the Duke of Windsor and became friends with Major Gray Phillips. Subsequently Dr. Austin lived for a while with Beryl and Raoul whilst he was looking for a house in Santa Barbara. The two often discussed their mutual connections in Raoul’s presence. Dr. Austin is certain that Raoul had never visited the Bahamas, reasoning that it would certainly have been mentioned during these conversations if he had. This speaks for itself. If Raoul was not even present when such a significant quantity of work was produced, then that part – at the very least – must have been written by Beryl.

Lovell’s supportive claims have not gone without challenge. James Fox claimed in The Spectator that he had seen “photostated documents, from the trunk since apparently removed as ‘souvenirs’ and thus not available to Lovell, which show that Schumacher took part in the earliest planning of the contents and the draft outline for the publisher and show whole passages written by Schumacher in handwriting.” But even he is forced to walk the claim back and concede that this “proves nothing in terms of authorship.” Since Fox is so fixated on “seeing” evidence rather than producing it, he may as well declare that he visited Alaska and could see Russia from his Airbnb or that he once observed giant six-legged wombats flying from the deliquescent soup he had for supper. If this is the “Fox/Markham/Schumacher data” that Trzebinski relied upon, then the plagiarism charge is poor scholarship and poor journalism indeed.

So I think it’s safe for us to accept Markham’s authorship unless something provable and concrete appears, and still justifiably admire a woman who caused Hemingway to stop in his tracks, a woman who outmatched him in insight and words, a woman – who like many incredible women – was belittled by a sloppy, gossip-peddling, and opportunistic biographer looking to make a name for herself (and the puff piece hack who enabled her) rather than providing us with genuine and deserved insight into a truly remarkable figure of the 20th century.

Next Up: Neil Sheehan’s A Bright Shining Lie!

A Mathematician’s Apology (Modern Library Nonfiction #87)

(This is the fourteenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Six Easy Pieces.)

Clocking in at a mere ninety pages in very large type, G.H. Hardy’s A Mathematician’s Apology is that rare canapé plucked from a small salver between all the other three-course meals and marathon banquets in the Modern Library series. It is a book so modest that you could probably read it in its entirety while waiting for the latest Windows 10 update to install. And what a bleak and despondent volume it turned out to be! I read the book twice and, each time I finished it, I wanted to seek out some chalk-scrawling magician and offer a hug.

G.H. Hardy was a robust mathematician just over the age of sixty who had made some serious contributions to number theory and population genetics. He was a cricket-loving man who had brought the Indian autodidact Srinivasa Ramanujan to academic prominence by personally vouching for and mentoring him. You would think that a highly accomplished dude who went about the world with such bountiful and generous energies would be able to ride out his eccentric enthusiasm into his autumn years. But in 1939, Hardy survived a heart attack and felt that he was as useless as an ashtray on a motorcycle, possessing nothing much in the way of nimble acumen or originality. So he decided to memorialize his depressing thoughts about “useful” contributions to knowledge in A Mathematician’s Apology (in one of the book’s most stupendous understatements, Hardy observed that “my apology is bound to be to some extent egotistical”), and asked whether mathematics, the field that he had entered into because he “wanted to beat other boys, and this seemed to be the way in which I could do so most decisively,” was worthwhile.

You can probably guess how it all turned out:

It is indeed rather astonishing how little practical value scientific knowledge has for ordinary man, how dull and commonplace such of it as has value is, and how its value seems almost to vary inversely to reputed utility….We live either by rule of thumb or other people’s professional knowledge.

If only Hardy could have lived about sixty more years to discover the 21st century thinker’s parasitic relationship to Google and Wikipedia! The question is whether Hardy is right to be this cynical. While snidely observing that “It is quite true that most people can do nothing well,” he isn’t a total sourpuss. He writes, “A man’s first duty, a young man’s at any rate, is to be ambitious,” and points out that ambition has been “the driving force behind nearly all the best work of the world.” What he fails to see, however, is that youthful ambition, whether in a writer or a scientist, often morphs into a set of routines that become second nature. At a certain point, a person becomes comfortable enough with himself to simply go on with his work, quietly evolving, where the ambition becomes more covert and subconscious and mysterious.

Hardy never quite confronts what it is about youth that frightens him, but he is driven by a need to justify his work and his existence, pointing to two reasons why people do what they do: (1) they work at something because they know they can do it well and (2) they work at something because a particular vocation or specialty came their way. But this seems too pat and Gladwellian to be a persuasive dichotomy. It doesn’t really account for the journey we all must face over why one does something, which generally includes the vital people you meet at certain points in your life who steer you in certain directions. Either they recognize some talent in you and give you a leg up or they are smart and generous enough to recognize that one essential part of human duty is to help others find their way, to seek out your people — ideally a group of eclectic and vastly differing perspectives — and to work with each other to do the best damn work and live the best damn lives you can. Because what’s the point of geeking out about Fermat’s “two squares” theorem, which really is, as Hardy observes, a nifty piece of pure mathematical beauty, if you can’t share it with others?
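Hardy’s example rewards a moment of play. The theorem says that an odd prime can be written as a sum of two squares exactly when it leaves remainder 1 on division by 4. A brute-force sketch (my own illustrative code, nothing drawn from Hardy’s text) lets you watch the pattern hold:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division, fine for the small primes tested here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def two_squares(p: int):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None

# Primes of the form 4k + 1 always decompose; primes of the form 4k + 3 never do.
for p in [5, 13, 17, 29, 97]:
    assert is_prime(p) and p % 4 == 1 and two_squares(p) is not None
for p in [3, 7, 11, 19, 23]:
    assert is_prime(p) and p % 4 == 3 and two_squares(p) is None
```

Run it and 5 comes back as 1² + 2² and 97 as 4² + 9², while 7 and 19 stubbornly refuse, just as Fermat promised — exactly the sort of small marvel that is more fun shared than hoarded.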

But let’s return to Hardy’s fixation on youth. Hardy makes the claim that “mathematics, more than any other art or science, is a young man’s game,” yet this staggering statement is easily debunked by such late bloomers as prime number ninja Yitang Zhang and Andrew Wiles, who solved Fermat’s Last Theorem at the age of 41. Even in Hardy’s own time, Henri Poincaré was making innovations to topology and Lorentz transformations well into middle age. (And Hardy explicitly references Poincaré in § 26 of his Apology. So it’s not like he didn’t know!) Perhaps some of the more recent late life contributions have much to do with forty now being the new thirty (or even the new twenty among a certain Jaguar-buying midlife crisis type) and many men in Hardy’s time believing themselves to be superannuated in body and soul around the age of thirty-five, but it does point to the likelihood that Hardy’s sentiments were less the result of serious thinking and more the result of crippling depression.

Where Richard Feynman saw chess as a happy metaphor for the universe, “a great game played by the gods” in which we humans are mere observers who “do not know what the rules of the game are,” merely allowed to watch the playing (and yet find marvel in this all the same), Hardy believed that any chess problem was “simply an exercise in pure mathematics…and everyone who calls a problem ‘beautiful’ is applauding mathematical beauty, even if it is a beauty of a comparatively lowly kind.” Hardy was so sour that he compared a chess problem to a newspaper puzzle, claiming that it merely offered an “intellectual kick” for the clueless educated rabble. As someone who enjoys solving the Sunday New York Times crossword in full and a good chess game (it’s the street players I have learned the most from; for they often have the boldest and most original moves), I can’t really argue against Hardy’s claim that such pastimes are “trivial” or “unimportant” in the grand scheme of things. But Hardy seems unable to remember the possibly apocryphal tale of Archimedes discovering water displacement in the bathtub or the more reliable story of Otto Loewi’s dream leading the great Nobel-winning physiologist to discover that nerve impulses are transmitted chemically. Great minds often need to be restfully thinking or active on other fronts in order to come up with significant innovations. And while Hardy may claim that “no chess problem has ever affected the development of scientific thought,” I feel compelled to note that Pythagoras played the lyre (and even inspired a form of tuning), Newton had his meandering apple moment, and Einstein enjoyed hiking and sailing. These were undoubtedly “trivial” practices by Hardy’s austere standards, but would these great men have given us their contributions if they hadn’t had such downtime?

It’s a bit gobsmacking that Hardy never mentions how Loewi was fired up by his dreams. He seems only to see value in Morpheus’s prophecies if they are dark and melancholic:

I can remember Bertrand Russell telling me of a horrible dream. He was in the top floor of the University Library, about A.D. 2100. A library assistant was going round the shelves carrying an enormous bucket, taking down book after book, glancing at them, restoring them to the shelves or dumping them into the bucket. At last he came to three large volumes which Russell could recognize as the last surviving copy of Principia mathematica. He took down one of the volumes, turned over a few pages, seemed puzzled for a moment by the curious symbolism, closed the volume, balanced it in his hand and hesitated….

One of an author’s worst nightmares is to have his work rendered instantly obsolescent not long after his death, even though there is a very strong likelihood that, in about 150 years, few people will care about the majority of books published today. (Hell, few people care about anything I have to write today, much less this insane Modern Library project. There is a high probability that I will be dead in five decades and that nobody will read the many millions of words or listen to the countless hours of radio I have put out into the universe. It may seem pessimistic to consider this salient truth, but, if anything, it motivates me to make as much as I can in the time I have, which I suppose is an egotistical and foolishly optimistic approach. But what else can one do? Bury one’s head in the sand, smoke endless bowls of pot, wolf down giant bags of Cheetos, and binge-watch insipid television that will also not be remembered?) You can either accept this reality, reach the few people you can, and find happiness and gratitude in doing so. Or you can deny it, let your ego get in the way of your achievements, embrace supererogatory anxieties, and spend too much time feeling needlessly morose.

I suppose that in articulating this common neurosis, Hardy is performing a service. He seems to relish “mathematical fame,” which he calls “one of the soundest and steadiest of investments.” Yet fame is a piss-poor reason to go about making art or formulating theorems. Most of the contributions to human advancement are rendered invisible. These are often small yet subtly influential rivulets that unknowingly pass into the great river that future generations will wade in. We fight for virtues and rigor and intelligence and truth and justice and fairness and equality because this will be the legacy that our children and grandchildren will latch onto. And we often make unknowing waves. Would we, for example, be enjoying Hamilton today if Lin-Manuel Miranda’s school bus driver had not drilled him with Geto Boys lyrics? And if we abandon those standards, if we gainsay the “trivial” inspirations that cause others to offer their greatness, then we say to the next generation, who are probably not going to be listening to us, that fat, drunk, and stupid is the absolute way to go through life, son.

A chair may be a collection of whirling electrons, or an idea in the mind of God: each of these accounts of it may have its merits, but neither conforms at all closely to the suggestions of common sense.

This is Hardy suggesting some church and state-like separation between pure and applied mathematics. He sees physics as fitting into some idealistic philosophy while identifying pure mathematics as “a rock on which all idealism founders.” But might not one fully inhabit common sense if the chair exists in some continuum beyond this either-or proposition? Is not the chair’s perceptive totality worth pursuing?

It is at this point in the book where Hardy’s argument really heads south and he makes an astonishingly wrongheaded claim, one whose swift refutation he could not have foreseen: “Real mathematics has no effects on war.” This was only a few years before Los Alamos was to prove him wrong. And that’s not all:

It can be maintained that modern warfare is less horrible than the warfare of pre-scientific times; that bombs are probably more merciful than bayonets; that lachrymatory gas and mustard gas are perhaps the most humane weapons yet devised by military science; and that the orthodox view rests solely on loose-thinking sentimentalism.

Oh Hardy! Hiroshima, Nagasaki, Agent Orange, Nick Ut’s famous napalm girl photo from Vietnam, Saddam Hussein’s chemical gas massacre in Halabja, the use of Sarin-spreading rockets in Syria. Not merciful. Not humane. And nothing to be sentimental about!

Nevertheless, I was grateful to argue with this book on my second read, which occurred a little more than two weeks after the shocking 2016 presidential election. I had thought myself largely divested of hope and optimism, with the barrage of headlines and frightening appointments (and even Trump’s most recent Taiwan call) doing nothing to summon my natural spirits. But Hardy did force me to engage with his points. And his book, while possessing many flawed arguments, is nevertheless a fascinating insight into a man who gave up: a worthwhile and emotionally true Rorschach test you may wish to try if you need to remind yourself why you’re still doing what you’re doing.

Next Up: Tobias Wolff’s This Boy’s Life!

Six Easy Pieces (Modern Library Nonfiction #88)

(This is the thirteenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Pilgrim at Tinker Creek.)

Richard Feynman, exuberant Nobel laureate and formidable quantum mechanics man, may have been energetic in his lectures and innovatively performative in the classroom, but I’m not sure he was quite the great teacher that many have pegged him to be. James Gleick’s biography Genius informs us that students dropped out of his high-octane, info-rich undergraduate physics classes at a remarkable rate, replaced by Caltech faculty members and grad students who flocked to the Queens-born superstar until they made up most of the room, much as baryons make up the visible matter of the universe. The extent to which Feynman was aware of this cosmic shift has been disputed by his chroniclers, but it is important to be aware of this shortcoming, especially if you’re bold enough to dive into the famed three-volume Feynman Lectures on Physics, which are all thankfully available online. Six Easy Pieces represents an abridged version of Feynman’s full pedagogical oeuvre. And even though the many YouTube videos of Feynman reveal an undeniably magnetic and indefatigably passionate man of science who must have been an incredible dynamo to experience in person, one wonders whether barraging a hot room of young nervous twentysomethings with hastily delivered information is the right way to popularize science, much less inspire a formidable army of physicists.

Watch even a few minutes of Feynman firing on all his robust cylinders and it becomes glaringly apparent how difficult it is to contend with Feynman’s teaching legacy in book form. One wonders why the Modern Library nonfiction judges, who were keen to unknowingly bombard this devoted reader with such massive multivolume works as The Golden Bough, Dumas Malone’s Jefferson and His Time, and Principia Mathematica, didn’t give this spot to the full three-volume Lectures. Did they view Feynman’s complete lesson plan as a failure?

Judging from the sextet that I sampled in this deceptively slim volume, I would say that, while Feynman was undeniably brilliant, he was, like many geniuses, someone who often got lost within his own metaphors. While his analogy of two corks floating in a pool of water, with one cork jiggling in place to create motion in the pool that causes indirect motion for the other cork, is a tremendously useful method of conveying the “unseen” waves of the electromagnetic field (one that galvanized me to do the same in a saucepan after I had finished two bottles of wine over a week and a half), he is not nearly as on-the-nose with his other analogies. The weakest lesson in the book, “Conservation of Energy,” trots out what seems to be a reliably populist metaphor with a child named “Dennis the Menace” playing with 28 blocks, somehow always ending up with 28 of these at the end of the day. Because Feynman wants to illustrate conservational constants, he shoehorns another element into the narrative whereby Dennis’s mother is, for no apparent reason, not allowed to open the toy box to count the blocks inside and thus must calculate how many reside within. The mother has conveniently weighed the box at some unspecified time in advance, back when it contained all 28 blocks.
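Stripped of Dennis and his mother, the analogy is just a conserved-quantity ledger: count what you can see, infer the rest from a proxy measurement, and check that the sum never changes. A minimal sketch in Python (the block and box weights here are hypothetical stand-ins, not necessarily Feynman’s own figures):

```python
# A toy version of the bookkeeping behind Feynman's blocks analogy:
# a conserved quantity can be tallied even when part of it is hidden,
# provided you can measure something (here, weight) that stands in for it.
# The specific weights below are assumed for illustration.

BLOCK_WEIGHT = 3.0       # ounces per block (assumed)
EMPTY_BOX_WEIGHT = 16.0  # ounces; the mother weighed the empty box in advance (assumed)

def total_blocks(blocks_seen: int, box_weight: float) -> float:
    """Visible blocks plus blocks inferred to be hiding in the closed box."""
    hidden = (box_weight - EMPTY_BOX_WEIGHT) / BLOCK_WEIGHT
    return blocks_seen + hidden

# However Dennis hides the blocks, the total never changes:
print(total_blocks(28, 16.0))            # all 28 in view, box empty -> 28.0
print(total_blocks(20, 16.0 + 8 * 3.0))  # 8 blocks stashed in the box -> 28.0
```

Energy works the same way in Feynman’s telling: the “blocks” come in many forms, some of them invisible, but the audited total stays constant.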

This is bad teaching, in large part because it is bad storytelling that makes no sense. I became less interested in conservation of energy, with Feynman’s convoluted parallel clearly becoming more trouble than it was worth, and more interested in knowing why the mother was so fixated on remembering the number of blocks. Was she truly so starved for activity in her life that she spent all day at work avoiding all the juicy water cooler gossip about co-workers, much less kvetching about the boss, so that she might scheme a plan to at long last show her son that she would always know the weight of a single block? When Dennis showed resistance to opening the toy box, why didn’t the mother stand her ground and tell him to buzz off and stream an episode of Project Mc²?

Yet for all these defects in method, there is an indisputable poetic beauty in the way in which Feynman reminds us that we live in a vast world composed of limitless particles, a world in which we still aren’t aware of all the rules and in which even the particles contained within solids remain “fixed” in motion. Our universe is always moving, even when we can’t see it or completely comprehend it. Feynman is quick to observe throughout his lessons that “The test of all knowledge is experiment,” which again points to my theory that Feynman’s teachings, often accentuated by experiment, were probably better experienced than read. Nevertheless, even in book form, it is truly awe-inspiring to understand that we still cannot accurately predict the precise mass, form, and force of all the cascading droplets from a mighty river once it hits the precipice of a waterfall. Such mysteries capture our imagination and, when Feynman is committed to encouraging our inventiveness through open and clear-eyed examples from our world, he is very much on point. Thanks in part to Feynman reminding me just how little we silly humans actually know, I began to feel my heart open more for Tycho Brahe, that poor Dane who spent many years of his life refining Copernicus’s model with observations worked out entirely without a telescope. Those invaluable measurements allowed Johannes Kepler to sift through the data and forge the laws of elliptical planetary orbits that all contemporary astronomers now rely on to determine where a planet might be in the sky on any given night of the year. Heisenberg’s uncertainty principle hasn’t even been around a century, and it’s nothing less than astounding to consider how our great-grandparents had a completely different understanding of atoms and motion in their early lifetime than we do today.

Feynman did have me wanting to know more about the origins of many scientific discoveries, causing me to contemplate how each and every dawning realization altered human existence (an inevitable buildup for Thomas Kuhn and paradigms, which I will take up in ML Nonfiction #69). But unlike such contemporary scientists as Neil deGrasse Tyson, Alan Guth, or Brian Greene, Feynman did not especially inspire me to plunge broadly into my own experiments or make any further attempts to grapple with physics-based complexities. This may very well be more my failing than Feynman’s, but there shall be many more stabs at science as we carry on with this massive reading endeavor!

Next Up: G.H. Hardy’s A Mathematician’s Apology!

Pilgrim at Tinker Creek (Modern Library Nonfiction #89)

(This is the twelfth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Golden Bough.)

“Either this world, my mother, is a monster, or I myself am a freak.” — Annie Dillard, Pilgrim at Tinker Creek

I was a sliver-thin, stupefyingly shy, and very excitable boy who disguised his bruises under the long sleeves of his shirt not long before the age of five. I was also a freak.

I had two maps pinned to the wall of my drafty bedroom, which had been hastily constructed into the east edge of the garage in a house painted pink (now turquoise, according to Google Maps). The first map was of Tolkien’s Middle-earth, in which I followed the quests of Bilbo and Frodo by finger as I wrapped my precocious, word-happy head around sentences that I’d secretly study from the trilogy I had purloined from the living room, a well-thumbed set that I was careful to put back on the shelves before my volatile and often sour father returned home from the chemical plant. In some of his rare calm moments, my father read aloud from The Lord of the Rings if he wasn’t too drunk, irascible, or violent. His voice led me to imagine Shelob’s thick spidery thistles and Smeagol’s slithering corpus, and to crack open my eyes the next morning for any other surprises I might divine in my daily journeys to school. The second map was of Santa Clara County, a very real region that everyone now knows as Silicon Valley but that used to be a sweeping swath of working and lower middle-class domiciles. This was one of several dozen free maps of Northern California that I had procured from AAA with my mother’s help. One of the nice perks of being an AAA member was the ample sample of rectangular geographical foldouts. I swiftly memorized all of the streets, held spellbound by the floral and butterfly patterns of freeway intersections seen from a majestic bird’s-eye view in an errant illustrated sky. My mother became easily lost while driving, and I knew the avenues and the freeways in more than a dozen counties so well that I could always provide an easy cure for her confusion.
It is a wonder that I never ended up working as a cab driver, although my spatial acumen has remained so keen over the years that, to this day, I can still pinpoint the precise angle at which you need to slide a thick unruly couch into the tricky recesses of a small Euclidean-angled apartment even when I am completely exhausted.

These two maps seemed to be the apotheosis of cartographic art at the time, filling me with joy and wonder and possibility. They helped me cope with the many problems I lived with at home. I understood that there were other regions beyond my bedroom where I could wander in peace, where I could meet kinder people or take in the beatific comforts of a soothing lake (Vasona Lake, just west of Highway 17 in Los Gatos, had a little railroad spiraling around its southern tip and was my real-life counterpart to Lake Evendim), where the draw of Rivendell’s elvish population or the thrill of smoky Smaug stewing inside the Lonely Mountain collided against visions of imagined mountain dwellers I might meet somewhere within the greens and browns of Santa Teresa Hills and the majestic observatories staring brazenly into the cosmos at the end of uphill winding roads. I would soon start exploring the world I had espied from my improvised bedroom study on my bike, pedaling unfathomable miles into vicinities I had only dreamed about, always seeking parallels to what the Oxford professor had whipped up. I once ventured as far south as Gilroy down the Monterey Highway, which Google Maps now informs me is a thirty-six mile round trip, because my neglectful parents never kept tabs on how long I was out of the house or where I was going. They didn’t seem to care. As shameful as this was, I’m glad they didn’t. I needed an uncanny dominion, a territory to flesh out, in order to stay happy, humble, and alive.

The maps opened up my always hungry eyes to books, which contained equally bountiful spaces devoted to the real and the imaginary, unspooling further marks and points for me to find in the palpable world and, most importantly, within my heart. I always held onto this strange reverence for place to beat back the sadness after serving as my father’s punching bag. To this day, I remain an outlier, a nomad, a lifelong exile, a wanderer even as I sit still, a renegade hated for what people think I am, a black sheep who will never belong no matter how kind I am. I won’t make the mistake of painting myself as some virtuous paragon, but I’ve become so accustomed to being condemned on illusory grounds, to having all-too-common cruelties inflicted upon me (such as the starry-eyed bourgie Burning Man sybarite I recently opened my heart to, who proceeded to deride the city that I love, along with the perceived deficiencies of my hard-won apartment, this after I had told her tales, not easily summoned, about what it was like to be rootless and without family and how home and togetherness remain sensitive subjects for me) that the limitless marvels of the universe parked in my back pocket or swiftly summoned from my shelves or my constant peregrinations remain reliable, life-affirming balms that help heal the scars and render the wounds invisible. Heartbreak and its accompanying gang of thugs often feel like a mob bashing in your ventricles in a devastatingly distinct way, even though the great cosmic joke is that everyone experiences it and we have to love anyway.

So when Annie Dillard’s poetic masterpiece Pilgrim at Tinker Creek entered my reading life, its ebullient commitment to finding grace and gratitude in a monstrous world reminded me that seeing and perceiving and delving and gaping awestruck at Mother Earth’s endless glories is one of the best survival skills you can cultivate, and one that I may have accidentally stumbled upon. As I said, I’m a freak. But Dillard is one too. And there’s a good chance you may walk away from this book, which I highly urge you to read, feeling a kinship with Dillard comparable to my own. Even if you already have a formidable arsenal of boundless curiosity ready to be summoned at a moment’s notice, this shining 1974 volume remains vital and indispensable and will stir your soul for the better, whether you’re happy or sad. Near the end of a disastrous year, we need these inspirational moments now more than ever.

* * *

“Our life is a faint tracing on the surface of mystery.” – Pilgrim at Tinker Creek

Annie Dillard was only 28 when she wrote this stunning 20th century answer to Thoreau (the subject of her master’s thesis), which is both a perspicacious journal of journeying through the immediately accessible wild near her bucolic Southwestern Virginia perch and a daringly honest entreaty for consciousness and connection. Dillard’s worldview is so winningly inclusive that she can find wonder in such savage tableaux as a headless praying mantis clutching onto its mate or the larval creatures contained within a rock barnacle. The Washington Post claimed not long after Pilgrim’s publication that the book was “selling so well on the West Coast and hipsters figure Annie Dillard’s some kind of female Castaneda, sitting up on Dead Man’s Mountain smoking mandrake roots and looking for Holes in the Horizon her guru said were there.” But Pilgrim, inspired in part by Colette’s Break of Day, is far from New Age nonsense. The book’s wise and erudite celebration of nature and spirituality was open and inspiring enough to charm even this urban-based secular humanist, who desperately needed a pick-me-up and a mandate to rejoin the world after a rapid-fire series of personal and political and romantic and artistic setbacks that occurred during the last two weeks.

For all of the book’s concerns with divinity, or what Dillard identifies as “a divine power that exists in a particular place, or that travels about over the face of the earth as a man might wander,” explicit gods don’t enter this meditation until a little under halfway through the book, where she jokes that gods are often found on mountaintops and observes that God is an igniter as well as a destroyer, one that seeks invisibility for cover. And as someone who does not believe in a god and who would rather deposit his faith in imaginative storytelling and myth than in the superstitions of religious ritual, I could nevertheless feel and accept the spiritual idea of being emotionally vulnerable while traversing some majestic terrain. Or as Pascal wrote in Pensées 584 (quoted in part by Dillard), “God being thus hidden, every religion which does not affirm that God is hidden, is not true, and every religion which does not give the reason of it, is not instructive.”

Much of this awe comes through the humility of perceiving, of devoting yourself to sussing out every conceivable kernel that might present itself and uplift you on any given day and using this as the basis to push beyond the blinkered cage of your own self-consciousness. Dillard uses a metaphor of loose change throughout Pilgrim that neatly encapsulates this sentiment:

It is dire poverty indeed when a man is so malnourished and fatigued that he won’t stoop to pick up a penny. But if you cultivate a healthy poverty and simplicity, so that finding a penny will literally make your day, then, since the world is in fact planted in pennies, you have with your poverty bought a lifetime of days. It is that simple. What you see is what you get.

This is not too far removed from Thoreau’s faith in seeds: “Convince me that you have a seed there, and I am prepared to expect wonders.” The smug and insufferable Kathryn Schulzes of our world gleefully misread this great tradition of discovering possibilities in the small as arrogance, little realizing how their own blind and unimaginative hubris glows with crass Conde Nast entitlement as they fail to observe that Thoreau and Dillard were also acknowledging the ineluctable force of a bigger and fiercer world that will carry on with formidable complexity long after our dead bodies push up daisies. Faced with the choice of sustaining a sour Schulz-like apostasy or receiving every living day as a gift, I’d rather risk the arrogance of dreaming from the collected riches of what I have and what I can give than the gutless timidity of a prescriptive rigidity that fails to consider that we are all steeped in foolish and inconsistent behavior which, in the grand scheme of things, is ultimately insignificant.

Dillard is guided just as much by Heisenberg’s uncertainty principle as she is by religious and philosophical texts. The famous 1927 scientific law, which articulates how you can never know a particle’s position and momentum at the same time, is very much comparable to chasing down some hidden deity or contending with some experiential palpitations when you understand that there simply is no answer, for one can feel but never fully comprehend the totality in a skirmish with Nature. It accounts for Dillard frequently noting that the towhee chirping on a treetop or the muskrat she observes chewing grass on a bank for forty minutes never see her. In seeing these amazing creatures, completely oblivious to her own human vagaries, carry on with their lives, Dillard reminds us that this is very much the state of Nature, whether human or animal. If it is indeed arrogance to find awe and humility in this state of affairs, as Dillard and Thoreau clearly both did, then one’s every breath may as well be a Napoleonic puff of the chest.
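For the record, the 1927 relation constrains the product of two spreads rather than either measurement alone. In its standard modern form (a textbook statement, not Dillard’s own):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Here Δx is the uncertainty in a particle’s position, Δp the uncertainty in its momentum, and ħ the reduced Planck constant: the more precisely you pin down where the particle is, the blurrier your knowledge of where it is going.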

Dillard is also smart and expansive enough to show us that, no matter where we reside, we are fated to brush up against the feral. She points to how arboreal enthusiasts discovered a fifteen-foot ailanthus tree growing from a lower Bronx garage and how New York must spend countless dollars each year to rid its underground water pipes of roots. Such realities are often contended with out of sight and out of mind, even as the New York apartment dweller battles cockroaches, but the reminder is another useful point for why we must always find the pennies and dare to dream and wander and take in, no matter what part of the nation we dwell in.

Another refreshing aspect of Pilgrim is the way in which Dillard confronts her own horror at fecundity. Yes, even this graceful ruminator has the decency to confess her hangups about the unsettling rapidity with which moths lay their eggs in vast droves. She stops short of truly confronting “the pressure of birth and growth” that appalls her, shifting to plants as a way of evading animals and then retreating back to the blood-pumping phylum to take in blood flukes and aphid reproduction more as panorama than as something to be felt. This volte-face isn’t entirely satisfying. On the other hand, Dillard is also bold enough to scoop up a cup of duck-pond water and peer at monostyla under a microscope. What this tells us is that there are clear limits to how far any of us are willing to delve, yet I cannot find it within me to chide Dillard too harshly for a journey she was not quite willing to take, for this is an honest and heartfelt chronicle.

While I’ve probably been “arrogant” in retreating at length to my past in an effort to articulate how Dillard’s book so moved me, I would say that Pilgrim at Tinker Creek represents a third map for my adult years. It is a true work of art that I am happy to pin to the walls of my mind, which seem more reliable than any childhood bedroom. This book has caused me to wonder why I have ignored so much and has demanded that I open myself up to any penny I could potentially cherish and ponder what undiscoverable terrain I might deign to take in as I continue to walk this earth. I do not believe in a god, but I do feel with all my heart that one compelling reason to live is to fearlessly approach all that remains hidden. There is no way that you’ll ever know or find everything, but Dillard’s magnificent volume certainly gives you many good reasons to try.

Next Up: Richard Feynman’s Six Easy Pieces!

The Golden Bough (Modern Library Nonfiction #90)

(This is the eleventh entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Shadow and Act.)

It is somewhat difficult to obtain a decent print edition of the Third Edition of The Golden Bough without getting fleeced by some squirrely operator working out of a shady storage unit in the middle of nowhere. For nobody seems to read the whole enchilada anymore. This is hardly surprising in an age, abundantly cemented last week, when most people are more inclined to celebrate regressive stupidity, melting their minds in any self-immolating pastime rather than opening a book. But I was able to find an affordable edition with the help of a British antiquarian. I had no idea what I was in for, but some initial research suggested very strongly that I should not settle for the abridged edition that is much easier to acquire. Certainly the sheer time-sucking insanity of the Modern Library Reading Challenge, one of the many dependable bastions I have left in a bleak epoch, demands that I go the distance on all entries, even if it means becoming ensnared by a particular title for several weeks, often answering texts from pals checking in on me with fun little snippets from Estonian folklore quebraditaing somewhere within a “Fine. How are you?” Such is the life of a book-loving eccentric with a ridiculous self-imposed mandate that involves refusing to let terrible setbacks get in the way of happy rumination. We find hope and courage and new ideas and fierce fortitude in remembering that not a single authoritarian entity or pernicious individual can ever crush the possibilities contained within our minds, our hearts, and our souls.

The thirteen-volume set landed at my feet with a promising thud after a month-long voyage by boat across the Atlantic Ocean; it occupied my reading time for many months and proceeded to change my life. I realize that such a claim may sound trite in light of the devastating and life-altering results of the 2016 U.S. presidential election, but, if there’s anything we can learn from Stefan Zweig’s suicide, we must never forget the importance of patience, of working long and hard to fight and endure while steering humanity’s promising galleon back on the right course even as we look to culture’s power to sustain our spirits in the darkest times.1

James George Frazer’s The Golden Bough proved so galvanizing that I began to marvel more at trees and desired to spend more time beneath their magnificent branches. I began picking up the junk that other New Yorkers had so thoughtlessly deposited under their glorious leafy crowns. I began naming some of the trees I liked, saying “Hello, Balder!” (styled after the Norse god) to a beloved maple near the southwestern edge of Central Park. I started paying closer attention to the modest superstitious rituals that most of us take for granted, wanting to know why we feared black cats crossing our path (it started in the 1560s and originated with the idea that black cats were witches who had transformed their corporeal state) or worried ourselves into years of bad luck from walking under a ladder (it goes back to the Egyptians, who believed that walking under any pyramid would attenuate its mystical power). And, of course, I began to wonder if other superstitious rituals, such as voting for a vicious sociopathic demagogue to make a nation “great” again, originated from similar irrational fears. Despite being a secular humanist, I was stunned to discover that I had modest pagan proclivities and started to ask friends to engage in rather goofball offshoots of established rites in a somewhat libertine manner, much of which is unreportable. And if you think such a reaction is idiosyncratic (and it is), consider the strange takeaway that D.H. Lawrence memorialized in a December 8, 1915 letter:

I have been reading Frazer’s Golden Bough and Totemism and Exogamy. Now I am convinced of what I believed when I was about twenty — that there is another seat of consciousness than the brain and the nerve system: there is a blood-consciousness which exists in us independently of the ordinary mental consciousness, which depends on the eye as its source or connector. There is the blood-consciousness, with the sexual connection, holding the same relation as the eye, in seeing, holds to the mental consciousness. One lives, knows, and has one’s being in blood, without any references to nerves and brain. This is one half of life, belonging to the darkness. And the tragedy of this our life, and of your life, is that the mental and nerve consciousness exerts a tyranny over the blood-consciousness, and that your will has gone completely over to the mental consciousness, and is engaged with the destruction of your blood-being or blood-consciousness, the final liberating of the one, which is only death in result.

When I finished Frazer’s final volume, I certainly wasn’t prepared to suggest that any part of my consciousness was tyrannizing the others because of some eternal human connection to myths and rites enacted to answer and make sense of the presently inexplicable. But Lawrence did have a point about the way humans are naturally drawn to unusual ceremonies and celebrations that go well beyond Carolina Panthers coach Ron Rivera wearing the same shoes on game days, with the impulse often defying any steely rationalism we may use to make sense of our inherently animalistic nature, which any glance at a newspaper reveals to be quite frighteningly present.


More importantly, The Golden Bough changed everything I thought I knew about storytelling and myth. It forced me to see commonalities within many cultures. To cite one of Frazer’s countless comparative examples, consider the way that humans have approached the bear hunt. After the Kamtchatkans had killed a bear and eaten its flesh, the host of the subsequent dinner party would bring the bear’s head before the assembled guests, wrap it in grass, and then conduct a panel of sorts where the host, serving as a moderator only slightly less ballistic than Bill O’Reilly, would ask the bear if he had been well-treated. Much like many wingnut “journalists” irresponsibly publishing claims in Slate today without robust evidence (and failing to tender corrections when pwned), the Kamtchatkan host would blame the Russians. The American Indians likewise implored the dead bear not to be angry for being hunted and would hang the bear’s head on a post, painting it red and blue rather than donning it with vegetation. They also addressed it, much in the manner that dog owners chat with their uncomprehending pets when nobody’s around. The Assiniboins also held feasts after a hunt and also mounted the bear’s head, shrouding it in strips of scarlet cloth, and respected the bear so much that they offered the head a pipe to smoke, not unlike the poor dog who sits outside Mets games with a pipe in his mouth. And looking beyond Frazer, one finds in Alanson Skinner’s Notes on the Eastern Cree and Northern Saulteaux a similar bear’s head ceremony that involved sharing a pipe before the participants took a bite from the bear’s flesh and, in the old Finnish custom of karhunpeijaiset, a bear’s head mounted upon a young tree, venerated and addressed as a relative or the son of a god. And according to the Russian ethnographer Vladimir Arsenyev (and I found this bit by sifting through James H. Grayson’s Myths and Legends from Korea), the Udegey people of the Russian Far East also had a bear ceremony and believed, “To throw the head away is a great sin….The cooked bear’s head is wrapped in its own skin with its fur outwards and tied up with a thin rope,” with a communal ceremony quite similar to the Finns’.

I could go on (and indeed Frazer often rambles for pages), but there’s an undeniable awe in learning that something so specific about bears (head mounted, party organized, head covered, bear respected), much less anything else, arose independently in so many different parts of the world. It proves very conclusively, and perhaps this is especially essential for us to understand as we reconcile a vast and seemingly incurable national division, that humans share more in common with each other than we’re willing to confess and that the seemingly unique rituals that we believe define “us” are quite likely shared many times over in other parts of the nation, much less the world.

The reason it took me so long to read The Golden Bough was not because of its many thousand pages (aside from some sloggish parts in the Spirits of the Corn and of the Wild volumes, the books are surprisingly readable2), but because my imagination would become so captivated by some tale of trees esteemed above human life or a crazed orgiastic release (see Saturnalia) that I would lose many hours in the library seeing how much of this was still practiced. It has been more than a century since Frazer published the Third Edition, but his remarkable observations about shared rituals still invite us to dream and believe and to perceive that, Frazer’s regrettable references to “savages” and “primitives” notwithstanding, we are not so different from each other.

Frazer’s explanation for these common qualities — epitomized by the famous J.M.W. Turner painting (pictured above) sharing the same title as Frazer’s capacious opus — rests in the sylvan lake of Nemi and an ancient tale in which a priest-king defended a singular tree. The priest-king, who was an incarnation of a god wedded to the world, could only be overpowered by a fight to the death and, if he was slain, he would be replaced by his victor, with the cycle perpetuating ad infinitum. Frazer believed that nearly every story in human history could be traced back to this unifying myth, with most of the tales triggered by our imagination arising out of what he called “sympathetic magic,” whereby humans often imitate what they cannot understand. So if this meant building effigies or participating in elaborate and often unusual rituals that explain why the sun scorched the crops to an unsustainable crisp in the last harvest or helped more animals to multiply for grand feasts next season, magical thinking provided both the bond and the panacea well before Robert B. Thomas came up with the Old Farmer’s Almanac.

There are two components to sympathetic magic. The first is Similarity, whereby “the magician infers that he can produce any effect he desires merely by imitating it.” The second is Contagion, a transfer of “essence” by physical contact (among other things, this would account for why humans have been especially careful about bears’ heads, as described above), whereby he “infers that whatever he does to a material object will affect equally the person with whom the object was once in contact, whether it formed part of his body or not.”

One of The Golden Bough’s most fascinating volumes, The Scapegoat, reveals how a human belief in “essence” may be the root of our most irrational fears. Contagion often led to humans trying to transfer their diseases and miseries to other people, if not reinforcing their own biases about people or groups that they disliked. I am indebted to the terrific podcast Imaginary Worlds for steering me to the work of Carol Nemeroff, whose psychological considerations of Frazer’s findings are especially constructive in understanding disgust. Nemeroff and her colleagues conducted a series of studies in which they placed a sterilized dead roach in a glass of juice and asked subjects to eat fudge that resembled dog feces. The natural reactions (recoiling at the roach and the shit-shaped fudge) showed that sympathetic magic is still very much a mainstay of our culture.

Indeed, sympathetic magic drives most of our cherished rituals today. In one of his most controversial (but nevertheless true) observations, Frazer notes in Adonis Attis Osiris that, although the Gospels never cited a hard date for Jesus Christ’s birthday bash, Christians have adhered to their churchgoing rituals with the same practiced regularity one sees in fundamentalist homophobes holding up cardboard signs that misquote the Bible to reinforce their hate. The original celebration date of Christ’s alleged birth was January 6th. But because heathens celebrated the birthday of the Sun on December 25th, and this was often a draw for the Christians because the heathens were more fun, the Church agreed to set December 25th as the official day. If Christmas did not exist, it would be necessary for humankind to invent it. For such useful observations, The Golden Bough is incredibly handy to have in one’s library, if only to remind us that most of our beliefs, the recurring rituals we are expected to adhere to, are predicated upon some ancient explanation that we failed to shake from the Magic 8-Ball of our daily existence. So Colin Kaepernick really doesn’t need to stand for the national anthem. While this conformist rite is admittedly improved from the Nazi-like Bellamy salute, standing for The Star-Spangled Banner is little more than a quaint superstition that one is pressured to participate in to “belong.”

Frazer’s scholarship, while impressive, is sometimes inexact in its effort to find a Theory of Everything. Midway through putting together the Third Edition, Frazer was challenged by Dr. Edward Westermarck, who pointed out that fire festivals did not originate from fire reinforcing the sun’s light and heat, but rather from a need to celebrate purification. Frazer did correct his views in Balder the Beautiful, but it does leave one contemplating whether sympathetic magic served as Frazer’s knee-jerk go-to point in his noble quest to reconcile several folkloric strands.

Still, one cannot disavow the conclusion that much of our behavior is similarly ceremonial across cultures, which would indeed suggest a common source. Frazer managed one last volume, the Aftermath, in 1937, just four years before his death. While this volume is little more than a collection of B-sides, it does leave one wondering what Frazer would have made of Nuremberg rallies or even our current default mode of walking like zombies in the streets, heads down, eyes bulging at the prospect of another chapter in a Snapchat story. The gods and the sympathetic magic may be a tad more secular these days, but we still remain seduced. Myths and stories and rituals are as old as the Chauvet Cave paintings. One cannot imagine being human without them.

Next Up: Annie Dillard’s Pilgrim at Tinker Creek!

Shadow and Act (Modern Library Nonfiction #91)

(This is the tenth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Power Broker.)

When I first made my bold belly flop into the crisp waters of Ralph Ellison’s deep pool earlier this year, I felt instantly dismayed that it would be a good decade before I could perform thoughtful freestyle in response to his masterpiece Invisible Man (ML Fiction #19). As far as I’m concerned, that novel’s vivid imagery, beginning with its fierce and intensely revealing battle royal scene and culminating in its harrowing entrapment of the unnamed narrator, stands toe-to-toe with Saul Bellow’s The Adventures of Augie March as one of the most compelling panoramas of mid-20th century American life ever put to print, albeit one presented through a more hyperreal lens.

But many of today’s leading writers, ranging from Ta-Nehisi Coates to Jacqueline Woodson, have looked more to James Baldwin as their truth-telling cicerone, that fearless sage whose indisputably hypnotic energy was abundant enough to help any contemporary humanist grapple with the nightmarish realities that America continues to sweep under its bright plush neoliberal rug. At a cursory glance, largely because Ellison’s emphasis was more on culture than overt politics, it’s easy to see Ellison as a complacent “Maybe I’m Amazed” to Baldwin’s gritty “Cold Turkey,” especially when one considers the risk-averse conservatism which led to Ellison being attacked as an Uncle Tom during a 1968 panel at Grinnell College along with his selfish refusal to help emerging African-American authors after his success. But according to biographer Arnold Rampersad, Baldwin believed Ralph Ellison to be the angriest person he knew. And if one dives into Ellison’s actual words, Shadow and Act is an essential volume, which includes one of the most thrilling Molotov cocktails ever pitched into the face of a clueless literary critic, that is often just as potent and as lapel-grabbing as Baldwin’s The Fire Next Time.

For it would seem that while Negroes have been undergoing a process of “Americanization” from a time preceding the birth of this nation — including the fusing of their blood lines with other non-African strains — there has been a stubborn confusion as to their American identity. Somehow it was assumed that the Negroes, of all the diverse American peoples, would remain unaffected by the climate, the weather, the political circumstances — from which not even slaves were exempt — the social structures, the national manners, the modes of production and the tides of the market, the national ideals, the conflicts of values, the rising and falling of national morale, or the complex give and take of acculturalization which was undergone by all others who found their existence within the American democracy.

This passage, taken from an Ellison essay on Amiri Baraka’s Blues People, is not altogether different from Baldwin’s view of America as “a paranoid color wheel” in The Evidence of Things Not Seen, where Baldwin posited that a retreat into the bigoted mystique of Southern pride represented the ultimate denial of “Americanization” and thus African-American identity. Yet the common experiences that cut across racial lines, recently investigated with comic perspicacity on a “Black Jeopardy” Saturday Night Live sketch, may very well be a humanizing force to counter the despicable hate and madness that inspires uneducated white males to desecrate a Mississippi black church or a vicious demagogue to call one of his supporters “a thug” for having the temerity to ask him to be more respectful and inclusive.

Ellison, however, was too smart and too wide a reader to confine these sins of dehumanization to their obvious targets. Like Baldwin and Coates and Richard Wright, Ellison looked to France for answers and, while never actually residing there, he certainly counted André Malraux and Paul Valéry among his hard influences. In writing about Richard Wright’s Black Boy, Ellison wisely singled out critics who failed to consider the full extent of African-American humanity even as they simultaneously demanded an on-the-nose and unambiguous “explanation” of who Wright was. (And it’s worth noting that Ellison himself, who was given his first professional writing gig by Wright, was also just as critical of Wright’s ideological propositions as Baldwin was.) Ellison described how “the prevailing mood of American criticism has so thoroughly excluded the Negro that it fails to recognize some of the most basic tenets of Western democratic thought when encountering them in a black skin” and deservedly excoriated whites for seeing Paul Robeson and Marian Anderson merely as the ne plus ultra of African-American artistic innovation rather than the beginning of a great movement.

At issue, in Ellison’s time and today, is the degree to which any individual voice is allowed to express itself. And Ellison rightly resented any force that would stifle this, whether it be the lingering dregs of Southern slavery telling the African-American how he must act or who he must be in telling his story, or the myopic critics who would gainsay any voice by way of their boxlike assumptions about other Americans. One sees this unthinking lurch towards authoritarianism today with such white supremacists as Jonathan Franzen, Lionel Shriver, and the many Brooklyn novelists who, despite setting their works in gentrified neighborhoods still prominently populated by African-Americans, fail to include, much less humanize, the black people who still live there.

“White supremacist” may seem like a harshly provocative label for any bumbling white writer who lacks the democratic bonhomie to leave the house and talk with other people and consider that those who do not share his skin color may indeed share more common experience than presumed. But if these writers are going to boast about how their narratives allegedly tell the truth about America while refusing to accept challenge for their gaping holes and denying the complexity of vital people who make up this great nation, then it seems apposite to bring a loaded gun to a knife fight. If we accept Ellison’s view of race as “an irrational sea in which Americans flounder like convoyed ships in a gale,” then it is clear that these egotistical, self-appointed seers are buckling on damaged vessels hewing to shambling sea routes mapped out by blustering navigators basking in white privilege, hitting murky ports festooned with ancient barnacles that they adamantly refuse to remove.

Franzen, despite growing up in a city in which half the population is African-American, recently told Slate‘s Isaac Chotiner that he could not countenance writing about other races because he has not loved them or gone out of his way to know them and thus excludes non-white characters from his massive and increasingly mediocre novels. Shriver wrote a novel, The Mandibles, in which the only black characters are (1) Luella, bound to a chair and walked with a leash, and (2) Selma, who speaks in a racist Mammy patois (“I love the pitcher of all them rich folk having to cough they big piles of gold”). She then had the effrontery to deliver a keynote speech at the Brisbane Writers Festival arguing for the right to “try on other people’s hats,” failing to understand that creating dimensional characters involves a great deal more than playing dress-up at the country club. She quoted from a Margot Kaminski review of Chris Cleave’s Little Bee that offered the perfectly reasonable consideration, one that doesn’t deny an author’s right to cross boundaries, that an author may wish to take “special care…with a story that’s not implicitly yours to tell.” Such forethought clearly means constructing an identity that is more human rather than crassly archetypal, an eminently pragmatic consideration on how any work of contemporary art should probably reflect the many identities that make up our world. But for Shriver, a character should be manipulated at an author’s whim, even if her creative vagaries represent an impoverishment of imagination. For Shriver, inserting another nonwhite, non-heteronormative character into The Mandibles represented “issues that might distract from my central subject matter of apocalyptic economics.” Which brings us back to Ellison’s question of “Americanization” and how “the diverse American peoples” are indeed regularly affected by the decisions of those who uphold the status quo, whether overtly or covertly.

Writer Maxine Beneba Clarke bravely confronted Shriver with the full monty of this dismissive racism and Shriver responded, “When I come to your country. I expect. To be treated. With hospitality.” And with that vile and shrill answer, devoid of humanity and humility, Shriver exposed the upright incomprehension of her position, stepping from behind the arras as a kind of literary Jan Smuts for the 21st century.

If this current state of affairs represents a bristling example of Giambattista Vico’s corsi e ricorsi, and I believe it does, then Ellison’s essay, “Twentieth-Century Fiction and the Black Mask of Humanity,” astutely demonstrates how this cultural amaurosis went down before, with 20th century authors willfully misreading Mark Twain, failing to see that Huck’s release of Jim represented a moment that not only involved recognizing Jim as a human being, but admitting “the evil implicit in his ‘emancipation'” as well as Twain accepting “his personal responsibility in the condition of society.” With great creative power comes great creative responsibility. Ellison points to Ernest Hemingway scouring The Adventures of Huckleberry Finn merely for its technical accomplishments rather than this moral candor and how William Faulkner, despite being “the greatest artist the South has produced,” may not have been quite the all-encompassing oracle, given that The Unvanquished‘s Ringo is, despite his loyalty, devoid of humanity. In another essay on Stephen Crane, Ellison reaffirms that great art involves “the cost of moral perception, of achieving an informed sense of life, in a universe which is essentially hostile to man and in which skill and courage and loyalty are virtues which help in the struggle but by no means exempt us from the necessary plunge into the storm-sea-war of experience.” And in the essays on music that form the book’s second section (“Sound and the Mainstream”), Ellison cements this ethos with his personal experience growing up in the South. If literature might help us to confront the complexities of moral perception, then the lyrical, floating tones of a majestic singer or a distinctive cat shredding eloquently on an axe might aid us in expressing it.
And that quest for authentic expression is forever in conflict with audience assumptions, as seen with such powerful figures as Charlie Parker, whom Ellison describes as “a sacrificial figure whose struggles against personal chaos…served as entertainment for a ravenous, sensation-starved, culturally disoriented public which had but the slightest notion of its real significance.”

What makes Ellison’s demands for inclusive identity quite sophisticated is the vital component of admitting one’s own complicity, an act well beyond the superficial expression of easily forgotten shame or white guilt that none of the 20th or 21st century writers identified here have had the guts to push past. And Ellison wasn’t just a writer who pointed fingers. He held himself just as accountable, as seen in a terrific 1985 essay called “An Extravagance of Laughter” (not included in Shadow and Act, but found in Going with the Territory), in which Ellison writes about how he went to the theatre to see Jack Kirkland’s adaptation of Erskine Caldwell’s Tobacco Road. (I wrote about Tobacco Road in 2011 as part of this series and praised the way that this still volatile novel pushes its audience to confront its own prejudices against the impoverished through remarkably flamboyant characters.) Upon seeing wanton animal passion among poor whites on the stage, Ellison burst into an uncontrollable paroxysm of laughter, which emerged as he was still negotiating the rituals of New York life shortly after arriving from the South. Ellison compared his reaction, which provoked outraged leers from the largely white audience, to an informal social ceremony he observed while he was a student at Tuskegee that involved a set of enormous whitewashed barrels labeled FOR COLORED placed in public space. If an African-American felt an overwhelming desire to laugh, he would thrust his head into the pit of the barrel and do so. Ellison observes that these were African-Americans “who in light of their social status and past condition of servitude were regarded as having absolutely nothing in their daily experience which could possibly inspire rational laughter.” And the expression of this inherently human quality, despite being a cathartic part of reckoning with identity and one’s position in the world, was nevertheless positioned out of sight and thus out of mind.

When I took an improv class at UCB earlier this year, I had an instructor who offered rather austere prohibitions against any strain of humor considered “too dark” or “punching down,” which would effectively disqualify both Tobacco Road and the Tuskegee barrel ritual that Ellison describes. These restrictions greatly frustrated me and a few of my classmates, who didn’t necessarily see the exploration of edgy comic terrain as a default choice, but merely one part of asserting an identity inclusive of many perspectives. I challenged the notion of confining behavior to obvious choices and ended up getting a phone call from the registrar, a smart and genial man with whom I ended up having a friendly and thoughtful volley about comedy. I had apparently been ratted out by one student, who claimed that I was “disrupting” the class when I was merely inquiring about my own complicity in establishing base reality. In my efforts to further clarify my position, I sent a lengthy email to the instructor, one referencing “An Extravagance of Laughter,” and pointed out that delving into the uncomfortable was a vital part of reckoning with truth and ensuring that you grew your voice and evolved as an artist. I never received a reply. I can’t say that I blame him.

Ellison’s inquiry into the roots of how we find common ground with others suggests that we may be able to do so if we (a) acknowledge the completeness of other identities and (b) allow enough room for necessary catharsis and the acknowledgment of our feelings and our failings as we take baby steps towards better understanding each other.

The most blistering firebomb in the book is, of course, the infamous essay “The World and the Jug,” which demonstrates just what happens when you assume rather than take the time to know another person. It is a refreshingly uncoiled response that one could not imagine being published in this age of “No haters” reviewing policies and genial retreat from substantive subjects in today’s book review sections. Reacting to Irving Howe’s “Black Boys and Native Sons,” Ellison condemns Howe for not seeing “a human being but an abstract embodiment of living hell” and truly hammers home the need for all art to be considered on the basis of its human experience rather than the spectator’s constricting inferences. Howe’s great mistake was to view all African-American novels through the prism of a “protest novel,” and this effectively revealed his own biases against what black writers had to say and very much in favor of certain pre-rigged ideas that Howe expected them to say. “Must I be condemned because my sense of Negro life was quite different?” writes Ellison in response to Howe roping him in with Richard Wright and James Baldwin. And Ellison pours on the vinegar by not only observing how Howe self-plagiarized passages from previous reviews, but how his intractable ideology led him to defend the “old-fashioned” violence contained in Wright’s The Long Dream, which, whatever its merits, clearly did not keep current with the changing dialogue at the time.

Shadow and Act, with its inclusion of interviews and speeches and riffs on music (along with a sketch of a struggling mother), may be confused with a personal scrapbook. But it is, first and foremost, one man’s effort to assert his identity and his philosophy in the most cathartic and inclusive way possible. We still have much to learn from Ellison more than fifty years after these essays first appeared. And while I will always be galvanized by James Baldwin (who awaits our study in a few years), Ralph Ellison offers plentiful flagstones to face the present and the future.

SUPPLEMENT: One of the great mysteries that has bedeviled Ralph Ellison fans for decades is the identity of the critic who attacked Invisible Man as a “literary race riot.” In a Paris Review interview included in Shadow and Act, Ellison had this to say about the critic:

But there is one widely syndicated critical bankrupt who made liberal noises during the thirties and has been frightened ever since. He attacked my book as a “literary race riot.”

With the generous help of Ellison’s biographer Arnold Rampersad (who gave me an idea of where the quote might be found in an email volley) and the good people at the New York Public Library, I have tracked down the “widely syndicated critical bankrupt” in question.

His name is Sterling North, best known for the 1963 children’s novel Rascal. He wrote widely popular (and rightly forgotten) children’s books while writing book reviews for various newspapers. North was such a vanilla-minded man that he called comics “a poisonous mushroom growth” and seemed to have it in for any work of art that dared to do something different — or that didn’t involve treacly narratives about raising baby raccoons.

And then, in the April 16, 1952 issue of the New York World-Telegram, he belittled Ellison’s masterpiece, writing these words:

This is one of the most tragic and disturbing books I have ever read. For the most part brilliantly written and deeply sincere, it is, at the same time, bitter, violent and unbalanced. Except for a few closing pages in which the author tries to express something like a sane outlook on race relations, it is composed largely of such scenes of interracial strife that it achieves the effect of one continuous literary race riot. Ralph Ellison is a Negro with almost as much writing talent as Richard Wright. Like his embittered hero (known only as “I” throughout the book), Mr. Ellison received scholarships to help him through college, one from the State of Oklahoma which made possible three years at the Tuskegee Institute, and one from the Rosenwald Foundation.

If Mr. Ellison is as scornful and bitter about this sort of assistance as he lets his “hero” be, those who made the money available must wonder if it was well spent.

North’s remarkably condescending words offer an alarming view of the cultural oppression that Ellison was fighting against and serve as further justification for Ellison’s views in Shadow and Act. Aside from his gross mischaracterization of Ellison’s novel, there is North’s troubling assumptions that Ellison should be grateful in the manner of an obsequious and servile stereotype, only deserves a scholarship if he writes a novel that fits North’s limited idea of what African-American identity should be, and that future white benefactors should think twice about granting opportunities for future uppity Ellisons.

It’s doubtful that The Sterling North Society will recognize this calumny, but this is despicable racism by any measure. A dive into North’s past also reveals So Dear to My Heart, a 1948 film adaptation of North’s Midnight and Jeremiah that reveled in Uncle Tom representations of African-Americans.

North’s full review of Invisible Man can be read below:

[Image: scan of Sterling North’s review]

Next Up: James George Frazer’s The Golden Bough!

The Power Broker (Modern Library Nonfiction #92)

(This is the ninth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The American Political Tradition.)

Sunset Park is a cozy part of Brooklyn trilling with children making midday escapes from big brick schools, with a few old factories that wail great threnodies whenever the moon winks a ditty about displaced residents on a cloudy night. There are robust workers and tight-knit families and banh mi bistros and bustling bakeries from which one can savor the tantalizing nectar of glorious Spanish gossip squeezing into the streets. If you are tipsy after too many pints at the Irish pubs lining the southwestern fringe, there are 24-hour donut shops serving as makeshift diners, with loquacious jacks cooking up chorizo hash for any hungry ghost in a fix.

This is the region, along with East New York and Flatlands and Bensonhurst, where Brooklyn’s true soul still shines. It remains insulated from the Williamsburg hipsters oblivious to the high-rise monstrosities now being hoisted near the East River and the yuppies who cleave to Park Slope’s gluten-free stroller war zone like children keeping to the shallow end of the pool. Yet the motley banter rivals the bright babble bubbling five miles east in Ditmas Park and even the chatty ripples that percolate just two miles south in Bay Ridge. In Sunset Park, you can pluck the city’s most enormous plantains from bold bodega bins bulging with promise, talk to the last honest bartender at Brooklyn’s best bowling alley, or walk beneath a Buddhist temple for some of the finest vegetarian Chinese grub in the region. It is a place of repose. It is a place of fun. It is a place to live.

Yet as great and as welcoming and as improbably enduring as this part of Brooklyn is, it could have been bigger. And for a long time, it was. Until Robert Moses came along.

There are many grim tales contained within Robert A. Caro’s The Power Broker — an alarmingly large and exquisitely gripping and undeniably great and insanely obsessive masterpiece of journalism documenting the most ruthless urban planner that New York, and possibly America, has ever known. If you love New York City even one tenth as much as I do, you will find many reasons to shout obscenities out your window after reading about what Robert Moses did to this mighty metropolis. It was Moses who killed off the free aquarium, open to all, that once stood in Battery Park. It was Moses who pitted reliable mass transit lines serving regular Janes and Joes against highways designed solely for those who had the shekels to buy and upkeep a car. It was Moses who believed African-Americans to be “dirty” and who, in building Riverside Park, stiffed the Harlem section of playgrounds (seventeen in the West Side; one in Harlem) and football fields (five to one). Moses was so casually racist that most of the parks he built, the parks that secured his popularity, served white middle-class New Yorkers. But working-class families needed these parks more and were often reduced to opening a fire hydrant in the streets and playing in the gutter during a hot summer.

Not a single person in power will ever change the Manhattan skyline in the irreversible way that Moses did. Robert Moses had massive ambition, savvy savagery, limitless arrogance and energy, improbably large coffers that he willed together through a bridge bond ploy, a knack for grabbing and holding onto power, and a sick talent for persuading some of the most powerful figures of the 20th century to sign crooked agreements and/or get steamrolled into deals that screwed them over in quite profound ways.

[Image: Third Avenue beneath the Gowanus Expressway]

For me, one of the acts that sums Moses up is the way in which he ripped out a major part of Sunset Park’s soul by erecting the Gowanus Expressway above Third Avenue. This is a toxic concrete barrier that still remains as cold and as gray and as unwelcoming as the bleakest rainstorm in December. To this very day, you can still hear the Belt Parkway’s thundering traffic as far away as Sixth Avenue. During a recent walk along Third Avenue on a somewhat chilly afternoon, I surveyed Moses’s handiwork and was nearly mowed down by a minivan barreling out of Costco, its back bulging with wasteful mass-produced goods, as a mad staccato honk pierced my ears with a motive that felt vaguely murderous.

Robert Moses wanted to make New York a city for automobiles, even though he never learned how to drive. And in some of the neighborhoods where his blots against natural urban life remain, his dogged legacy against regular people still persists.

[Image: rendering of the Gowanus Parkway]

(Source: The Gowanus Improvement: November 1, 1941 / The Triborough Bridge Authority)

Sunset Park’s residents had begged Moses to build the expressway over Second Avenue. This was closer to the water and the industrial din and might have preserved the many small businesses and happy homes that once punctuated Third Avenue’s happy line. But Moses, citing the recently opened subway that now serves the D, N, and R underneath Fourth Avenue and the available support beams from the soon-to-be-demolished El, was determined to raise a freeway on Third Avenue that he claimed was much cheaper, even though the engineers who weren’t on Moses’s payroll had observed that one mere mile of freeway looping back to the shore wouldn’t have substantially increased the cost. But Moses had fought barons before and had made a few curving compromises while constructing the Northern State Parkway. Armed with the power of eminent domain and a formidable administrative apparatus in which bulldozers and blockades could be summoned against opponents almost as fast as a modern-day Seamless delivery, Moses was not about to see his vision vitiated. And if that meant calling the good parts of Sunset Park a “slum,” which it wasn’t, or spouting off any number of lies or threats to destroy perfectly respectable working class neighborhoods, then he’d do it.

As documented by Caro, the Gowanus stretched a raised subway line’s harmless Venetian-blind shadow into a dirty expanse that was nearly two and a half times as wide, wider than a football field and twice as onyx. The traffic lights were so swiftly timed that one had to be a running back to sprint beneath the smog-choking blackness to the other side of the street. The condensation from the steel pillars created such a relentless dripping that it transformed this once sunny thoroughfare into a dirt-clogged river Styx for cars. The cost was seven movie theaters, dozens of restaurants, endless mom and pop stores, butcher shops that raffled Christmas turkeys, and tidy affordable apartments — all shuttered. Moses did not plan for the increased industrial traffic that sprinkled into Sunset Park’s streets, just as he hadn’t for his many other freeways and bridges. Garbage and rats accumulated in the surrounding lots. There was violence and drugs and gang wars. The traffic tightened and slowed to a crawl, demanding more roads, more buildings to gut, more neighborhoods to disrupt for the worse.

[Image: panel from the Robert Moses graphic novel]

Who was this man? And why was he so determined to assert his will? He fancied himself New York’s answer to Georges-Eugène Haussmann (even reusing a doughnut-shaped building for the 1964 World’s Fair that the Parisian planner himself had put together in 1867), yet didn’t begin to earn a dime for his tyranny until his forties. (He lived off his family’s money and secured early planning jobs by declining a salary.) He thought himself a poet (not an especially good one), but if he had any potential prose style, it turned sour and hard and technocratic by the time he hit Oxford and received his doctorate at Columbia. He worked seemingly every hour of the day and took endless walks, memorizing the precise points where he would later build big parks and tennis courts. And he loved to swim, taking broad strokes well beyond the shores in his sixties and seventies with an endurance and strength that crushed men who were two decades younger. Small wonder that Moses gave the city so many public pools.

After I finished reading The Power Broker, I wanted to know more. I found myself plunging into the collected works of Jane Jacobs (Jacobs’s successful battle to save Washington Square Park was left out by Caro due to the enormity of The Power Broker‘s original manuscript) and Anthony Flint’s excellent volume Wrestling with Moses, which documents the battles between Moses and Jacobs. I also read an extremely useful volume edited by Hilary Ballon and Kenneth T. Jackson, Robert Moses and the Modern City, which may be the best overview of every Moses project (and which attempts, not entirely successfully, to refute some of Caro’s claims), as well as a wonderful graphic novel from Pierre Christin and Olivier Balez, Robert Moses: The Master Builder of New York City, which I recommend for anyone who doesn’t have enough time to read Caro’s 1,200-page biography written in very small print (although you really should read it).

I wanted to know how a man like Moses could operate for so long with so few challenging him. His behavior often resembled a spoiled infant braying for his binky. When faced with an authority figure, Moses would often threaten to resign from a position until he got his way. Moses used this tactic so frequently that Mayor La Guardia once sent him a note reading, “Enclosed are your last five or six resignations; I’m starting a new file,” followed by city corporation counsel Paul Windels creating a pad of forms reading “I, Robert Moses, do hereby resign as _______ effective __________,” which further infuriated Moses.

The answer, of course, was through money and influence that Moses had raised through a bridge bond scheme floated through the Triborough Bridge and Tunnel Authority, with Moses as Chairman:

Moses wanted banks to be so anxious to purchase Triborough bonds that they would use all of their immense power to force elected officials to give his public works proposals the approval that would result in their issuance. So although the safety of the banks’ money was already amply assured by Triborough’s current earnings (so great that each year the Authority collected far more money than it spent), by the irrevocable covenants guaranteeing that tolls could never be removed without the bondholders’ consent, and by Triborough’s monopoly, also irrevocable, that guaranteed them that if any future intracity water crossing were built, they would share in its tolls, too, Moses provided them with additional assurances. He maintained huge cash reserves — “Fantastic,” says Jackson Phillips, director of municipal research for Dun and Bradstreet; “the last time I looked they had ten years’ interest on reserve” — and when he floated the Verrazano bonds he agreed to lay aside — in addition to the existing reserves! — 15 percent ($45,000,000) of the cash he received for the new bond issue, and not touch it until the bridge was open and operating five years later. Purchasers of the Verrazano bonds could be all but certain that they could collect their interest every year even if the bridge never collected a single toll. Small wonder that Phillips says, “Triborough’s are just about the best bonds there are.” Wall Streeters may believe that “any investment is a bet,” but Robert Moses was certainly running the safest game in town.

In other words, Moses pulled off one of the most sinister financial games in New York history. The Triborough Authority could not only collect tolls on its bridges and capitalize on these receipts by issuing revenue bonds, which in turn generated considerable income for Moses to fund his many public works projects, but it was capable of spending more money than the City of New York. Which meant that the city often had to come crawling back to Moses. And if the city or the state wanted to audit the Triborough Authority, its operations were so incredibly complicated that comprehending them would require at least fifty accountants working full-time for a year. Government did not have that kind of money to spend on safeguards against Moses. Moreover, it needed Moses’s financial assistance in order to provide for the commonweal.
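The scale of that cushion is easy to check with back-of-the-envelope arithmetic. In this minimal sketch, only the 15 percent set-aside rate and the $45,000,000 figure come from Caro’s passage above; the implied size of the Verrazano bond issue is inferred here, not stated in the text:

```python
# Back-of-the-envelope check on the Verrazano reserve Caro describes.
# Only the 15 percent rate and the $45,000,000 set-aside come from the
# passage; the total size of the bond issue is back-calculated.

set_aside = 45_000_000    # cash laid aside, untouchable for five years
set_aside_rate = 0.15     # 15 percent of the bond proceeds

implied_issue = set_aside / set_aside_rate
print(f"Implied Verrazano bond issue: ${implied_issue:,.0f}")
# prints: Implied Verrazano bond issue: $300,000,000
```

In other words, a nine-figure pool of idle cash stood behind the bondholders’ interest payments before a single toll was collected, on top of the existing reserves Caro mentions.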

It wasn’t until 1968 that Governor Nelson Rockefeller and Mayor John Lindsay put an end to these remarkable shenanigans by siphoning tolls into the newly created Metropolitan Transportation Authority. The bondholders might have sued over this. It was, after all, unconstitutional to uproot existing contractual obligations. But Rockefeller’s brother David happened to be the head of Chase Manhattan Bank. And Chase was the largest TBTA bondholder. In a glaring case of “it’s not what you know, it’s who you know,” the Triborough Authority’s days as a puppet organization for Moses were over. Moses was forced to abandon his role, and his political hold on New York was effectively broken after four decades of relentless building and endless resignation threats.

It seemed a fitting end for a man who had maintained such a stranglehold over such a large area. Six years later, Robert Caro’s biography appeared. Moses wrote a 23-page response shortly after the book’s publication. Caro’s rebuttal ran five paragraphs, concluding with this one:

It is slightly absurd (but typical of Robert Moses) to label as without documentation a book that has 83 solid pages of single-spaced, small-type notes and that is based on seven years of research, including 522 separate interviews.

Next Up: Ralph Ellison’s Shadow and Act!

The American Political Tradition (Modern Library Nonfiction #93)

(This is the eighth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Contours of American History.)

Before he became famous for delineating “the paranoid style in American politics” and honing every principled bone against the feverish anti-intellectualism one now sees embodied in everything from long-standing philistine Dan Kois decrying “eating his cultural vegetables” to lunatic presidential candidate Ted Cruz declaring gluten-free meals as a politically correct “social experiment,” historian Richard Hofstadter spent four years on a fiercely independent book that would go on to sell close to a million copies. The American Political Tradition was a Plutarchian overview of illustrious American figures ranging from vivacious abolitionist Wendell Phillips to Woodrow Wilson as closeted conservative. It was aimed at winning over a high-minded American public. Like William Appleman Williams, Hofstadter was very much following in Charles Beard’s footsteps, although this historian hoped to march to his own interpretive drum. Reacting to the toxic McCarthyism at the time, Hofstadter’s cautious defense of old school American liberalism, with the reluctant bulwark hoisted as he poked holes into the foibles of celebrated icons, saddled him with the label of “consensus historian.” With each subsequent volume (most notably The Age of Reform), Hofstadter drifted further away from anything close to a scorching critique of our Founders as hardliners enforcing their economic interests to a more vociferous denouncement of agrarian Populists and numbnuts standing in the way of erudite democratic promise. Yet even as he turned more conservative in later years, Hofstadter insisted that his “assertion of consensus history in 1948 had its sources in the Marxism of the 1930s.”

Such adamantine labels really aren’t fair to Hofstadter’s achievements in The American Political Tradition. The book is by no means perfect, but its Leatherman Wave-like dissection of American history unfolds with some sharp and handy blades. While Hofstadter is strangely reluctant to out Andrew Jackson as a demagogue (“He became a favorite of the people, and might easily come to believe that the people chose well.”) and far too forgiving of John C. Calhoun, a rigid bloviator with a harsh voice who was one of slavery’s biggest cheerleaders and whose absolutist stance against tariffs under the guise of moderatism would later inspire the South to consider secession as a legitimate nuclear option, Hofstadter at his best slices with a necessary critical force into many hallowed patriarchs. For it is the sum of their variegated and contradictory parts that has caused some to view the American trajectory in Manichean terms.

One of the book’s standout chapters is Hofstadter’s shrewd analysis of Lincoln as an exceptionally formidable man who dialed his egalitarian ardor down to zero for the sake of a very rapid political rise. In just four years, Lincoln advanced from an obscure attorney in Illinois to a prominent party leader in that same state’s House of Representatives. But Hofstadter cogently argues that Lincoln was far from the outspoken abolitionist who would later lay down some very strong words against those who would deny other people freedom. Lincoln not only kept his enemies closer than his friends, but he was exceptionally careful with his rhetoric, even though one eye-popping 1836 declaration proposed extending suffrage to women. Much as Franklin D. Roosevelt was very savvy about letting his political opponents make the first move before he acted, Lincoln used the Declaration of Independence’s very text as ammunition and inspiration for his justification for abolition, which came much later than Lincoln’s many admirers are often willing to admit: his first public condemnation of slavery arrived when he was forty-five.

Hofstadter points out that Lincoln’s seeming contradiction between revolutionary politics and pragmatic interpretation of the law was not especially peculiar, but part of a nuts-and-bolts perpetuation of an ongoing political tradition, one that can be seen in Lincoln’s hard maneuvering with the 1851 conditional loan he issued to his stepbrother John D. Johnston. Lincoln’s famous House Divided speech was masterful rhetoric urging national reconciliation of the slavery issue, but he didn’t exactly go out of his way to out himself as an abolitionist. Hofstadter notes that in 1858, the seemingly honest Abe spoke in two entirely different manners about racial equality in Chicago and in Charleston (see the second paragraph of his first speech). Yet these observations not only illustrate Lincoln’s political genius, but invite parallels to Lyndon Johnson’s brilliant and equally contradictory engineering in passing the 1957 Civil Rights Act (perhaps best chronicled in a gripping 100-page section of Robert A. Caro’s excellent Master of the Senate). The American political tradition, which Hofstadter identifies as a continuity with capitalist democratic principles, is seen today with Hillary Clinton struggling against a young population hungry for progressive change unlikely to happen overnight, despite Bernie Sanders’s valiant plans and the immediate need to rectify corporate America’s viselike hold on the very democratic principles that have sustained this nation for more than two hundred years.

Yet this is the same tradition that has given us long years without a stabilizing central bank, the Trail of Tears, the Civil War, the Credit Mobilier scandal, robber barons, and Hoover’s unshakable faith that “prosperity was just around the corner,” among many other disgraces. Hofstadter is thankfully not above condemning laissez-faire absolutism, such as Grover Cleveland’s unrealistic assumption that “things must work out smoothly without government action, or the whole system, coherent enough in theory, would fall from the weakness of its premises” or the free silver campaign that propelled the bombastic William Jennings Bryan into improbable presidential candidacy. On Bryan, Hofstadter describes him as intellectually “a boy who never left home,” and one can see some of Bryan’s regrettable legacy in the red-faced fulminations of a certain overgrown boy who currently pledges to make America great again. A careless and clumsy figure like Bryan was the very antithesis of Lincoln. Bryan failed to see difficult political tasks through to their necessary end. He would adopt principles that he once decried. His well-meaning efforts amounted to practically nothing. Think of Bryan as Fargo‘s Jerry Lundegaard to Lincoln’s Joe Girard. Hofstadter suggests that “steadfast and self-confident intelligence,” perhaps more important than courage and sincerity, was the very quality that Bryan and this nation so desperately needed. Yet in writing about Teddy Roosevelt and pointing to the frequency of “manly” and “masterful” in his prose, Hofstadter implies that these “more perfect” personal qualities for the political tradition “easily became transformed into the imperial impulse.”

This is, at times, a very grumpy book. One almost bemoans the missed opportunity to enlist the late Andy Rooney to read aloud the audio version. But it is not without its optimism. Hofstadter places most of his faith in abolitionist agitator Wendell Phillips. Even after defending Phillips from numerous historical condemnations and pointing to Phillips’s “higher level of intellectual self-awareness,” Hofstadter sees the agitator as merely “the counterweight to sloth and indifference.” Yet Hofstadter, at this young stage of his career, isn’t quite willing to write off agitators. He does point to why Phillips was a necessary and influential force providing equilibrium:

But when a social crisis or revolutionary period at last matures, the sharp distinctions that govern the logical and doctrinaire mind of the agitator become at one with the realities, and he appears overnight to the people as a plausible and forceful thinker. The man who has maintained that all history is the history of class struggles and has appeared so wide of the mark in times of class collaboration may become a powerful leader when society is seething with unresolved class conflict; the man who has been valiantly demanding the abolition of slavery for thirty years may become a vital figure when emancipation makes its appearance as a burning issue of practical politics. Such was the experience of Wendell Phillips: although he never held office, he became one of the most influential Americans during the few years after the fall of Fort Sumter.

The question of whether you believe Hofstadter to be a consensus historian or not may depend on how much you believe that he viewed the American political tradition much like two Lazaruses forever duking it out for existence in the old Star Trek episode “The Alternative Factor.” He certainly sees a nation of political pragmatists and obdurate agitators caught in an eternal deadlock, which is not too far from the progressive historians who styled their interpretations on class conflict. But his fine eye for ferreting out the Burkean undertow within Woodrow Wilson’s putative liberalism or exposing how Hoover’s faith in unregulated business had him quivering with disbelief after Black Thursday suggests a historian who is interested in countering ideological bromides. Perhaps if Hofstadter had stretched some of his chapters across a massive book, his reputation as a consensus historian wouldn’t have been the subject of so many heated arguments among political wonks.

Fortunately, the next Modern Library essay in this series will investigate how one man fluctuated his politics to serve his own ends and reshaped a major metropolis through the iron will of his personality. That very long and very great book may be the key that turns the consensus lock. It will certainly tell us a lot more about political power.

Next Up: Robert A. Caro’s The Power Broker!

The Contours of American History (Modern Library Nonfiction #94)

(This is the seventh entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Promise of American Life.)


History is never the thrilling Zapcat powerboat ride it can and should be when we remain committed to oaring through the same exhausted legends about American exceptionalism and bogus democratic promise. Much as we may find new insights into human existence by tilting our canoes to the ripples contained within a storyteller’s cadences, so too may we discover more complete ways of contending with our historical contradictions through the viewpoint of a responsible revisionist armed with the facts and rejecting the hard establishment line.

The revisionist historian, that charming and sometimes infuriating rabble-rouser never to be confused with some creepy Holocaust denier flailing in a sea of empty Cheetos bags and crackpot pamphlets, often gets needlessly maligned in America. Two decades before Annette Gordon-Reed offered conclusive evidence of Thomas Jefferson’s relationship with Sally Hemings (upheld by a 1998 DNA test), Fawn Brodie was attacked by vanilla-minded legacy holders for pushing beyond James Callender’s tawdry trolling, daring to suggest that there was good reason to believe that our much heralded champion of the rights of man had skeletons in his closet that were vital to understanding his philosophy. Brodie’s book, despite its psychobiographical failings, led to a reckoning with our myths and assumptions about the Sage of Monticello, one that continues to this very day with college students demanding the removal of Jefferson statues on campuses.

Provided that their efforts do not involve going out of their way to bowdlerize troubling if incontrovertible parts of the story and that their results are as expansive and as rigorous as those of their more timorous mainstream counterparts, revisionists are often vital reconcilers of the public record. It is the facile propagandist who ignores Rosa Parks’s radicalism to paint a roseate image of a meek and tired seamstress who refused to give up her seat on a bus (“small,” “delicate,” and “little,” as belittled by Bill Clinton in 2005) or who upholds the lie that Abner Doubleday created baseball.

In recent decades, many young students have ardently clutched their copies of Howard Zinn’s A People’s History of the United States with the taut adamantine grip of a Fallout 4 junkie reluctant to relinquish her controller. Zinn’s thoughtful volume has been vehemently denounced by some establishment historians who have questioned the perceived polemical emphasis of class conflict at the expense of other issues. But before Zinn, there was William Appleman Williams, a brash energetic troublemaker who was arguably a more rigorous scholar than Zinn and who was among the best and the boldest of the firebrand 20th century historians who emerged from a Charles Beard afterglow with ass to kick once the bubble gum supply ran out.

William Appleman Williams unpacked the economic motivations of American expansion and foreign policy in The Tragedy of American Diplomacy and broadened this scholarship further with The Contours of American History, a punchy volume examining how imperialism and liberalism became a sordid double stitch intertwined in the American quilt well before the Sons of Liberty spilled massive chests of desperately offloaded tea into Boston Harbor. Yet Williams’s often nimble analysis, riddled as it sometimes is with conceptual overreach, robustly articulates the ever-changing and contradictory American Weltanschauung that has motivated nearly every governmental decision since. He documents a worldview that started off with the relatively benign goal of creating and sustaining an economic nation that provided for everyone, but devolved under the autocratic yoke of Jacksonian democracy and Gilded Age greed to the corporate capitalist nightmare we are all trying to awaken from today. And because Williams’s challenge to the so-called “American experiment” was so unprecedented in the mid-20th century, this historian was tarnished, besmirched, and condemned by other putative progressives who might have enlarged their rigid notions of national identity if they had been more willing to dive into the subtle words and actions directing the unshakable financial impetus.

Williams was harassed by the House Committee on Un-American Activities, that despicably despotic body that ruined the lives of so many, with a demand to produce the unfinished Contours manuscript. The HUAC would order Williams to testify in Washington and then cancel the appearance by telegram once he’d hopped on a train to the Beltway. Even after he testified for ten minutes and the HUAC abandoned its witch hunt, the IRS harassed him in various forms for nearly twenty years. Williams was hounded by the neoliberalism critic Arthur Schlesinger, Jr., who dutifully condemned Williams as “pro-communist” to the American Historical Association’s president. Even as late as 2009, an academic called Williams an “idiot” before a Society of Historians of American Foreign Relations panel, decrying Williams’s approach to history as a crude retooling of Charles Beard’s infamous assault upon our Founding Fathers’ pecuniary predispositions.

But Williams was far from a typical progressive. He was a registered Republican when he first came to Wisconsin. He voted for Nixon as the lesser evil in 1960. And even in Contours, he defended Herbert Hoover’s hands-off Depression era policies, seeing this as a necessary tactic to forestall property holders from creating a business-friendly fascism that could have had a more diabolical effect on our clime than the many Hoovervilles that had mushroomed across the nation. Williams argued that Hoover’s perceived failure to do anything represented a more active resistance against special interests than the Progressive Movement was willing to acknowledge or act upon at the time. And that’s the way this jazz-loving Midwestern historian rolled. As Williams was to write in a 1973 essay, the revisionist’s duty was to “see basic facts in a different way and as interconnected in new relationships. He is a sister and a brother to those who use old steel to make a zipper, as contrasted with those who add new elements to make a better steel.”

In my previous Modern Library essay, I castigated Herbert Croly for the historical developments that he could not see ahead of him, for erring too much in his perfervid belief in a central government and for diminishing the justifiable grievances of protesters. William Appleman Williams may very well represent the opposite problem: a historian who could see the implications of any action all too well, one who was willing to articulate any interpretation of the facts even if it meant being alienated by the jingoistic minds who needed to reconsider the other fateful historical trajectories upholding the status quo.

Williams’s highly specific examples very much allow him to sell us on his interpretation. In Tragedy, for example, Williams’s deductive prowess is in high gear when he examines how Woodrow Wilson’s March 1913 decision to refuse a government loan to China, one long coveted by American industrialists at the time (and later attempted privately), actually fell within the framework of the Open Door Policy. Many historians have interpreted Wilson’s pushback as a betrayal of American expansionism at the time, but Williams points to the lack of private capital available to fulfill the job as well as the possibility that any governmental loan, even one secured with the help of other financiers, may have been perceived as a very clear threat to neighboring Japan. The Open Door Policy, for all of its flaws and its needless sullying of China, was intended to provide a peacefully imperialist framework for a burgeoning American empire: a GATT or IMF before its time, though regrettably without much in the way of homegrown protest. (Rebellion would come later in Beijing with the May Fourth movement.) The ostensible goal was to strengthen China with fresh influxes of low-risk private capital so that it could withstand troublesome neighbors looking for a fight, even as the new obligations to American entrepreneurs forged hot rivulets of cash rolling back to the imperialist homeland. Wilson’s decision was, as discerned by Williams, a canny chesslike stratagem to avoid war and conflict, one that would keep China a servant to America’s riches. From the vantage point of the 21st century, this useful historical interpretation reveals Wilson to be a pioneer in the kind of venal and now all too commonplace globalization that morally bankrupt neoliberals like Thomas Friedman have no problem opening their old steel zippers for. 
Their free trade fantasies possess all the out-of-sight, out-of-mind justification of a revenge porn junkie ignoring another person’s real world humiliation for fleeting sociopathic pleasure.

It was with Contours that Williams blew the lid off the great American lie, exposing the American liberal’s failure to confront his own implication in much of the laissez nous faire madness. Williams traced the origins of our mercantilist approach to Anthony Ashley Cooper, the Earl of Shaftesbury. In the 17th century, Shaftesbury was a political figure who opposed harsh penalties and absolutist government. He stood up for the nonconformists and called for regular parliaments, and would go on to found and lead the early Whig party in the wake of the British Exclusion Crisis. While traveling to Oxford to have an abscess removed from his liver, he hit it off with a young doctor by the name of John Locke. (There weren’t as many cafes back then as there are today. In the 1600s, you had to take whatever mingling opportunities you could get.) Locke, of course, would later have many ideas about the social contract, a scheme about inalienable natural rights that would eventually find its way into a number one ditty penned by Jefferson that would become known as the Declaration of Independence.

But there was a twist to this tale. As Williams points out, Locke’s ideas were a corruption of Shaftesbury’s more inclusive and democratic efforts. Where Shaftesbury was willing to rebel against the King to ensure that courts and alternative political parties were in place to prevent the government from becoming an absolute tyranny, even going to the trouble of building a coalition that extended across all classes to fight for these safeguards when not putting together the Habeas Corpus Act of 1679, it was Locke who limited Shaftesbury’s remarkably liberal contributions by undercutting individual rights. Locke believed that those who owned property were perfectly justified in protesting their government, for they were the ones who had entered into a social contract. But the rabble who didn’t own property could more or less buzz off. As Williams put it, “[I]ndividualism was a right and a liberty reserved to those who accepted a status quo defined by a certain set of natural truths agreed upon a majority. Within such a framework, and it is a far narrower set of limits than it appears at first glance, the natural laws of property and labor were deemed sufficient to guide men’s pursuit of happiness.”

Yet those who subscribed to these early mercantilist standards believed that this classically liberal idea of “corporate structure” involved a basic responsibility to provide for everyone. And the way of sustaining such a benevolent national juggernaut was through the establishment of an empire: a Pax Americana predicated upon the promise of a democracy promulgated by patriarchs who not so quietly believed that the people were incapable of it. Williams observes how the Quakers in Philadelphia, who opposed expansion and much of the onslaughts against Native Americans, were very much committed to noblesse oblige, setting up hospitals, education, and philanthropic endeavors to take care of everyone. But this generous spirit was no match for the free trade nabobs or the hard-hearted Calvinists who increasingly shifted such solicitude to the propertied class (one can easily imagine Alec Baldwin’s Glengarry Glen Ross “Always be closing” speech spouted by a Calvinist), leading the great theologian Jonathan Edwards to offer righteous pushback against “fraud and trickishness in trade.”

Against this backdrop, post-Revolutionary expansion and the Monroe Doctrine allowed mercantilism to transmute into an idea that was more about the grab than the munificent results, with visions of empire dancing in many heads. By the time Frederick Jackson Turner tendered his Frontier Thesis in 1893, mercantilism was no longer about providing for the commonweal, but about any “self-made man” looking out after his interests. Williams points to Chief Justice John Marshall’s efforts to enforce safeguards, such as his Gibbons v. Ogden decision regulating interstate commerce, against the monopolies that would come to dominate America near the turn of the century. Marshall’s immediate successor, Chief Justice Taney, expanded the flexibility of the Constitution’s Contract Clause with his 1837 Charles River Bridge v. Warren Bridge decision, permitting states to alter any contract as they saw fit. While Taney’s decision seemed to sound the death knell for monopolies, it was no match against the consolidated trusts that were to come with the railroads and the robber barons. Rather curiously, for all of his sharp observations about free trade and expansionist dangers during this time, Williams devotes little more than a paragraph to the 1836 closing of the Second Bank of the United States:

[Nicholas Biddle] did a better job than the directors of the Bank of England. Under his leadership the bank not only established a national system of credit balancing which assisted the west as much as the east, and probably more, but sought with considerable success to save smaller banks from their own inexperience and greed. It was ultimately his undoing, for what the militant advocates of laissez nous faire came to demand was help without responsibilities. In their minds, at any rate, that was the working definition of democratic freedom.

Talk about sweeping one of the greatest financial calamities in American history under the rug! I don’t want to get too much into Andrew Jackson in this installment; I believe him to be nothing less than an abhorrent, reckless, and self-destructive maniac who claimed “liberalism” using the iron fist of tyranny. I shall preserve my apparently unquenchable ire for Old Hickory when I tackle Arthur Schlesinger, Jr.’s The Age of Jackson in a few years (Modern Library Nonfiction #36). But Jackson’s imperious and irresponsible battle with Biddle, complete with his Specie Circular, undoubtedly led to the Panic of 1837, in which interest rates spiked, the rich got richer, a fixable financial mess spiraled out of control and became needlessly dangerous, and buyers could not come up with the hard cash to invest in land. Considering Williams’s defense of Hoover in both Contours and Tragedy, it is extremely curious that he would shy away from analyzing why some form of central bank might be necessary to mitigate against volatility, even though he adopted some fascinating counterpoints to the “too big to fail” theory decades before Bernanke and Krugman.

This oversight points to the biggest issue I have with Williams. His solution to the great imperialist predicament was democratic socialism, which he called “the only real frontier available to Americans in the second half of the 20th century.” This is a clever way of inverting Turner’s thesis, but to uphold it Williams cites only a few examples: the courage of Wendell Phillips, a few throwaway references to social property, and a late 19th century return, with Edward Bellamy and Henry Demarest Lloyd, to the Quaker-like notion of “a commonwealth in which men were brothers first and economic men second.” But while Williams is often a master of synthesis, he falls somewhat short in delineating how his many historical examples can aid us in correcting our ongoing ills. If the American Weltanschauung is so steeped in our culture, how then can democratic socialism uproot it? This vital question remains at the root of any progressive-minded conversation. But now that we have a presidential race in which socialism is no longer a dirty word and the two leading Democratic candidates bicker over who is the greater progressive, perhaps the answer might arrive as naturally as Williams anticipated.

Next Up: Richard Hofstadter’s The American Political Tradition!

The Promise of American Life (Modern Library Nonfiction #95)

(This is the sixth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: In Cold Blood.)

Before The New Republic devolved under Chris Hughes into a half-worthy husk of knee-jerk platitudes just a few histrionic clickbait headlines shy of wily Slate reductionism, it was a formidable liberal magazine for many decades, courageous enough to take real stands while sustaining vital dialogue about how and when government should intercede in important affairs. The source of this philosophical thrust, as duly documented by Franklin Foer, was the greatly diffident son of a prominent newspaperman, an unlikely progenitor who entered and exited Harvard many times without ever finishing, someone who suffered from severe depression and who, for a time, didn’t know what to do with his life other than play bridge and tennis and write about obscure architecture. But Croly found it in him to spill his views about democracy’s potential, what he called the “New Nationalism,” into a 1909 book called The Promise of American Life, which served as something of a manifesto for the early 20th century Progressives and became a cult hit among political wonks at the time. It partially inspired Theodore Roosevelt, who was proudly name-checked by Croly as “a Hamiltonian with a difference,” to initiate his ill-fated 1912 Bull Moose campaign as an outsider presidential candidate. (Historians have argued over the palpable influence of Croly’s book on Roosevelt, but it’s possible that, had Croly not confirmed what Roosevelt had already been thinking about, Roosevelt might not have entered the 1912 race as ardently as he did. With a more united Republican coalition against Wilson, America may very well have carried on with a second Taft term, with an altogether different involvement in World War I. Taft’s notable rulings as Chief Justice of the Supreme Court, which included extending executive power and broadening the scope of police evidence, may not have been carried out in the 1920s. A book is often more of a Molotov shattering upon history’s turf than we are willing to accept.)

Croly’s book touched a nerve among a small passionate group. One couple ended up reading Croly’s book aloud to each other during their honeymoon (leaving this 21st century reader, comparing Croly’s thick “irremediable”-heavy prose style against now all too common sybaritic options, to imagine other important activities that this nubile pair may have missed out on). The newly married couple was Willard Straight and Dorothy Whitney. They had money. They invited Croly to lunch. The New Republic was formed.

So we are contending with a book that not only created an enduring magazine and possibly altered the course of American history, but one that had a profound impact on the right elite at the right time. It was thus a tremendous surprise to discover a book that greatly infuriated me during the two times I read it, at one point causing me to hurl it with high indignant velocity against a wall, for reasons that have less to do with Croly’s argument than with this gushing early 20th century idealist’s failure to foresee the rise of Nazism, the despicable marriage of racism and police brutality, growing income inequality, corporate oligarchy, draconian Common Core educational standards, and dangerous demagogues like George Wallace and Donald Trump.

But it is also important to remember that Croly wrote this book before radio, television, the Internet, women’s suffrage, two world wars, the Great Depression, smartphones, outrage culture, and 9/11. And it is never a good idea to read an older book, especially one of a political nature, without considering the time in which it was written. I did my best to curb my instinct to loathe Croly for what he could not anticipate, for his larger questions of how power aligns itself with the democratic will of the people are still very much worth considering. Croly is quite right to identify the strange Frankenstein monster of Alexander Hamilton’s pragmatic central government and Thomas Jefferson’s rights of man — the uniquely American philosophical conflict that has been the basis of nearly every national conflict and problem that has followed — as a “double perversion” of our nation’s potential, even if Croly seems unwilling to consider that some “perversions” are necessary for an evolving democratic republic and he is often too trusting of executive authority and the general public’s obeisance to it. That these inquiries remain irreconcilable (and are perverted further still by crass politicians who bellow about how to “make America great again” as they eject those who challenge them from the room) some 107 years after the book’s publication speaks to both the necessity and the difficulty of the question.

I’ve juxtaposed Croly’s meek-looking law clerk mien against George Bellows’s famous boxing painting (unveiled two years before Croly’s book) because there really is no better way to visualize the American individual’s relationship to its lumbering, venal, and often futile government. Croly’s solution is to call for all Americans to be actively engaged in a collaborative and faithful relationship with the nation: “to accept a conception of democracy which provides for the substantial integrity of his country, not only as a nation with an exclusively democratic mission, but as a democracy with an essentially national career.” On its face, this seems like a reasonable proposition. We all wish to belong in a democracy, to maintain fidelity to our country, and to believe that the Lockean social contract in which the state provides for the commonweal is a workable and reasonable quid pro quo. But it is also the kind of orgiastic meat-and-potatoes mantra that led both Kennedy and Reagan to evoke mythical American exceptionalism with the infamous “shining city upon a hill” metaphor. Dulcet words may make us feel better about ourselves and our nation, but we have seen again and again how government inaction on guns and a minimum wage that does not reflect contemporary living standards demands a Black Lives Matter movement and a “fight for $15.” And when one begins to unpack just what Croly wants us to give up for this roseate and wholly unrealistic Faustian bargain, we begin to see someone who may be more of a thoughtful and naive grandstander than a vital conceptual pragmatist.

Croly is right to demand that America operate with a larger administrative organ in place, some highly efficient Hamiltonian body that mitigates against “the evil effects of a loose union.” He smartly points out that such evils as slavery resulted from the American contradictions originating in the strange alliance between our poetic Jeffersonian call for Constitutional democracy and individualistic will and the many strains of populism and nationalism that followed. In his insistence on “the transformation of Hamiltonianism into a thoroughly democratic political principle,” Croly is suspicious of reformers, many of whom he singles out in a manner strikingly similar to Norman Mailer’s “Quick and Expensive Comments on the Talent in the Room.” He calls William Jennings Bryan an “ill conceived” reformer, claims the now nearly forgotten William Travers Jerome to be “lulled into repose” by traditional Jeffersonian democracy (never mind Jerome’s successful crusades against Tammany Hall corruption, regrettably overshadowed by his prosecution of Harry K. Thaw during the Stanford White murder trial), interestingly pegs William Randolph Hearst as someone motivated by endless “proclamation[s] of a rigorous interpretation of the principle of equal rights,” and holds up Teddy Roosevelt as “more novel and more radical” in his calls for a Square Deal than “he himself has probably proclaimed.”

But Croly’s position on reform is quite problematic, deeply unsettling, and often contradictory. He believes that citizens “should be permitted every opportunity to protest in the most vigorous and persistent manner,” yet he states that such protests “must conform to certain conditions” enforced by the state. While we are certainly far removed from the 1910 bombing of the Los Angeles Times building that galvanized the labor movement, as we saw with the appalling free speech cages during the 2004 Republican Convention, muzzling protesters not only attenuated their message but allowed the NYPD to set up traps for the activists, which ensured their arrest and detention — a prototype for the excessive enforcement used to diminish and belittle the Occupy Wall Street movement a few years later. Croly believes that the job of sustaining democratic promise should, oddly enough, be left to legislators and executives granted all the power required and sees state and municipal governments as largely unsuccessful:

The interest of individual liberty in relation to the organization of democracy demands simply that the individual officeholder should possess an amount of power and independence adequate to the efficient performance of his work. The work of a justice of the Supreme Court demands a power that is absolute for its own special work, and it demands technically complete independence. An executive should, as a rule, serve for a longer term, and hold a position of greater independence than a legislator, because his work of enforcing the laws and attending to the business details of government demands continuity, complete responsibility within its own sphere, and the necessity occasionally of braving adverse currents of public opinion. The term of service and the technical independence of a legislator might well be more restricted than that of an executive; but even a legislator should be granted as much power and independence as he may need for the official performance of his public duty. The American democracy has shown its enmity to individual political liberty, not because it has required its political favorites constantly to seek reëlection, but because it has since 1800 tended to refuse to its favorites during their official term as much power and independence as is needed for administrative, legislative, and judicial efficiency. It has been jealous of the power it delegated, and has tried to take away with one hand what it gave with the other.

There is no room for “Act locally, think globally” in Croly’s vision. This is especially ungenerous given the many successful progressive movements that flourished decades after Croly’s death, such as the civil rights movement beginning with local sit-ins and developing into a more cogent and less ragged strain of the destructive Jacksonian populism that Croly rightly calls out, especially in relation to the cavalier obliteration of the Second Bank of the United States and the Nullification Crisis of 1832, which required Henry Clay to clean up Jackson’s despotic absolutism with a compromise. On the Nullification point, Croly identifies Daniel Webster, a man who became treacherously committed to holding the Union together, as “the most eloquent and effective expositor of American nationalism,” who “taught American public opinion to consider the Union as the core and crown of the American political system,” even as he offers a beautifully stinging barb on Webster’s abolitionist betrayal with the 1850 speech endorsing the Fugitive Slave Act: “He was as much terrorized by the possible consequences of any candid and courageous dealing with the question as were the prosperous business men of the North; and his luminous intelligence shed no light upon a question, which evaded his Constitutional theories, terrified his will, and clouded the radiance of his patriotic visions.”

But Croly also promulgates a number of loopy schemes, including making representative legislatures at any level beholden to an executive who is armed with a near tyrannical ability to scuttle laws, even as he claims that voters removing representatives through referendum “will obtain and keep a much more complete and direct control over the making of their laws than that which they have exerted hitherto; and the possible desirability of the direct exercise of this function cannot be disputed by any loyal democrat.” Well, this loyal democrat, immediately summoning Lord Acton’s famous quote, calls bullshit on giving any two-bit boss that kind of absolute power. Because Croly’s baffling notion of “democracy” conjures up the terrifying image of a sea of hands raised in a Bellamy salute. On one hand, Croly believes that a democracy must secure and exercise individual rights, even as he rightly recognizes that, when people exercise these rights, they cultivate the “tendency to divide the community into divergent classes.” On the other hand, he believes that individuals should be kept on a restrictive leash:

[T]hey should not, so far as possible, be allowed to outlast their own utility. They must continue to be earned. It is power and opportunity enjoyed without being earned which help to damage the individual — both the individuals who benefit and the individuals who consent — and which tend to loosen the ultimate social bond. A democracy, no less than a monarchy or an aristocracy, must recognize political, economic, and social discriminations, but it must also manage to withdraw its consent whenever these discriminations show any tendency to excessive endurance. The essential wholeness of the community depends absolutely on the ceaseless creation of a political, economic, and social aristocracy and their equally incessant replacement.

There’s certainly something to be said about how many Americans fail to appreciate the rights that they have. Reminding all citizens of their duty to flex their individual rights may be a very sound idea. (Perhaps one solution to American indifference and political disillusionment is the implementation of a compulsory voting policy with penalties, similar to what goes on in Australia.) But with such a middling door prize handed out at the democratic dance party, why on earth would any individual want to subscribe to the American promise? Aristocrats, by their very nature, wish to hold onto their power and privilege and not let go. Croly’s pact is thus equally unappealing for the struggling individual living paycheck to paycheck, the career politician, or the business tycoon.

Moreover, in addition to opposing the Sherman Antitrust Act, Croly nearly succumbs to total Taylorism in his dismissal of labor unions: “They seek by the passage of eight-hour and prevailing rate-of-wages laws to give an official sanction to the claims of the unions, and they do so without making any attempt to promote the parallel public interest in an increasing efficiency of labor. But these eight-hour and other similar laws are frequently being declared unconstitutional by the state courts, and for the supposed benefit of individual liberty.” Granted, Croly’s words came seven years before the passage of the Adamson Act, the first federal law enforcing a mandatory eight-hour day. But Croly’s failure to see the social benefits of well-rested workers better positioned to exercise their individual liberty for a democratic promise is one of his more outrageous and myopic pronouncements, even as he avers that the conditions that create unrestricted economic opportunities also spawn individual bondage. But if Croly wants Americans to “[keep] his flag flying at any personal cost or sacrifice,” then he really needs to have more sympathy for the travails of the working stiff.

Despite all my complaints, I still believe some 21st century thinker should pick up from Croly’s many points and make an equally ambitious attempt to harmonize Hamilton and Jefferson with more recent developments. American politics has transformed into a cartoonish nightmare from which we cannot seem to escape, one that causes tax absolutist lunatics like Grover Norquist to appear remotely sane. That we are seeing a strange replay of the 1912 election with the 2016 presidential race, with Trump stepping in as an unlikely Roosevelt and Bernie Sanders possibly filling in for Eugene Debs, and that so many Americans covet an “outsider” candidate who will fix a government that they perceive as a broken system speaks to a great need for some ambitious mind to reassess our history and the manner in which we belong to our nation, while also observing the many ways in which Americans come together well outside of the political bear trap. For the American individual is no longer boxing George Bellows-style with her government. She is now engaged in a vicious MMA match unfurling inside a steel cage. Whether this ugly pugilism can be tempered with peace and tolerance is anyone’s guess, but, if we really believe in democracy, the least we can do is try to find some workaround in which people feel once again that they’re part of the process.

Next Up: William Appleman Williams’s The Contours of American History!

In Cold Blood (Modern Library Nonfiction #96)

(This is the fifth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Journalist and the Murderer.)

Truman Capote was a feverish liar and a frenzied opportunist from the first moment his high voice pierced the walls of a literary elite eager to filth up its antimacassars with gossip. He used his looks to present himself as a child prodigy, famously photographed in languorous repose by Harold Halma to incite intrigue and controversy. He claimed to have won national awards for his high school writing that no scholar has ever been able to turn up. He escorted the nearly blind James Thurber to his dalliances with secretaries and deliberately put on Thurber’s socks inside out so that his wife would notice, later boasting that one secretary was “the ugliest thing you’ve ever seen.” Biographer Gerald Clarke chronicled how Capote befriended New Yorker office manager Daise Terry, who was feared and disliked by many at the magazine, because he knew she could help him. (Capote’s tactics paid off. Terry gave him the easiest job on staff: copyboy in the art department.) If Capote wanted to know you, he wanted to use you. But the beginnings of a man willing to do just about anything to get ahead can be found in his early childhood.

Capote’s cousin Jennings Faulk Carter once described young Truman coming up with the idea of charging admission for a circus. Capote had heard a story in the local paper about a two-headed chicken. Lacking the creative talent to build a chicken himself, he enlisted Carter and Harper Lee for this faux poultry con. The two accomplices never saw any of the money. Decades later, Capote would reprise this tactic on a grander scale, earning millions of dollars and great renown for hoisting a literary big top over a small Kansas town after reading a 300-word item about a family murder in The New York Times. Harper Lee would be dragged into this carnival as well.

The tale of how two frightening men murdered four members of the Clutter family for a pittance and created a climate of fear in the surrounding rural area (and later the nation) is familiar to nearly anyone who reads, buttressed by the gritty 1967 film (featuring a pre-The Walking Dead Scott Wilson as Dick Hickock and a pre-Bonnie Lee Bakley-murder Robert Blake as Perry Smith) and a deservedly acclaimed 2005 film featuring the late great Philip Seymour Hoffman as Capote. But what is not so often discussed is the rather flimsy foundation on which this “masterpiece” has been built.

Years before “based on a true story” became a risible cliche, Capote and his publicists framed In Cold Blood‘s authenticity around Capote’s purported accuracy. Yet the book itself contains many gaping holes in which we have only Smith and Hickock’s words, twisted further by Capote. What are we to make of Bill and Johnny — a boy and his grandfather whom Smith and Hickock pick up for a roadside soda bottle-collecting adventure to make a few bucks? In our modern age, we would demand that any competent journalist track these two side characters down, to compare their accounts with those of Smith and Hickock. Capote claims that these two had once lived with the boy’s aunt on a farm near Shreveport, Louisiana, yet no independent party appears to have corroborated their identities. Did Capote (or Hickock and Smith) make them up? Does the episode really contribute to our understanding of the killers’ pathology? One doesn’t need to be aware of the recent DNA test that disproved Hickock and Smith’s involvement in the quadruple murder of the Walker family in Sarasota County, Florida, which took place one month after the Clutter murders, to see that Capote is more interested in holding up the funhouse mirror to impart specious complicity:

Hickock consented to take the [polygraph] test and so did Smith, who told Kansas authorities, “I remarked at the time, I said to Dick, I’ll bet whoever did this must be somebody that read about what happened out here in Kansas. A nut.” The results of the test, to the dismay of Osprey’s sheriff as well as Alvin Dewey, who does not believe in exceptional circumstances, were decisively negative.

Never mind that polygraph tests are inaccurate. It isn’t so much Hickock and Smith’s motivations that Capote was interested in. He was more concerned with stretching a sense of amorphous terror across a wide canvas. As Hickock and Smith await their hangings, they encounter Lowell Lee Andrews in the adjacent cell. He is a fiercely intelligent, corpulent eighteen-year-old boy who fulfilled his dormant dreams of murdering his family, but Capote’s portrait leaves little room for subtlety:

For the secret Lowell Lee, the one concealed inside the shy church going biology student, fancied himself an ice-hearted master criminal: he wanted to wear gangsterish silk shirts and drive scarlet sports cars; he wanted to be recognized as no mere bespectacled, bookish, overweight, virginal schoolboy; and while he did not dislike any member of his family, at least not consciously, murdering them seemed the swiftest, most sensible way of implementing the fantasies that possessed him.

We have modifiers (“shy,” “ice-hearted,” “gangsterish,” “silk,” “scarlet,” “bespectacled,” “bookish,” “virginal,” “swiftest,” and “sensible”) that conjure up a fantasy atop the fantasy, that suggest relativism to the two main heavies, but there is little room for subtlety or for any doubt in the reader’s mind. Capote does bring up the fact that Andrews suffered from schizophrenia, but diminishes this mental illness by calling it “simple” before dredging up the M’Naghten Rule, a standard devised in 1843, well before modern psychiatry existed, and still on the books, which excludes any insanity defense whereby the accused recognizes right from wrong. But he has already tarnished Andrews with testimony from Dr. Joseph Satten: “He considered himself the only important, only significant person in the world. And in his own seclusive world it seemed to him just as right to kill his mother as to kill an animal or a fly.” I certainly don’t want to defend Andrews’s crime (much less the Clutter family murders), but this conveniently pat assessment ignores more difficult and far more interesting questions that Capote lacks the coherence, the empathy, or the candor to pursue, much less the willingness to confess his own contradictions. Many pages before, in relation to Hickock, Capote calls M’Naghten “a formula quite color-blind to any gradations between black and white.” In other words, Capote is the worst kind of journalist: a cherry-picking sensationalist who applies standards as he sees fit, heavily steering the reader’s opinion even as he feigns objectivity. The ethical reader of In Cold Blood in the 21st century wants Katherine Boo to emerge from the future through a wormhole, if only to open up a can of whoopass on Capote for these egregious and often thoughtless indiscretions.

Capote’s decision to remove himself from the crisp, lurid story was commended by many during In Cold Blood‘s immediate reception as a feat of unparalleled objectivity, with the “nonfiction novel” label sticking to the book like a trendy hashtag that hipsters refuse to surrender, but I think Cynthia Ozick described the thorny predicament best in her infamous drive-by on Capote (collected in Art & Ardor): “Essence without existence; to achieve the alp of truth without the risk of the footing.” If we accept any novel — whether “nonfiction” or fully imaginative — as some sinister or benign cousin to the essay, as a reasonably honest attempt to reckon with the human experience through invention, then In Cold Blood is a failure: the work of a man who sat idly in his tony Manhattan spread with cadged notebooks and total recall of aggressively acquired conversations even as his murderous subjects begged their “friend” to help them escape the hangman’s noose.

In 2013, Slate‘s Ben Yagoda described numerous factual indiscretions, revealing that editor William Shawn had penciled in “How know?” on the New Yorker galley proofs of Capote’s four-part opus (In Cold Blood first appeared in magazine form). That same year, the Wall Street Journal uncovered new evidence from the Kansas Bureau of Investigation, which revealed that the KBI did not, upon receiving intelligence from informant Floyd Wells, swiftly dispatch agent Harold Nye to the farmhouse where Richard Hickock had lodged. (“It was as though some visitor were expected,” writes Capote. Expected by Hickock’s father or an author conveniently tampering with his narrative like a subway commuter feverishly filling in a sudoku puzzle?) As Jack de Bellis has observed, Capote’s revisions from New Yorker articles to book form revealed his feeble command of time, directions, and even specific places. But de Bellis’s examination also uncovered more descriptive imprudence, such as Capote shifting a line on how Perry “couldn’t stand” another prisoner to “could have boiled him in oil” (“How know?” we can ask today), along with many efforts to coarsen the language and tweak punctuation for a sensationalist audience.

And then there is the propped-up hero Alvin Dewey, presented by Capote as a tireless investigator who consumes almost nothing but coffee and who loses twenty pounds: a police procedural stereotype if ever there was one. Dewey disputed Capote’s account of him closing his eyes during the execution, and the closing scene of Dewey meeting Nancy Clutter’s best friend, Susan Kidwell, in a cemetery is not only invented but heavily mimics the belabored ending of Capote’s 1951 novel, The Grass Harp. But then “Foxy” Dewey and Capote were tighter than a pair of frisky lovers holed up for a week in a seedy motel.

Capote was not only granted unprecedented access to internal documents, but his papers reveal that Dewey provided him with stage directions in the police interview transcripts. (One such annotation reads “Perry turns white. Looked at the ceiling. Swallows.”) There is also the highly suspect payola of Columbia Pictures offering Dewey’s wife a job as a consultant on the 1967 film for a fairly substantial fee. Harold Nye, another investigator whose contributions have been smudged out of the history, told Charles J. Shields in a December 30, 2002 interview (quoted in Mockingbird), “I really got upset when I know that Al [Dewey] gave them a full set of the reports. That was like committing the largest sin there was, because the bureau absolutely would not stand for that at all. If it would have been found out, he would have been discharged immediately from the bureau.”

In fact, Harold Nye and other KBI agents did much of the footwork that Capote attributes to Dewey. Nye was so incensed by Capote’s prevarications that he read only 115 pages of In Cold Blood before hurling the book across the living room. And in the last few years, the Nye family has been fighting to reveal the details inside two tattered notebooks that contain revelations about the Clutter killings that may drastically challenge Capote’s narrative.

Yet even before this, Capote’s magnum opus was up for debate. In June 1966, Esquire published an article by Phillip K. Tompkins challenging Capote’s alleged objectivity. Tompkins journeyed to Kansas and discovered that Nancy Clutter’s boyfriend was hardly the ace athlete (“And now, after helping clear the dining table of all its holiday dishes, that was what he decided to do — put on a sweatshirt and go for a run.”) that Capote presented him as, that Nancy’s horse was sold for a higher sum to the father of the local postmaster rather than to “a Mennonite farmer who said he might use her for plowing,” and that the undersheriff’s wife disputed Capote’s account:

During our telephone conversation, Mrs. Meier repeatedly told me that she never heard Perry cry; that on the day in question she was in her bedroom, not the kitchen; that she did not turn on the radio to drown out the sound of crying; that she did not hold Perry’s hand; that she did not hear Perry say, ‘I’m embraced by shame.’ And finally – that she had never told such things to Capote. Ms. Meier told me repeatedly and firmly, in her gentle way, that these things were not true.

(For more on Capote’s libertine liberties, see Chapter 4 of Ralph F. Voss’s Truman Capote and the Legacy of In Cold Blood.)

Confronted by these many disgraceful distortions, we are left to ignore the “journalist” and assess the execution. On strictly showboating criteria, In Cold Blood succeeds and captures our imagination, even if one feels compelled to take a cold shower knowing that Capote’s factual indiscretions were committed with a blatant disregard for the truth, not unlike two psychopaths murdering a family because they believed the Clutters possessed a safe brimming with riches. One admires the way that Capote describes newsmen “[slapping] frozen ears with ungloved, freezing hands,” even as one winces at the way Capote plays into patriarchal shorthand when Nye “visits” Barbara Johnson (Perry Smith’s only surviving sister: the other two committed suicide), describing her father as a “real man” who had once “survived a winter alone in the Alaskan wilderness.” The strained metaphor of two gray tomcats — “thin, dirty strays with strange and clever habits” — wandering around Garden City during the Smith-Hickock trial allows Capote to pad out his narrative after he has exhausted his supply of “flat,” “dull,” “dusty,” “austere,” and “stark” to describe Kansas in the manner of some sheltered socialite referencing the “flyover states.” Yet for all these cliches, In Cold Blood contains an inexplicably hypnotic allure, a hold upon our attention even as the book remains aggressively committed to the facile conclusion that the world is populated by people capable of murdering a family over an amount somewhere “between forty and fifty dollars.” As Jimmy Breslin put it (quoted in M. Thomas Inge’s Conversations with Truman Capote), “This Capote steps in with flat, objective, terrible realism. And suddenly there is nothing else you want to read.”

That the book endures — and is even being adapted into a forthcoming “miniseries event” by playwright Kevin Hood — speaks to an incurable gossipy strain in Western culture, one reinforced by the recent success of the podcast Serial and the television series The Jinx. It isn’t so much the facts that fuel our preoccupation with true crime as the sense that we are vicariously aligned with the fallible journalist pursuing the story, whom we can entrust to dig up scandalous dirt as we crack open our peanuts waiting for the next act. If the investigator is honest about her inadequacies, as Serial‘s Sarah Koenig most certainly was, the results can provide breathtaking insight into the manner in which we incriminate other people with our emotional assumptions, our fallible memories, and our superficially examined evidence. But if the “journalist” removes himself from culpability, presenting himself as some demigod beyond question or reproach (Capote’s varying claims about his percentage of total recall certainly feel like some newly wrangled whiz kid bragging about his chops before the knowledge bowl), then the author is not so much a sensitive artist seeking new ways inside the darkest realm of humanity as a crude huckster occupying an outsize stage, waiting to grab his lucrative check and attentive accolades while the real victims of devastation weep over concerns that are far more human and far more deserving of our attention. We can remove the elephants from the lineup, but the circus train still rolls on.

Next Up: The Promise of American Life by Herbert Croly!

The Journalist and the Murderer (Modern Library Nonfiction #97)

(This is the fourth entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: The Taming of Chance.)

One of the mistakes often made by those who immerse themselves in Janet Malcolm’s The Journalist and the Murderer is believing that Jeffrey MacDonald’s guilt or innocence is what matters most. But Malcolm is really exploring how journalistic opportunity and impetuous judgment can lead any figure to be roundly condemned in the court of public opinion. Malcolm’s book was written before the Internet blew apart much of the edifice separating advertising and editorial with native advertising and sponsored articles, but this ongoing ethical dilemma matters ever more in our age of social media and citizen journalism, especially when Spike Lee impulsively tweets the wrong address for George Zimmerman (and gets sued because of the resultant harassment) and The New York Post publishes a front page cover of two innocent men (also resulting in a lawsuit) because Reddit happened to believe they were responsible for the 2013 Boston Marathon bombing.

Yet it is important to approach anything concerning the Jeffrey MacDonald murder case with caution. It has caused at least one documentary filmmaker to go slightly mad. It is an evidential involution that can ensnare even the most disciplined mind, a permanently gravid geyser gushing out books and arguments and arguments about books, with more holes within the relentlessly regenerating mass than the finest mound of Jarlsberg. But here are the underlying facts:

On February 17, 1970, Jeffrey MacDonald reported a stabbing to the military police. Four officers found MacDonald’s wife Colette and their two children, Kimberley and Kristen, all dead in their respective bedrooms. MacDonald went to trial and was found guilty of one count of first-degree murder and two counts of second-degree murder. He was sentenced to three life sentences. Only two months before this conviction, MacDonald had hired the journalist Joe McGinniss — the author of The Selling of the President 1968, then looking for a comeback — to write a book about the case, under the theory that any money generated by MacDonald’s percentage could be used to seed a defense fund. MacDonald placed total trust in McGinniss, opening the locks to all his papers and letting him stay in his condominium. McGinniss’s book, Fatal Vision, was published in the spring of 1983. It was a bestseller and spawned a popular television miniseries, largely because MacDonald was portrayed as a narcissist and a sociopath, fitting the entertainment needs of a bloodthirsty public. MacDonald didn’t know the full extent of this depiction. Indeed, as he was sitting in jail, McGinniss refused to send him a galley or an advance copy. (“At no time was there ever any understanding that you would be given an advance look at the book six months prior to publication,” wrote McGinniss to MacDonald on February 16, 1983. “As Joe Wambaugh told you in 1975, with him you would not even see a copy before it was published. Same with me. Same with any principled and responsible author.” Malcolm copiously chronicles the “principled and responsible” conduct of McGinniss, which includes speaking with MacDonald in misleading and ingratiating tones, often pretending to be a friend — anything to get MacDonald to talk.)

[Image: Mike Wallace and Jeffrey MacDonald]

On 60 Minutes, around the time of the book’s publication, Mike Wallace revealed to MacDonald what McGinniss was up to:

Mike Wallace (narrating): Even government prosecutors couldn’t come up with a motive or an explanation of how a man like MacDonald could have committed so brutal a crime. But Joe McGinniss thinks he’s found the key. New evidence he discovered after the trial. Evidence he has never discussed with MacDonald. A hitherto unrevealed account by the doctor himself of his activities in the period just before the murders.

Joe McGinniss: In his own handwriting, in notes prepared for his own attorneys, he goes into great detail about his consumption of a drug called Eskatrol, which is no longer on the market. It was voluntarily withdrawn in 1980 because of dangerous side effects. Among the side effects of this drug are, when taken to excess by susceptible individuals, temporary psychosis, often manifested as a rage reaction. Here we have somebody under enormous pressure and he’s taking enough of this Eskatrol, enough amphetamines, so that by his own account, he’s lost 15 pounds in the three weeks leading up to the murders.

Wallace: Now wait. According to the note which I’ve seen, three to five Eskatrol he has taken. We don’t know if he’s taken it over a period of several weeks or if he’s taken three to five Eskatrol a day or a week or a month.

McGinniss: We do know that if you take three to five Eskatrol over a month, you’re not going to lose 15 pounds in doing so.

Jeffrey MacDonald: I never stated that to anyone and I did not in fact lose fifteen pounds. I also wasn’t taking Eskatrol.

Wallace (reading MacDonald’s note): “We ate dinner together at 5:45 PM. It is possible I had one diet pill at this time. I do not remember and do not think I had one. But it is possible. I had lost 12 to 15 pounds in the prior three to four weeks in the process, using three to five capsules of Eskatrol Spansule. I was also…”

MacDonald: Three to five capsules for the three weeks.

Wallace: According to this.

MacDonald: Right.

Wallace: According to this.

MacDonald: And that’s a possibility.

Wallace: Then why would you put down here that…that there was even a possibility?

MacDonald: These are notes given to an attorney, who has told me to bare my soul as to any possibility so we could always be prepared. So I…

Wallace: Mhm. But you’ve already told me that you didn’t lose 15 pounds in the three weeks prior…

MacDonald: I don’t think that I did.

Wallace: It’s in your notes. “I had lost 12-15 lbs. in the prior 3-4 weeks, in the process using 3-5 capsules of Eskatrol Spansules.” That’s speed. And compazine. To counteract the excitability of speed. “I was losing weight because I was working out with a boxing team and the coach told me to lose weight.” — 60 Minutes

One of McGinniss’s exclusive contentions was that MacDonald had murdered his family because he was high on Eskatrol. Or, as he wrote in Fatal Vision:

It is also fact that if Jeffrey MacDonald were taking three to five Eskatrol Spansules daily, he would have been consuming 75 mg. of dextroamphetamine — more than enough to precipitate an amphetamine psychosis.

Note the phrasing. McGinniss does not know for a fact whether MacDonald took three to five Eskatrol daily, and MacDonald himself is uncertain: both men prevaricate enough to summon the justifiably hot and bothered Mike Wallace and his grilling. Yet McGinniss establishes the possibility as fact. Pure speculation becomes a varnished truth, one meant to prop up McGinniss’s melodramatic thesis.

* * *

Malcolm was sued for libel by Jeffrey Masson over her depiction of him in her book, In the Freud Archives. In The Journalist and the Murderer, she has called upon all journalists to feel “some compunction about the exploitative character of the journalist-subject relationship,” yet she claims, in the book’s afterword, that her own lawsuit was not the book’s driving force. Yet even Malcolm, a patient and painstaking practitioner, could not get every detail of MacDonald’s appearance on 60 Minutes right:

As Mike Wallace — who had received an advance copy of Fatal Vision without difficulty or a lecture — read out loud to MacDonald passages in which he was portrayed as a psychopathic killer, the camera recorded his look of shock and utter discomposure.

Wallace was reading MacDonald’s own notes to his attorney back to him, not McGinniss’s book. These were not McGinniss’s passages in which MacDonald was “portrayed as a psychopathic killer,” but passages from MacDonald’s own words that attempted to establish his Eskatrol use. Did Malcolm have, in 1990, a transcript of the 60 Minutes segment that is now readily available online? Or is it possible that MacDonald’s notes to his attorney had fused so perfectly with McGinniss’s book that the two became indistinguishable?

This raises important questions over whether any journalist can ever get the facts entirely right, no matter how fair-minded the intentions. It is one thing to be the hero of one’s own story, but it is quite another to know that, even if she believes herself to be morally or factually in the clear, the journalist is doomed to twist the truth to serve her purposes.

It obviously helps to be transparent about one’s bias. At one point in The Journalist and the Murderer, Malcolm is forthright enough to confess that she is struck by MacDonald’s physical grace as he breaks off pieces of tiny powdered sugar doughnuts. This is the kind of observational detail often inserted into lengthy celebrity profiles to “humanize” a Hollywood actor uttering the same calcified boilerplate rattled off to every roundtable junketeer. But if such a flourish is fluid enough to apply to MacDonald, we are left to wonder how Malcolm’s personal connection interferes with her purported journalistic objectivity. In the same paragraph, Malcolm neatly notes the casual abuse MacDonald received in his mailbox after McGinniss’s book was published — in particular, a married couple who read Fatal Vision on vacation and took the time to write a hateful letter while sunbathing at the Sheraton Waikiki Hotel. This casual cruelty illustrates how the reader can be just as complicit as the opportunistic journo in perpetuating an incomplete or slanted portrait.

The important conundrum that Malcolm imparts in her short and magnificently complicated volume is why we bother to read or write journalism at all if we know the game is rigged. The thorny morality can extend to biography (Malcolm’s The Silent Woman is another excellent book which sets forth the inherent and surprisingly cyclical bias in writing about Sylvia Plath). And even when the seasoned journalist is aware of ethical discrepancies, the judgmental pangs will still crop up. In “A Girl of the Zeitgeist” (contained in the marvelous collection, Forty-One False Starts), Malcolm confessed her own disappointment in how Ingrid Sischy failed to live up to her preconceptions as a bold and modern woman. Malcolm’s tendentiousness may very well be as incorrigible as McGinniss’s, but is it more forgivable because she’s open about it?

* * *

It can be difficult for Janet Malcolm’s most ardent advocates to detect the fine grains of empathy carefully lining the crisp and meticulous forms of her svelte and careful arguments, which are almost always sanded against venal opportunists. Malcolm’s opponents, who have recently included Esquire’s Tom Junod, Errol Morris, and other middling men who are inexplicably intimidated by women who are smarter, have attempted to paint Malcolm as a hypocrite, an opportunist, and a self-loathing harpy of the first order. Junod wrote that “it’s clear to anyone who reads her work that very few journalists are more animated by malice than Janet Malcolm” and described her as “a self-hater whose work has managed to speak for the self-hatred” of journalism. Yet Junod cannot cite any examples of this self-hate and malice, save for the purported Henny Youngman-like sting of her one-liners (Malcolm is not James Wolcott; she is considerably more thoughtful and interesting) and for pointing out, in Iphigenia in Forest Hills, how trials “offer unique opportunities for journalistic heartlessness,” failing to observe how Malcolm pointed out how words or evidence lifted out of context could be used to condemn or besmirch the innocent until proven guilty (and owning up to her own biases and her desire to interfere).

Malcolm is not as relentless as her generational peer Renata Adler, but she is just as refreshingly formidable. She is as thorough with her positions and almost as misunderstood. She has made many prominent enemies for her controversial positions — even fighting a ten-year legal battle against Jeffrey Masson over the authenticity of his quotations (a suit initially dismissed by a federal judge in California on the grounds that there was an absence of malice). Adler was ousted from The New Yorker, but Malcolm was not. In the last few years, both have rightfully found renewed attention among a new generation of readers.

One origin for the anti-Malcolm assault is John Taylor’s 1989 New York Magazine article, “Holier than Thou,” which is perhaps singularly responsible for making it mandatory for any mention of The Journalist and the Murderer to include its infamous opening line: “Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible.” Taylor excoriated Malcolm for betraying McGinniss as a subject, dredged up the Masson claims, and claimed that Malcolm used Masson much as McGinniss had used MacDonald. It does not occur to Taylor that Malcolm herself may be thoroughly familiar with what went down and that the two lengthy articles which became The Journalist and the Murderer might indeed be an attempt to reckon with the events that caused the fracas:

“Madame Bovary, c’est moi,” Flaubert said of his famous character. The characters of nonfiction, no less than those of fiction, derive from the writer’s most idiosyncratic desires and deepest anxieties; they are what the writer wishes he was and worries that he is. Masson, c’est moi.

Similarly, Evan Hughes had difficulty grappling with this idea, caviling over the “bizarre stance” of Malcolm not wanting to be “oppressed by the mountain of documents that formed in my office.” He falsely infers that Malcolm has claimed that “it is pointless to learn the facts to try to get to the bottom of a crime,” not parsing Malcolm’s clear distinction between evidence and the journalist’s ineluctable need to realize characters on the page. No matter how faithfully the journalist sticks with the facts, a journalistic subject becomes a character because the narrative exigencies demand it. Errol Morris can find Malcolm’s stance “disturbing and problematic” as much as he likes, but he is the one who violated the journalistic taboo of paying subjects for his 2008 film, Standard Operating Procedure, without full disclosure. One of Morris’s documentary subjects, Joyce McKinney, claimed that she was tricked into giving an interview for what became Tabloid, alleging that one of Morris’s co-producers broke into her home with a release form. Years before Morris proved triumphant in an appellate court, he tweeted:

The notion of something “unvarnished” attached to a personal account may have originated with Shakespeare:

And therefore little shall I grace my cause
In speaking for myself. Yet, by your gracious patience,
I will a round unvarnished tale deliver
Of my whole course of love. What drugs, what charms,
What conjuration and what mighty magic—
For such proceeding I am charged withal—
I won his daughter.
Othello, Act 1, Scene 3

Othello hoped that in telling “a round unvarnished tale,” he would be able to come clean with Brabantio over why he had eloped with the senator’s daughter Desdemona. He wished to be straightforward. It’s an extremely honorable and heartfelt gesture that has us very much believing in Othello’s eloquence. Othello was very lucky not to be speaking with a journalist, who surely would have used his words against him.

Next Up: Truman Capote’s In Cold Blood!

The Taming of Chance (Modern Library Nonfiction #98)

(This is the third entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Operating Instructions.)

In the bustling beginnings of the twentieth century, the ferociously independent mind who forever altered the way in which we look at the universe was living in poverty.* His name was Charles Sanders Peirce and he’d anticipated Heisenberg’s uncertainty principle by a few decades. In 1892, Peirce examined what he called the doctrine of necessity, which held that every single fact of the universe was determined by law. Before Peirce came along, there were several social scientists determined to find laws in everything — whether it be an explanation for why you parted your hair at a certain angle with a comb, felt disgust towards specific members of the boy band One Direction, or ran into an old friend at a restaurant one hundred miles away from where you both live. Peirce declared that absolute chance — that is, spontaneity or anything we cannot predict before an event, such as the many fish that pelted the heads of puzzled citizens in Shasta County, California on a January night in 1903 — is a fundamental part of the universe. He concluded that even the careful rules discovered by scientists only come about because, to paraphrase Autolycus from The Winter’s Tale, although humans are not always naturally honest, chance sometimes makes them so.

The story of how Peirce’s brave stance was summoned from the roiling industry of men with abaci and rulers is adeptly set forth in Ian Hacking’s The Taming of Chance, a pleasantly head-tingling volume that I was compelled to read twice to ken the fine particulars. It’s difficult to articulate how revolutionary this idea was at the time, especially since we now live in an epoch in which much of existence feels preordained by statistics. We have witnessed Nate Silver’s demographic models anticipate election results and, as chronicled in Moneyball, player performance analysis has shifted the way in which professional baseball teams select their roster and steer their lineup into the playoffs, adding a strange computational taint that feels as squirmy as performance enhancing drugs.

But there was a time in human history in which chance was considered a superstition of the vulgar, even as Leibniz, seeing that a number of very smart people were beginning to chatter quite a bit about probability, argued that the true measure of a Prussian state resided in how you tallied the population. Leibniz figured that if Prussia had a central statistics office, it would not only be possible to gauge the nation’s power but perhaps also to derive certain laws and theories about the way these resources worked.

This was obviously an idea that appealed to chin-stroking men in power. One does not rule an empire without keeping the possibility of expansion whirling in the mind. It didn’t take long for statistics offices to open and enthusiasts to start counting heads in faraway places. (Indeed, much like the early days of computers, the opening innovations originated from amateurs and enthusiasts.) These early statisticians logged births, deaths, social status, the number of able-bodied men who might be able to take up weapons in a violent conflict, and many other categories suggested by Leibniz (and others that weren’t). And they didn’t just count in Prussia. In 1799, Sir John Sinclair published a 21 volume Statistical Account of Scotland that undoubtedly broke the backs of many of the poor working stiffs who were forced to carry these heavy tomes to the guys determined to count it all. Some of the counters became quite obsessive in their efforts. Hacking reports that Sinclair, in particular, became so sinister in his efforts to get each minister of the Church of Scotland to provide a detailed congregation schedule that he began making threats shrouded in a jocose tone. Perhaps the early counters needed wild-eyed dogged advocates like Sinclair to establish an extremely thorough baseline.

The practice of heavy-duty counting resulted, as Hacking puts it, in a bona-fide “avalanche of numbers.” Yet the intersection of politics and statistics created a considerable fracas. Hacking describes the bickering and backbiting that went down in Prussia. What was a statistical office? Should we let the obsessive amateurs run it? Despite all the raging egos, bountiful volumes of data were published. And because there was a great deal of paper being shuffled around, cities were compelled by an altogether different doctrine of necessity to establish central statistical hubs. During the 1860s, statistical administrations were set up in Berlin, New York, Stockholm, Vienna, Rome, Leipzig, Frankfurt-am-Main, and many other cities. But from these central offices emerged an East/West statistics turf war, with France and England playing the role of Biggie on the West and Prussia as Tupac on the East. The West believed that a combination of individual competition and natural welfare best served society, while the East created the welfare state to solve these problems. And these attitudes, which Hacking is good enough to confess as caricaturish even as he illustrates a large and quite important point, affected the way in which statistics were perceived. If you believe in a welfare state, you’re probably not going to see laws forged from the printed numbers, because numbers are all about individual action. And if you believe in the Hobbesian notion of free will, you’re going to look for statistical laws in the criminal numbers, because laws are formed by individuals. This created new notions of statistical fatalism. It’s worth observing that science at the time was also expected to account for morality.

Unusual experiments ensued. What, for example, could the chest circumference of a Scotsman tell us about the stability of the universe? (Yes, the measurement of Scottish chests was seriously considered by a Belgian guy named Adolphe Quetelet, who was trying to work out theories about the average man. When we get to Stephen Jay Gould’s The Mismeasure of Man several years from now, #21 in the Modern Library Nonfiction canon, I shall explore more pernicious measurement ideas promulgated as “science.” Stay tuned!) More nefariously, if you could chart the frequency of how often the working classes called in sick, perhaps you could establish laws to determine who was shirking duty, track the unruly elements, and punish the agitators interfering with the natural law. (As we saw with William Lamb Melbourne’s story, the British government was quite keen to crack down on trade unions during the 1830s. So just imagine what a rabid ideologue armed with a set of corrupted and unproven “laws” could do. In fact, we don’t even have to jump that far back in time. Aside from the obvious Hollerith punch card example, one need only observe the flawed radicalization model presently used by the FBI and the DHS to crack down on Muslim “extremists.” Arun Kundnani’s recent book, The Muslims Are Coming, examines this issue further. And a future Bat Segundo episode featuring Kundnani will discuss this dangerous approach at length.)

Throughout all these efforts to establish laws from numbers (Newton’s law of gravity had inspired a league of scientists to seek a value for this new G constant, a process that took more than a century), Charles Babbage, Johann Christian Poggendorf, and many others began publishing tables of constants. It is one thing to publish atomic weights. It is quite another to measure the height, weight, pulse, and breath of humans by gender and ethnicity (along with animals). The latter constant sets are clearly not as objective as Babbage would like to believe. And yet the universe does adhere to certain undeniable principles, especially when you have a large data set.

It took juries for mathematicians to understand how to reconcile large numbers with probability theory. In 1808, Pierre-Simon Laplace became extremely concerned with the French jury system. At the time, twelve-member juries convicted an accused citizen by a simple majority. He calculated that a seven-to-five majority had a chance of error of one in three. The French code had adopted the unusual method of creating a higher court of five judges to step in if there was a disagreement with a majority verdict in the lower court. In other words, if the majority of the judges in the higher court agreed with the minority of jurors in the lower court that an accused person should be acquitted, then the accused person would be acquitted. Well, this complicated system bothered Laplace. Accused men often faced execution in the French courts. So if there was a substantial chance of error, then the system needed to be reformed. Laplace began to consider juries composed of different sizes and verdicts ranging from total majority (12:0) to partial majority (9:3, 8:4), and he computed the following odds (which I have reproduced from a very helpful table in Hacking’s book):

[Table: Laplace’s calculated chances of jury error for various majority verdicts, reproduced from Hacking]

The problems here become self-evident. You can’t have 1,001 people on a jury arguing over the fate of one man. On the other hand, you can’t have a 2/7 chance of error with a jury of twelve. (One of Laplace’s ideas was a 144-member jury delivering a 90:54 verdict. This involved a 1/773 chance of error. But that’s nowhere near as extreme as the Russian mathematician M.V. Ostrogradsky, who wasted much ink arguing that a 212:200 majority was more reliable than a 12:0 verdict. Remember all this the next time you receive a jury duty notice. Had some of Laplace’s understandable concerns been more seriously considered, there’s a small chance that societies could have adopted larger juries in the interest of a fair trial.)
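Laplace’s arithmetic is easy to replay in code. The sketch below is a reconstruction under stated assumptions, not a transcription of Hacking’s table: it supposes each juror is independently correct with some unknown probability p, places a uniform prior on p between 1/2 and 1 (one of the priors Laplace entertained), and computes the posterior probability that the majority verdict is wrong. A standard identity turns the resulting beta integrals into a simple binomial tail.

```python
from math import comb

def jury_error(majority: int, minority: int) -> float:
    """Posterior probability that a majority verdict is wrong.

    Model (a Laplace-style assumption, not Hacking's exact table):
    each juror is independently correct with probability p, where p
    has a uniform prior on [1/2, 1]. The ratio of incomplete beta
    integrals collapses, via a standard identity, to a binomial tail.
    """
    n = majority + minority
    # I_{1/2}(majority+1, minority+1) = P(Binomial(n+1, 1/2) >= majority+1)
    tail = sum(comb(n + 1, j) for j in range(majority + 1, n + 2))
    return tail / 2 ** (n + 1)

print(jury_error(7, 5))    # ~0.29: nearly a 1-in-3 chance the jury is wrong
print(jury_error(12, 0))   # 1/8192 for a unanimous twelve
print(jury_error(90, 54))  # a fraction of a percent, near Laplace's 1/773
```

The figures quoted from Hacking (1/3, 1/773) differ slightly from this model’s output, since Laplace tried more than one set of assumptions, but the shape of the problem — and the absurdity of trading a 12:0 verdict for a 212:200 one — comes through.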

French law eventually changed the minimum conviction from 7:5 to 8:4. But it turned out that there was a better method to allow for a majority jury verdict. It was a principle that extended beyond mere frequency and juror reliability, taking into account Bernoulli’s ideas on drawing black and white balls from an urn to determine a probability value. It was called the law of large numbers. And the great thing is that you can observe this principle in action through a very simple experiment.
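Bernoulli’s urn can be simulated directly. A minimal sketch, with a made-up true fraction of white balls (0.3 is purely illustrative) and Python’s random module standing in for the urn:

```python
import random

def estimate_white_fraction(n_draws: int, true_fraction: float = 0.3,
                            seed: int = 2) -> float:
    """Draw n_draws balls with replacement from an urn whose true
    white-ball fraction is true_fraction; return the observed fraction."""
    rng = random.Random(seed)
    white = sum(rng.random() < true_fraction for _ in range(n_draws))
    return white / n_draws

for n in (10, 100, 10_000, 1_000_000):
    print(n, estimate_white_fraction(n))
```

Small samples wander all over; by a million draws the estimate sits within a few tenths of a percent of the truth, which is Bernoulli’s theorem in miniature.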

Here’s a way of seeing the law of large numbers in action. Take a quarter and flip it. Write down whether the results are heads or tails. Do it again. Keep doing this and keep a running tally of how many times the outcome is heads and how many times the coin comes up tails. For readers who are too lazy to try this at home, I’ve prepared a video and a table of my coin toss results:

[Video and table: my coin toss results]

The odds on a fair coin toss are 1:1: on average, the coin will turn up heads 50% of the time and tails 50% of the time. As you can see, while my early tosses leaned heavily towards heads, by the time I had reached the eighteenth toss, my results had drifted closer to 1:1 (in this case, 5:4), just as the law of large numbers predicts. Had I continued tossing, the running ratio would almost surely have settled ever closer to 1:1.
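The same experiment can be replayed in code without a sore thumb. A minimal simulation, assuming a fair coin:

```python
import random

def running_heads_fraction(n_flips, seed=0):
    """Flip a fair coin n_flips times and record the running
    fraction of heads after each flip."""
    rng = random.Random(seed)
    heads = 0
    fractions = []
    for i in range(1, n_flips + 1):
        heads += rng.random() < 0.5   # True counts as 1 (a head)
        fractions.append(heads / i)
    return fractions

fracs = running_heads_fraction(100_000)
print(fracs[0], fracs[99], fracs[-1])
```

The first flip is maximally lopsided, since the running fraction can only be 0 or 1; by the hundred-thousandth flip it hugs 0.5.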

[Image: a working replica of a Galton box]

The law of large numbers offered the solution to Laplace’s predicament. It also accounts for the mysterious picture at the head of this essay. That image is a working replica of a Galton box (also known as a quincunx). (If you’re ever in Boston, go to the Museum of Science and you can see a very large working replica of a Galton box in action.) Sir Francis Galton needed a very visual method of showing off the central limit theorem. So he designed a box, not unlike a pachinko machine, in which beans are dropped from the top and work their way down through a series of wooden pins, which cause them to fall along a random path. Most of the beans land in the center. Drop more beans and you will see a natural bell curve form, illustrating the law of large numbers and the central limit theorem.
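Galton’s box is likewise a one-screen simulation: each bean bounces left or right at every row of pins, so its final bin is simply the number of rightward bounces, and the bins fill out a binomial curve that approximates the normal distribution. A sketch:

```python
import random

def galton_box(n_beans=10_000, n_rows=12, seed=1):
    """Drop n_beans through n_rows of pins; at each pin a bean
    bounces right with probability 1/2. Returns the bin counts."""
    rng = random.Random(seed)
    bins = [0] * (n_rows + 1)
    for _ in range(n_beans):
        rightward = sum(rng.random() < 0.5 for _ in range(n_rows))
        bins[rightward] += 1
    return bins

bins = galton_box()
for i, count in enumerate(bins):
    print(f"{i:2d} {'#' * (count // 50)}")  # crude sideways bell curve
```

Printing each bin as a row of # characters yields a rough sideways bell curve, with the middle bins longest — the central limit theorem in pachinko form.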

Despite all this, there was still the matter of statistical fatalism to iron out, along with an understandable distrust of statistics among artists and the general population, which went well beyond Disraeli’s infamous “There are three kinds of lies: lies, damned lies, and statistics” quote. Hacking is a rigorous enough scholar to reveal how Dickens, Dostoevsky, and Balzac were skeptical of utilitarian statistics. They had every reason to be, given how heavily philosophers leaned on determinism. (See also William James’s “The Dilemma of Determinism.”) Balzac, in particular, delved into “conjugal statistics” in his Physiology of Marriage to deduce the number of virtuous women. A German philosopher named Ernst Cassirer was a big determinism booster, pinpointing its beginnings in 1872. Hacking challenges Cassirer by pointing out that determinism had incorporated the doctrine of necessity earlier, in the 1850s, an important distinction in returning to Peirce’s idea of absolute chance.

I’ve been forced to elide a number of vital contributors to probability and some French investigations into suicide in an attempt to convey Hacking’s intricate narrative. But the one word that made Peirce’s contributions so necessary was “normality.” This was the true danger of statistical ideas being applied to the moral sciences. When “normality” became the ideal, it was greatly desirable to extirpate anything “abnormal” or “aberrant” from the grand human garden, even though certain crime rates were indeed quite normal. We see similar zero tolerance measures practiced today by certain regressive members of law enforcement or, more recently, in New York Mayor Bill de Blasio’s impossible pledge to rid New York City of all traffic deaths by 2024. As the law of large numbers and Galton’s box demonstrate, some statistics are inevitable. Yet it was also important for Peirce to deny the doctrine of necessity. Again, as Peirce pointed out, without chance we could not have had all these laws in the first place.

It was strangely comforting to learn that, despite all the nineteenth century innovations in mathematics and probability, chance remains very much a part of life. Yet when one begins to consider stock market algorithms (and the concomitant flash crashes), as well as our collective willingness to impart voluminous personal data to social media companies who are sharing these numbers with other data brokers, I cannot help but ponder whether we are willfully submitting to another “law of large numbers.” Chance may favor the prepared mind, as Pasteur once said. So why court predictability?

* Peirce’s attempts to secure academic employment and financial succor were thwarted by a Canadian scientist named Simon Newcomb. (A good overview of the correspondence between the two men can be found at the immensely helpful “Peirce Gateway” website.)

Next Up: Janet Malcolm’s The Journalist and the Murderer!

Operating Instructions (Modern Library Nonfiction #99)

(This is the second entry in The Modern Library Nonfiction Challenge, an ambitious project to read and write about the Modern Library Nonfiction books from #100 to #1. There is also The Modern Library Reading Challenge, a fiction-based counterpart to this list. Previous entry: Melbourne.)

It is easy to forget, as brave women document their battles with cancer and callous columnists bully them for their candor, that our online confessional age didn’t exist twenty years ago. I suspect this collective amnesia is one of the reasons why Anne Lamott’s Operating Instructions — almost an urtext for mommy blogs and much of the chick lit that followed — has been needlessly neglected by snobbish highbrow types, even when hungry young writers rushed to claim transgressive land in the Oklahoma LiveJournal Run of 2006.

Lamott’s book, which is a series of honed journal entries penned from the birth of her son Sam to his first birthday, was ignored by the New York Times Book Review upon its release in 1993 (although Ruth Reichl interviewed her for the Home & Garden section after the book, labeled “an eccentric baby manual” by Reichl, became a bestseller). Since then, aside from its distinguished inclusion on the Modern Library list, it has not registered a blip among those who profess to reach upward. Yet if we can accept Karl Ove Knausgaard’s honesty about fatherhood in the second volume of his extraordinary autobiographical novel, My Struggle, why then do we not honor Anne Lamott? It is true that, like Woody Allen in late career, Lamott has put out a few too many titles. It is also true that she attracts a la