Tag Archives: book reviews

The death of literary criticism?

Slate’s Jacob Silverman is worried about the Internet literary community’s impact on critics’ ability to be honest:

Reviewers shouldn’t be recommendation machines, yet we have settled for that role, in part because the solicitous communalism of Twitter encourages it. Our virtue over the algorithms of Amazon and Barnes & Noble, and the amateurism (some of it quite good and useful) of sites like GoodReads, is that we are professionals with shaded, informed opinions. We are paid to be skeptical, even pugilistic, so that our enthusiasms count for more when they’re well earned. Today’s reviewers tend to lionize the old talk-show dustups between William F. Buckley and Gore Vidal or Noam Chomsky (the videos are on YouTube), but they’re unwilling to engage in that kind of intellectual combat themselves. They praise the bellicosity of Norman Mailer and Pauline Kael, but mostly from afar. Mailer and Kael are your rebellious high school friends: objects of worship, perhaps, but not emulation. After all, it’s all so messy, and someone might get hurt.

Instead, cloying niceness and blind enthusiasm are the dominant sentiments. As if mirroring the surrounding culture, biting criticism has become synonymous with offense; everything is personal—one’s affection for a book is interchangeable with one’s feelings about its author as a person. Critics gush in anticipation for books they haven’t yet read; they <3 so-and-so writer, tagging the author’s Twitter handle so that he or she knows it, too; they exhaust themselves with outbursts of all-caps praise, because that’s how you boost your follower count and affirm your place in the back-slapping community that is the literary web. And, of course, critics, most of them freelance and hungry for work, want to appeal to fans and readers as well; so to connect with them, they must become them.

Not that there aren’t exceptions.

A reflection

It feels somehow appropriate that it is here, in the Mission district of San Francisco, that my writer’s block has finally begun to recede. For several weeks now, ever since I typed the last sentence of my fiftieth book review of the year, words have eluded me, the year-long jackhammering of my fingertips giving way to anxious table-tapping. Muddy’s Coffee House, at 1304 Valencia, is proving to be my long-awaited antidote, much as countless cafes and bars within walking distance have provided a safe haven for yesteryear’s beatniks and the poets of today.

I am neither beatnik nor poet. I am, however, an Excel whiz: I create sales plans for an online company in New York, and I’m in the Golden State merely on business. But after reading Gregory Dicum’s recent feature in The New York Times, “A Book Lover’s San Francisco,” and encountering a good friend’s boundless enthusiasm at the news of my trip to the West Coast, I decided a sign was a sign. Immediately after completing work today, I pointed my rental car, a Chevy Aveo with all the horsepower of a kitchen blender, in the direction of I-280 and my first-ever foray into the City by the Bay.

Although it has come to an end in San Francisco, mine is a literary journey that began last New Year’s Eve in Hong Kong, as I stood with my girlfriend atop the IFC mall to await the celebratory fireworks. She asked me if I had a New Year’s resolution. I’d always managed to steer clear of such reckless abandon in the past and, in retrospect, I blame the bitterly whipping wind and cacophonic house music emanating from the rooftop bar for my anomalous response: “I want to read fifty books this year.”

What soon followed was a rapidly growing stack of books that started with SuperFreakonomics and ended with Animal Spirits, swallowing over ten months and forty-eight books in between. To keep myself committed, I started a blog and reviewed each book as I read it, praising some, excoriating others, and – when hungry, tired or bored – barely devoting four paragraphs each to the rest. If, as some claim, a year is best measured in books, it seems I’d learned that lesson at long last. Other lessons, however, proved harder to grasp. Among axioms of literature, “reading a book is a journey” springs immediately to mind, a trope as true as it is clichéd. Yet my always-looming year-end goal rendered me the journeying equivalent of the five-year-old in the backseat, wondering, “Are we there yet?”

And so it seemed to me, just as to that precocious (hypothetical) five-year-old, that I never was. As the year progressed and the initial fever pitch of my reading pace gradually ceded ground to work and procrastination, the practicalities of finding time just as subtly began to assert themselves. I decided, via executive fiat, to start reading shorter books. Cut out the dry non-fiction. Embrace short-story collections. These and other considerations crowded out my personal preferences, sacrificing the lengthy luxury of Jonathan Franzen’s 562-page Freedom and the satisfaction of Tolstoy’s War and Peace in favor of the immutable fifty-book bottom line.

Somewhere along the way, I became aware of the inevitable creeping sensation that my New Year’s resolution had shed its virgin luster. Where before was the refrain “only twenty-five left to go!” there now remained only a sulking “eight left until I’m finally done with this stupid thing.” The blog, too, had become a chore. The whole endeavor was feeling, quite uncomfortably, more and more like school.

This is not to say that the occasional book didn’t capture my imagination. Some certainly did, from Olga Grushin’s surrealist portrait of a declining Soviet Union in The Dream Life of Sukhanov to Michael Lewis’ hilarious recounting of Wall Street’s outsiders in The Big Short to Grégoire Bouillier’s self-psychoanalysis in his endlessly relatable memoir The Mystery Guest, and many more besides. But the act of institutionalizing my reading stripped the written word of one of its most potent weapons: the ability to fully immerse a reader into a world of the author’s creation. With a ticking clock as the omnipresent soundtrack, my suspension of disbelief was relegated to intermittent moments of reading, often lost amongst the more numerous minutes spent fretting over my remaining schedule.

While this may read like a cautionary tale against setting numeric goals for book reading, it’s actually something a little different: a suggestion to aim high but to learn to be satisfied with a less-than-100% success rate. Which is why, even as I celebrated the dissolution of my writer’s block in San Francisco, I suppose I’ll just have to accept that I didn’t finish this essay until now, back in New York.

Second-half review

So, j’ai fini. The fifty-book challenge can finally, and mercifully, be laid to rest — not that I didn’t enjoy it, because I most certainly did. (And I’ll get to that in a later post: the ups, the downs, the profound life lessons learned. Things like that. Hint: purchases of $25 or more on Amazon.com get free shipping. This was crucial in making the fifty-book challenge less challenging financially.) It’s strange: these days I’m reading Freedom by Jonathan Franzen, a lengthy novel I’d been putting off forever, and there’s absolutely no deadline for its completion. I’d almost forgotten what it was like to read without even a slightly gnawing sensation of panic.

Anyway, in adhering to tradition (by which I mean my solitary midpoint recap), please allow me to dole out awards for the best and worst in fiction and non-fiction among my last twenty-five books. But first, a few statistics. On the year, I read thirty-five works by males and fifteen by females. (Believe it or not, this ratio actually improved in the second half of my challenge, with sixteen books by men and nine by women. I am ashamed. In my defense, most of my selections were culled directly from major publications’ book review sections, which are overwhelmingly biased towards male authors.) And although I considerably improved my fiction exposure (fifteen of my last twenty-five books, or sixteen if one counts the hopelessly naive polemic by Roger D. Hodge, The Mendacity of Hope), I ended the challenge with an even split between fiction and non-, with twenty-five books apiece. I would go into further detail — divisions by nationality, book length in pages, median year published, etc. — but that would only serve to depress me and, even worse, would require actual research, which (as anyone who’s kept up with this blog should know by now) is the bane of my passively critiquing online existence.

Onward, then.

Best Non-Fiction Book: The Mystery Guest, by Grégoire Bouillier

This feels a little like cheating. As a memoir, The Mystery Guest hovers somewhere between the realms of fiction (from which all memoirs take their cues) and fact (to which all memoirs purportedly aspire). But while the genre is ambiguous, the quality of the story, and the depth of feeling it achieves, is anything but. Grégoire Bouillier manages to capture, in the space of a tidy little book with a very skinny spine, the inner psychotic that rears its ugly head in all of us, given the right (wrong?) circumstances. In the case of Bouillier, this circumstance is his invitation to a birthday party of a woman he does not know, as the “mystery guest” of a former lover who had left him without explanation five years before. Perfectly depicting the protagonist’s — his own — frayed nerves amid the taut ambiance that builds throughout the party itself, Bouillier courageously unravels the mysteries of his mind, laying bare his insecurities and thus affording grateful readers an eerily familiar reminder of the sheer insanity of romance.

Honorable mention: Unfortunately, none.

Best Fiction Book: The Thieves of Manhattan, by Adam Langer

Perhaps it’s the gleeful manner with which Adam Langer mocks every aspect of the publishing industry. Or perhaps it’s simply the fact that, in getting such literary bunk published, Langer’s distaste for editors’ discernment was vindicated by his novel’s very existence. But whatever the reasons, The Thieves of Manhattan is at once a laugh machine and a sober inspection of the challenges facing modern writers in a shifting publishing landscape. Employing a niche jargon so drenched in industry particulars that he includes a glossary at the end, Langer hilariously documents the commercialization of literature, a transformation that has placed the works of ex-cons and Pulitzer Prize winners on the same bookshelf at the local Barnes & Noble. Clearly, Langer is a man more amused than outraged at the rapidly disappearing distinction between novels and non-fiction, and he references numerous hoaxes, forgeries, and plagiarisms within his own novel. It may be that Langer, exhausted by high-minded denunciations of authorial appropriation, decided that the best rebuttal was to mirthfully engage in the practice himself. For this, The Thieves of Manhattan won’t snag him a Pulitzer Prize, but it will provide his readers with a basic, and far more useful, reward: a most enjoyably clever story.

Honorable mention: The Lotus Eaters, by Tatjana Soli; and All Is Forgotten, Nothing Is Lost, by Lan Samantha Chang

Worst Non-Fiction Book: The Disenchantment of Secular Discourse, by Steven D. Smith

This may come as a bit of a surprise, since I was not unkind to Steven D. Smith in my review of his book. But my own brand of disenchantment owes not to a lack of substance but to a lack of style: The Disenchantment of Secular Discourse is, quite bluntly, not that interesting. Smith’s particular axe to grind revolves around a practice he calls “smuggling”: the influence of moral judgments on public dialogue despite their conspicuous absence as explicitly delineated premises. In the author’s view, this results in a disingenuous conversation: the participants cannot help but unconsciously draw on their individual belief systems but are prevented, through a collective desire for credibility among peers, from admitting these principles’ central role. The concept of “smuggling” is an intriguing one, but The Disenchantment of Secular Discourse (as suggested by the title itself) is, to put it lightly, an extremely dry analysis of its effects. Really, though: thumbs up for the idea.

Dishonorable mention: Once again, I didn’t read any particularly horrible non-fiction books in the second half. It was, overall, a steadily decent non-fiction batch (without many outliers) this time around.

Worst Fiction Book: One Day, by David Nicholls

David Nicholls likely deserves better from me. It’s not exactly fair for a beach read to be judged as a Serious Book. Then again, One Day was once reviewed in The New York Times. As Spider-Man’s uncle once explained, not unkindly, “With great power comes great responsibility.” Mr. Nicholls, I do hope your film adaptation of the book does well at the box office, since (as I mentioned in my earlier review) that was clearly the objective you had in mind the entire time. There is nothing wrong with this, except for the fact that books written as screenplays tend to exhibit, well, diminished literary value. And I don’t think I’m being cruel here. The driving concept of the book — a peek, on the same calendar day of each successive year, at a pair of mutually-obsessed protagonists — is better suited for straight-to-TV fare than for serious dissection. But read it I did, and skewer it I must. One Day is probably not so bad when the only alternatives are celebrity gossip mags and racy tabloids. Nicholls is not People magazine, but neither is he Ian McEwan or Margaret Atwood. John Grisham, then?

Dishonorable mention: Tinkers, by Paul Harding; and If You Follow Me, by Malena Watrous

I’m still not quite finished with this blog. There’s definitely one more post coming, at the very least. Keep checking back!

#50: Animal Spirits

How did John Maynard Keynes know I’m not rational? Or at least, not always rational. According to authors George A. Akerlof and Robert J. Shiller, this is one key precept that vanished somewhere along the line from its initial expression by Keynes to the onset of the Great Recession seventy years later. The duo’s book, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism, is a concise attempt at its revival.

It is by now nearly a foregone conclusion, at least among mainstream economists, that humans act rationally when making economic decisions. In the aggregate, then, the macro-economy will reflect the thousands and millions of minor judgment calls that, taken together, constitute the long-sought-after equilibrium. The problem with this theory (even if this never seemed to bother its creator, Milton Friedman) is in its idealism. Are human beings rational? To an extent, yes. At other times, “people really are human, that is, possessed of all-too-human animal spirits,” the authors write.

What are these animal spirits, and what do they do? The definition given here is “the thought patterns that animate people’s ideas and feelings.” This sounds suitably vague, which is precisely the point. In the rush to transform economics into a science, overweening economists threw the baby out with the bathwater, discarding the very real enigma of human behavior along with the failed economic theories of prior eras. Akerlof, the 2001 Nobel Prize-winner in economics, and Shiller want nothing more than to reintroduce these animal spirits to the field of economics and the public at large.

But first, a re-branding. What Keynes called “animal spirits” is now studied as “behavioral economics.” The authors propose five psychological aspects of this discipline: confidence, fairness, corruption and bad faith, money illusion, and stories. Each of these plays a unique role within the macro-economy, but not always intuitively. Money illusion, for example, describes what takes place when wage cuts are instituted following a deflationary trend. Even when the decrease in pay is commensurate with the drop in prices, employees usually feel cheated. A perfectly rational decision by an employer thus becomes an object lesson in the existence of money illusion (and influences the employees’ perception of relative fairness as well).

This flies in the face of classical economics, in which humans are presumed to be supremely rational. (That such theories persist alongside the ongoing public fascination with the likes of Paris Hilton or, say, the British royal family is its own nifty testament to the inscrutability of the human mind.) So Akerlof and Shiller dutifully document the effects of each of their five factors before launching into eight key questions whose answers only make sense in light of the findings of behavioral economics.

This is an enlightening book, and one made all the more pleasant for its conspicuous lack of angry demagoguery. On a spectrum of bitterness from Joseph Stiglitz to Paul Krugman, the authors of Animal Spirits are clearly more aligned with the former. This is an unexpected reprieve, which understandably lends additional gravitas to their cause. Their case can be summarized thus: don’t buy too literally into the cult of the “invisible hand.” Markets do fail, which is precisely why government regulation (and occasional intervention) is necessary. Of course, with the benefit of hindsight since Animal Spirits was published, it appears their advice — like that of Stiglitz, Krugman, et al — has gone largely unheeded. What comes next is anyone’s guess.

Rescuing the Facebook generation

For the November 25th issue of The New York Review of Books, author Zadie Smith contributed an essay titled “Generation Why?” Ostensibly, the column was a review of Aaron Sorkin’s much-ballyhooed film, The Social Network, but Smith clearly had bigger fish to fry than nerdy billionaires (especially since Sorkin and director David Fincher had already undertaken this task so elegantly themselves).

No, the issue at stake was not Facebook but the “generation” for which it was created and for whom, perhaps, its existence circumscribes theirs. Smith, in attempting to extricate Facebook from its inevitable foundation myths, nevertheless concludes that she will someday “misremember my closeness to Zuckerberg [she, too, was on Harvard’s campus for Facebook’s birth in 2004], in the same spirit that everyone in ’60s Liverpool met John Lennon.” And yet an acute sense of separation haunts her, as much for its seeming incongruity (Smith is only nine years Mark Zuckerberg’s senior) as for its depth.

“You want to be optimistic about your own generation,” Smith muses, with a touch of nostalgia. “You want to keep pace with them and not to fear what you don’t understand.” She would be wise to heed her own advice. For what she contends in “Generation Why?” – that for the unwashed masses who fancy Facebook, Twitter, et al among life’s requisites, their online reincarnations have themselves become unhinged from, or even superseded, reality – is as emblematic of the anachronisms of the old-guard cohort (whom she affectionately dubs “1.0 people”) as it is a functional indictment of their successors.

The New Yorker’s Susan Orlean stumbles into the same trap, albeit somewhat more amiably. On her blog, “Free Range,” she posits a new hierarchy of friendship: the Social Index, a ranking of relationships by the relative frequencies of online vs. offline contact. “Human relationships used to be easy,” she explains. But “now, thanks to social media, it’s all gone sideways.” Orlean then proceeds to delineate these subtle distinctions: between “the friend you know well” and “the friend you sort of know” and “the friend, or friend-like entity, whom you met initially via Facebook or Twitter or Goodreads or, heaven help us, MySpace,” and so on. Wisely, she keeps the column short and employs a jocular tone, one whose comic value is reaffirmed by her promotion of the Social Index on – where else? – Twitter, using the hashtag #socialindex.

But one can detect a beguiling undercurrent of cynicism beneath Orlean’s evident joviality. What Zadie Smith and Susan Orlean share – in addition to their niche of the “celebrity lifestyle” whose attainment, Smith assures us, is the raison d’être of the Facebook generation – is the creeping suspicion, despite reaching a career zenith, of their continuing exclusion from the proverbial “Porcellian Club” of Zuckerberg’s collegiate fantasies. This, then, is a fate to which both they and those they pity are likewise consigned. The irony, of course, is their refusal, or inability, to identify these “People 2.0” as their kindred spirits.

Smith opts instead for the appeal to authority. In this case, that role falls to Jaron Lanier, a “master programmer and virtual reality pioneer.” (Smith, who is 35, quickly reminds us that Lanier, 50, is “not of my generation,” an assertion whose brashness once more belies her commonalities with that perpetually group-conscious underclass of Facebookers.) Quoting extensively from Lanier’s book, You Are Not a Gadget, Smith appropriates the tech-philosopher’s arm’s-length aspect toward technology as her own, spraying the reader with snippets of his wisdom. (In the book’s preface, Lanier cautioned against this Girl Talk-esque brand of mishmash, lamenting that his words would be “scanned, rehashed, and misrepresented by crowds of quick and sloppy readers into wikis and automatically aggregated wireless text message streams.”)

But Smith and Lanier have separately, and preemptively, doomed themselves to contemporary irrelevance by adhering to a retrograde narrative of the modern condition. Together, their worst nightmare is the narrowing of human existence into unintentionally confined spaces. This process takes place via “lock-in,” a series of inadvertently interacting steps which, taken together, preclude the possibility of reversal or alteration. Such was the case, Lanier argues (and Smith dutifully recounts), in the invention of the MIDI standard, a once cutting-edge format for representing and playing digital music, whose binary limitations forced the beautiful infinity of analog melodies into a prepackaged sepulcher of bits and bytes. Once the standard had been formalized, the jig was up: there was no turning back. Music had forever changed, and not necessarily for the better. Lanier unwittingly reformulates – on behalf of the self-described “software idiot” Zadie Smith – these same fears in regard to social media.

These visions of doom are misplaced. One can feel almost viscerally the bored sighs emanating from countless millennials’ diaphragms as Zadie Smith ages before their very eyes: “When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.” Such generous hyperbolizing obscures whatever consideration Smith’s fretting may warrant on the margins. If rescuing this Lost Generation is her utmost objective, then her plea for sanity, easily mistaken for groveling, will scatter Zuckerberg’s millions of disciples like so many cards in a two-bit parlor trick.

Notably, Zadie Smith gently ridicules the Facebook era’s emphasis on connectivity, remarking snidely that Zuckerberg “used the word ‘connect’ as believers use the word ‘Jesus,’ as if it were sacred in and of itself.” The quality of those interactions, she worries, is not worth the minimal effort exerted to vivify them. And yet she comes agonizingly close, on multiple occasions, to grasping the essence of this generation that remains simultaneously adjacent to, but seemingly unreachable from, her own. “Watching this movie, even though you know Sorkin wants your disapproval, you can’t help feel a little swell of pride in this 2.0 generation,” Smith concedes. “They’ve spent a decade being berated for not making the right sorts of paintings or novels or music or politics. Turns out the brightest 2.0 kids have been doing something else extraordinary. They’ve been making a world.”

Sound familiar? It should. The specter of John Lennon, the one “that everyone in ’60s Liverpool met,” haunts every word of “Generation Why?”  Even Zadie Smith, for whom Lennon (unlike Lanier) is clearly not a peer, cannot ignore the contemporary relevance of the former’s transformative impact on society. Culture may move more rapidly in the digital era than it did in the 1960s, but its disruptive rhythm has survived largely intact. Rebellion, experimentation, innovation: these are all hallmarks of the creative subculture, as each subsequent breakthrough quickly buries its predecessors. Mark Zuckerberg, then, is the spiritual descendant of John Lennon’s “Imagine.” We are, indeed, all connected (much to Smith’s everlasting surprise).

This is the epiphanic truth that the Facebook generation has uncovered, even if in so doing they remain blissfully unaware of the historical import of their actions. To be sure, their self-absorbed ignorance of a chronology of innovation is itself a product of the ever-shifting nature of modern culture. A generation once encompassed two or three decades; now, an absence of even five years from civilization would reduce the most precocious techie to the countenance of a Luddite. But, somewhat paradoxically (considering her alarm at Facebook’s social impact), Smith digests technology’s ephemeral nature with ease, as she states at the end of her essay: “I can’t imagine life without files but I can just about imagine a time when Facebook will seem as comically obsolete as LiveJournal.”

If this is the case, then what, precisely, is the cause for concern? Conceivably, Zadie Smith, who teaches literature, senses an intellectual fence over which the social media-savvy yet literarily deficient minds of her young charges are unable to vault. Perhaps, for a ponderous writer such as Susan Orlean, who once penned a 282-page paean to orchids, it is a fear of losing her audience to ever-decreasing attention spans. For Jaron Lanier, it may be the horror at a remix culture in which the devolution of works of art into haphazardly scissored segments (à la David Shields’ Reality Hunger) threatens the very nature of public expression. Perhaps Zadie Smith and Susan Orlean and Jaron Lanier and so many others of their age and temperament, finding themselves unable to “keep pace with [the younger generation],” succumb to the all-too-human instinct to “fear what [they] don’t understand.” In short, they face the same challenge that confronted the parents and teachers and writers of the ’60s generation, fifty years later. They, like Mark Zuckerberg and the hordes of Facebook users who followed him in the quest for digital immortality, face the fear of oblivion.

#49: The Last Utopia

In just a few short weeks, the world will celebrate the sixty-second anniversary of the Universal Declaration of Human Rights. Adopted by the United Nations on December 10th, 1948, the document ushered in an unprecedented era of international rights norms that has since culminated in the prominence of human rights organizations such as Amnesty International and Human Rights Watch.

What Samuel Moyn argues in his book, The Last Utopia: Human Rights in History, is that the thematic line running from the UDHR’s adoption in 1948 through today is misrepresented in the nascent field of human rights studies. Although cemented now as the defining moment that gave human rights its beginning, the Universal Declaration’s appearance was, Moyn insists, “less the annunciation of a new age than a funereal wreath laid on the grave of wartime hopes.”

This is a decidedly irreverent perspective on a movement whose brief and explosive history has (especially in recent years) been lionized as proof of civilization’s continuing evolution. But Moyn is certain that these celebrants of human rights’ march to glory have it all wrong. In fact, he argues, the UDHR was, if anything, more detrimental than it was helpful in facilitating the cause of human rights as it is known today. The UDHR’s adoption “had come at the price of legal enforceability”: by its inability to transcend ancient notions of state sovereignty, the declaration in effect bequeathed to nation-states the power of adjudication over their own adherence to human rights standards. Moyn’s contention revolves around the fact that world leaders in the 1940s were understandably reluctant to cede any jurisdiction to the whims of a supranational institution, notwithstanding (or perhaps directly due to) its supposed impartiality.

I found the author’s thesis compelling at first, as he explicitly delineated the prevailing global consensus of political leaders in the post-World War II era: a strong desire for peace was complemented by a profound wariness of others’ intentions. In such an environment, the idea of subordinating a national legal framework to an international structure — especially one in which the state itself could be held blameworthy — was not an attractive proposition to any elites. And thus was born the Universal Declaration of Human Rights, a document whose noble goals disguised an impotent enforcement mechanism.

But Samuel Moyn’s continued pounding on the heads of his readers quickly grows old. I cannot count the number of times (or the plethora of ways) he tries to convince his readers that today’s edition of human rights bears little resemblance to, or is only a distant relative of, that of the 1940s. “As of 1945,” Moyn writes in one instance, “human rights were already on the way out for the few international lawyers who had made them central in wartime.” Elsewhere: “Instead of turning to history to monumentalize human rights by rooting them deep in the past, it is much better to acknowledge how recent and contingent they really are.” And, “what mattered most of all about the human rights moment of the 1940s, in truth, is not that it happened, but that — like the even deeper past — it had to be reinvented, not merely retrieved, after the fact.”

Virtually nothing is as consistently unsurprising as professorial loquacity. But even among academics, Moyn tests the limits of repetition. His mantra seems to have been: if something is worth writing, it’s worth writing one hundred times. In this regard, then, he has succeeded. Unfortunately, much like human rights themselves for a time, Moyn proves far more adept at defining their history negatively than positively. It is obvious that he considers the UDHR only nominally relevant in jump-starting the human rights movement; what is less transparent is his perspective on its true origins.

Human rights constitute the eponymous last utopia of his book’s title, but Samuel Moyn does little with this concept other than to restate it over and over (just as he does with his repudiations of the movement’s alleged foundation myth). “When the history of human rights acknowledges how recently they came to the world,” Moyn writes, “it focuses not simply on the crisis of the nation-state, but on the collapse of alternative internationalisms — global visions that were powerful for so long in spite of not featuring individual rights.” It was, in a sense, the worldwide disillusionment with grandiose visions of the past that gradually led to the introduction of human rights as a viable alternative. It offered a (facially) moral ideal where before had existed only political ones.

In short, “human rights were born as the last utopia — but one day another may appear.” Other than brief mentions (and like so much else in The Last Utopia), Samuel Moyn leaves this final speculation largely unaddressed. As to the idea that modern human rights came about due to the Universal Declaration of Human Rights, however: well, that horse has already been beaten quite to death.

#48: Salvation City

Earlier this year I expressed the need to stop reading manifestos. This time it’s dystopias that have drawn my ire: I think I’ll take a break from these too. Salvation City, a novel by Sigrid Nunez, is no duller than some of the other post-apocalyptic books I’ve read in the past few years. It’s also not particularly memorable.

Cole Vining is a thirteen-year-old orphan whose atheist parents died in a flu epidemic. The atheist bit matters, in this case, since so much of the narrative is focused on the conflicting identities of the young protagonist, as the storytelling jumps back and forth in time to pull all the strings together. Following his parents’ death, and after spending time in an orphanage known as Here Be Hope, young Cole was then delivered to the rural Indiana home of Pastor Wyatt and his wife Tracy, in a place called Salvation City.

The kindly clergyman — who, Cole notes ambivalently, “always looks right into the face of the person he is talking to” — and his spouse are devout, fundamentalist Christians, and their peculiar lifestyle is frequently juxtaposed against Cole’s earlier years under the emotionally fraught relationship of his irreligious parents. In Salvation City (and I refer here both to the book and to the town), the question arises as to what exactly constitutes a rescue from tragedy, and even what constitutes tragedy itself.

For Cole’s mother, Serena, even those neighbors who had opened their doors for assistance, as the flu swept through cities and towns, were deserving of the utmost suspicion: “But they were Jesus freaks, his mother said, and she didn’t want to get involved with them. ‘I mean, these people are actually happy about this catastrophe. They think any day now they’re going to be sucked up to heaven.'” Her twin sister, Addy, in an attempt to reclaim Cole from his new home following Serena’s death, expresses much the same sentiments: “‘These fanatics will use religion to justify anything — especially the ones who believe in the imminent rapture. You do understand, don’t you? That’s what these monsters were counting on? The Messiah was supposed to show up before I did.'”

Cole sees things somewhat differently. As he contemplates looming adulthood (from the wide-eyed vantage point experienced uniquely by young teens) and his adoptive father claims divine guidance in trying to persuade him to stay, Cole wonders: why “didn’t Jesus send a message to him and Addy, too? Wouldn’t that have helped them all?”

Sigrid Nunez leaves many questions such as this one open-ended, a seeming mockery of faith that becomes less flippant upon closer observation. Salvation City dwells on choices and asks, implicitly, the important question of what makes a home. But, as often befalls works of fiction whose circumstances require a great leap of imagination, the elusive answers never seem as important as the author intended them to be, and an apathetic reader is the disappointing consequence.

#47: The Mendacity of Hope

Roger D. Hodge is angry. The Mendacity of Hope: Barack Obama and the Betrayal of American Liberalism, a colorful expression of the author’s outrage at failed objectives and broken promises, begins with a lament that bespeaks profound disappointment in our current president. “Barack Obama came to us with such great promise,” Hodge writes. “He pledged to end the war in Iraq, end torture, close Guantánamo, restore the Constitution, heal our wounds, wash our feet. None of these things has come to pass.”

The Mendacity of Hope has been largely skewered by critics. In a Washington Post review, Alan Wolfe deemed Hodge’s polemic “a sloppily organized, badly argued and deeply reactionary book unlikely to have any influence at all on the way Americans think about their president.” In The New York Times, Jonathan Alter took issue with Hodge’s uncompromising position vis-à-vis the liberal purity of Obama’s policies: “Really?” Alter challenges. “Since when did the tenets of liberalism demand that politics no longer be viewed as the art of the possible?”

What we have seen to date, in the nearly two years since Obama’s inauguration, is a veritable influx of books, articles, essays, and magazine profiles critiquing his policies from the right. But while MSNBC, The Daily Show, and a smattering of other outlets have tweaked the president from the left, a substantive book-length rendering, by a liberal, of the inadequacies of the Obama administration’s policies has been largely nonexistent. This is owing at least as much to institutional inertia (Obama is already the president, and dissent is usually most effective when originating in the opposition) as it is to the fear that airing liberals’ disillusion could actually exacerbate the problem by causing miffed lefties to sit out the midterm elections.

Thus, after devoting much of his airtime, over the past year and a half, to unfavorable comparisons of the Barack Obama of today to the one who campaigned on such “high rhetoric” two years ago, The Daily Show’s Jon Stewart was downright hospitable when the president appeared on his show on October 27, a mere six days before Election Day. Whether the abrupt change in the host’s demeanor was due to timidity or shrewd political strategy is unclear, but the effect was consistent with a general trend: outside of some niche circles, President Obama has not been held accountable — in a protracted, thorough manner — by his liberal base.

But there is, I think, another reason that the left has kept largely silent. And that is the admission that, notwithstanding the collectively disaffected state of American liberals, Obama has indeed pushed through some truly formidable legislation. Health care reform, however trimmed-down and neutered its final edition, is still reform, as are financial regulation and other measures. Yes, Obama’s embrace of gay rights has been tepid at best, and his African-American constituency is less than pleased with his reluctance to embrace its plight. There are other grievances as well. But the progressive successes, largely lost amidst a torrent of obstructionism and party-line politics, remain, even as their legacy is overshadowed by perpetual congressional impasse and decreasing approval ratings.

It is this understanding — captured by the axiom “do not allow the perfect to be the enemy of the good” — that has eluded Roger D. Hodge. In railing against “the mundane corruption of our capitalist democracy,” Hodge hammers away at “the obscene intimacy of big corporations and big government.” But his disillusionment is encased within a quixotic fantasy of liberal American governance. To Hodge, the conservative position is, for all intents and purposes, a politically impotent entity in the face of progressive ideology that is properly divorced from moneyed interests.

This is a somewhat absurd conclusion, given the populist (or demagogic, depending on perspective) stirrings that gave birth to the Tea Party and are expected to sweep the Republicans back into power in the House on Tuesday. Fortunately, Hodge’s animus is far more persuasive in his wholesale denunciation of corporate interests’ influence on American politics. Although at times a bit wonky, Hodge nevertheless portrays, with astounding clarity, fund-raising contributions whose origins and scale were strikingly at odds with the Obama brand’s stated philosophy. “The results were impressive,” the author writes. “Against a token candidate who raised a mere $2.8 million, Obama in his Senate race raised $14.9 million — in his first attempt at national office, in a relatively short time, with significant contributions from out-of-state donors such as Goldman Sachs, JPMorgan Chase, and George Soros. Indeed, 32 percent of his contributions came from out of state.”

Contrast this with a 2006 speech Obama made, in which he expressed empathy with Americans for their disgust with “a political process where the vote you cast isn’t as important as the favors you can do” and proclaimed that Americans were “tired of trusting us with their tax dollars when they see them spent on frivolous pet projects and corporate giveaways.” Indeed, Hodge would argue that the president stole from the playbook of former New York governor Mario Cuomo, who famously noted that political candidates “campaign in poetry but have to govern in prose.”

Interestingly, it is Roger D. Hodge’s prose that remains the highlight of The Mendacity of Hope. At times his phraseology perfectly straddles the line between comedy and outrage, as when he deems the doctrine of the “unitary executive” to be “a partial-birth abortion of the Constitution.” Later, decrying the lack of retributive justice for Ronald Reagan’s perceived crimes in relation to the Nicaraguan Sandinista government, Hodge sulkily concludes, “Impeachment would have to await Oval Office fellatio.” Yet however sincere his repulsion for Obama’s gradual backslide from his campaign’s lofty poetry, Roger D. Hodge is doomed to eternal disappointment if his vision for American leadership, as espoused in his book, remains so far removed from the reality of the possible.

#46: Blink

What does Malcolm Gladwell have in common with Glenn Beck, Adam Lambert, Ronald Reagan, Paul Krugman, John Grisham, Nicolas Sarkozy, and Jesus Christ? An uncanny ability to polarize, that’s what. (As for his tendency to invent categories of strange bedfellows, well, he’ll just have to share that dubious distinction with yours truly.) Gladwell and his book, Blink, have evoked praise from writers at The New York Times, The Boston Globe, The Wall Street Journal, Time, and the Associated Press. He has also attracted criticism, sometimes from unlikely corners. Highly regarded Seventh Circuit Court judge Richard Posner dismissed Blink as “a series of loosely connected anecdotes, rich in ‘human interest’ particulars but poor in analysis.” More bitingly, he notes that “one of Gladwell’s themes is that clear thinking can be overwhelmed by irrelevant information, but he revels in the irrelevant.”

Harsh words are these, but one must consider the source. Who appointed Posner the judge of right and wrong? (OK, so Ronald Reagan.) And when’s the last time a casual reader willfully plunged into the dark recesses of a judicial opinion? For all of Posner’s eminent reasonableness, his jurisprudence has the popular appeal of an electrocardiograph. Interestingly enough (or not), just such a readout is one of the subjects of Malcolm Gladwell’s Blink. “The ECG is far from perfect,” Gladwell informs us, and so are his analogies. But at least in the latter’s case, a quick skimming is still a decently pleasant endeavor and one whose proximate cause is curiosity, not heartburn. Mr. Posner, know thy audience.

This isn’t to say mild discomfort won’t accompany the book-reading. Blink deals in just the sort of Ripley’s Believe It or Not-esque anecdotes that send us scurrying over to Wikipedia for furious fact-checking even as we wallow in vague notions of gullibility. Like the counterfeit kouros sculpture to which Gladwell’s gaze continually returns, Blink “had a problem. It didn’t look right.” Whether this instinctive skepticism regarding the book’s simplistic reasoning can be attributed to thin-slicing or careful analysis, I know not. I am armed only with an incredulity that the long-term success of a marriage can be diagnosed within fifteen minutes, or that commission-seeking car salesmen discriminate not intentionally but due to the unconscious “kind of biases that many of us carry around in the nether regions of our brains.” And while I can believe that information overload actually reduces our ability to formulate practical solutions, I’m not so certain the answer is to “put screens in the courtroom” to protect defendants — who would remain “in another room entirely, answering questions by e-mail or through the use of an intermediary” — from race-, sex-, and age-based discrimination.

This Gladwellian resort to logical deus ex machinas has rattled many a critical reviewer. It is one thing to remind readers that “a black man [in Illinois] is 57 times more likely to be sent to prison on drug charges than a white man.” It is quite another to mount a defense of this same criminal justice system in the very next paragraph, in which Gladwell elaborates, “I don’t think the car salesmen in the study meant to discriminate against black men…Put a black man inside the criminal justice system and the same thing happens. Justice is supposed to be blind. It isn’t.”

A more generous take on law enforcement may not exist. In fact, while we’re at it, we might as well remind aspiring historians that the Holocaust’s targeted killing of Jews was nothing more than a slight statistical anomaly, and that the Ku Klux Klan’s public disgrace was due entirely to a silly cultural misreading of the burning of crosses on minorities’ front lawns. One would think that, on the occasion of the black-over-white incarceration multiplier reaching double digits, there may be sufficient evidence to suspect systemic abuse. But then, Malcolm Gladwell is nothing if not unsuspecting. In Blink, he argues that what we process in the first two seconds of any given event is often more valuable than the subsequent (and more detailed) analysis. His editors and proofreaders, God bless ’em, appear to have taken his advice quite literally.

#45: Before You Suffocate Your Own Fool Self

Danielle Evans is the kind of author who gives one pause. And this is before one even reads a word she’s written. By the age of twenty-three, Evans had already seen her work reach the glorious light of publication in The Paris Review. Now, three years and a critically acclaimed short-story collection later, Evans teaches literature at American University in Washington, D.C. And, presumably, ends world hunger.

The above-mentioned short-story collection is Before You Suffocate Your Own Fool Self, a title borrowed from “The Bridge Poem,” by Donna Kate Rushin. Shortly after the phrase that gives Evans’ book its title, Rushin’s poem ends with a declaration: “I must be the bridge to nowhere / But my true self / And then / I will be useful.”

Having just finished reading Before You Suffocate Your Own Fool Self, I think “The Bridge Poem” is key to understanding the undercurrent of displacement among African-Americans that permeates Evans’ stories. In “Virgins,” the collection’s first story and the one that landed its young author in the vaunted pages of The Paris Review, a teenage girl vacillates between instinct and adolescent curiosity as she timorously embraces her budding sexuality. It should be noted that, refreshingly, this and the other short stories are remarkably unpretentious, no small feat in this genre. The main character in “Virgins” displays the fledgling snark that marks a phase suffered through by all urban youth; readers’ near-universal familiarity with that phase makes it hard not to grin when she consoles her friend, “The only difference between that girl and the subway…is that everybody in the world hasn’t ridden the subway.”

Underneath such faux-witticisms lies a deep-seated unease with social pressures that make concurrent, and contrary, demands. For Erica, the first-person narrator of “Virgins,” this conflict pits the assertiveness befitting her ascent into adulthood against familially bred perceptions of danger. Crystal struggles to reconcile her fraying ties to her high school best friend with a desire to escape the quiet desperation of a ghetto, in the ironically-titled “Robert E. Lee Is Dead.” And in the poignant voice of a military veteran in “Someone Ought to Tell Her There’s Nowhere to Go,” a small lie takes on new shape when the soldier’s daughter becomes a pawn in his grasping plea for recognition and acceptance.

These, and all the stories, are framed delicately on the fringes of white America, as the characters are forced by circumstance into engagement with the Other and yet remain substantively disenfranchised from the majority’s perceived benefits. At one point, betraying a worldly cynicism that belies her youth, a high school student reminds her pal that “white kids do senior pranks. When we try it, they’re called felonies.”

This comment, joined by Evans’ other, far subtler nods to the plight of African-Americans, painfully casts even the banal aspects of Stateside dhimmitude into sharp relief. When, in “Harvest,” an inadvertent pregnancy spawns a tragic debt that cuts across racial lines, the burden of social exclusion is harshly exposed; elsewhere, implication is preferred. Regardless of methodology, however, the subtext of alienation — from country as from family — is a troubling constant. And I expect that its vivid rendering by Danielle Evans will take the author one step closer to something resembling inclusion.