The Lavender Scare and Beyond: Documenting LGBTQ History from the Great Depression to Today

 

 

The Lavender Scare, a new documentary that will air on PBS on Tuesday, June 18th, documents the systematic firing of and discrimination against LGBT people under the Eisenhower administration. The film is based on David Johnson's book The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government. The film is directed and produced by Josh Howard, a producer and broadcast executive with more than 25 years of experience in news and documentary production. He has been honored with 24 Emmy Awards, mostly for his work on the CBS News broadcast 60 Minutes. Josh began his career at 60 Minutes reporting stories with correspondent Mike Wallace. He was later named senior producer and then executive editor of the broadcast. Following that, he served as executive producer of the weeknight edition of 60 Minutes.

 

Recently, Eric Gonzaba interviewed Director Josh Howard via phone. This interview was transcribed by Andrew Fletcher and has been lightly edited for clarity. 

 

 

Gonzaba: I just wanted to let you know, I actually got to watch the documentary last night. I knew a little bit about it before watching it. I know the content quite well, but I knew of your work beforehand and I just want to say it was really fabulous to watch it finally and I find it extremely credible. It’s well covered and has lots of fantastic aspects to it, so I’m excited to talk to you today. 

 

Howard: Thank you, thanks so much.

 

G: Now I’m curious to start off thinking about your own part of this documentary. What drew you to the subject matter; what prompted you to think about this period in general?

 

H: Well, to tell you the truth, I came across David Johnson’s book, The Lavender Scare, and I was just surprised that I didn’t know this story. I’m a little bit of an American history buff and I thought I knew LGBTQ history, being old enough to have lived through a lot of it, and it was just shocking to learn how systematically the government discriminated against gay people. I worked in TV news my entire career and I was happily retired from that career, but after reading this I thought this isn’t just history really, this is a news story; this is something that people don’t know about. It seemed natural to try to capture the stories of these people on film. That’s what drew me to it.

 

G: Now it’s funny, when I think about this story, especially thinking about the Red Scare in the fifties and the Lavender Scare in the same time period, I think grade school education in Social Studies and History has pushed this understanding of the Red Scare in different ways – even in other facets like The Crucible in literature classes and whatnot. It seems that public education treats the Red Scare as an important part of history, but I’m curious why you think the story of the Lavender Scare hasn’t been told before and is not understood by the public.

 

H: Well, a couple of things. I think partly, gay history has traditionally been marginalized. It’s really only in the past three decades that we’ve come to recognize the need to understand the histories of different minority groups. We’re just more recently coming to the understanding of the need for gay history to be acknowledged as well. But I think the big reason that people didn’t know about this, and even people within the community really didn’t know about it, is that when this was going on in the fifties and sixties, and into the seventies, eighties, and nineties – you know, having seen the film, it wasn’t until the 1990s that this policy was reversed – but particularly during those early years, it was in everybody’s interest not to talk about it. The gay men and lesbians who were being fired didn’t want to talk about, even to their close friends and family, why they had been fired, because they did feel a need to remain in the closet at that time. The government, after some initial publicity about how it would track down these people and get them out of the government, stopped talking about how many people were being fired as the years went on and the firings continued, because then the question started to become ‘well why did you hire them in the first place? Why are you only finding out that there are gay people working for the government now, after they’ve been there for all these years? Why didn’t you have better security systems?’ It was really in everybody’s interest not to talk about it. The remarkable thing is that even someone like Frank Kameny, who was right in the center of this battle for all these years, didn’t know how widespread this was and how many people were either denied employment or fired. It wasn’t until the 1990s, when a lot of documents from this time period were being declassified, that David Johnson was able to do the research that really put together the enormity of what happened. It’s really a combination of the lack of gay history being taught but also the lack of knowledge in general of this time period. 

 

G: Something you said earlier that struck me a little bit was you said that thinking about this project, when you were reading Johnson’s book, you were thinking about how it’s not just history, it’s also kind of a news story with your news background. What do you mean by that – what is the difference between history and a news story?

 

H: Well, what specifically I was referring to was that it’s a news story because people don’t know about it. I worked at “60 Minutes” for many years and the goal was to come upon some stories that would surprise people and put some issue into a broader context, so on a very basic level it’s a news story because it was news to me. On that level it was news. But beyond that I think there is a real relevance to the message today that frankly I wasn’t expecting there to be when I started working on this, believe it or not almost ten years ago. I think we’re living in a precarious time right now and it is very similar to what was going on in the 1950s. The homophobia of the fifties, as the film explains, was a pretty direct backlash against an earlier period during which there was much less discrimination against LGBTQ people. We’ve obviously made enormous strides in the past decade and more, but I think one message of the film and one of the things that makes it relevant is to remember that progress in issues of social equality doesn’t necessarily continue in a straight line, and there can be a step back for every couple of steps forward. I think we have to be aware of that. On a broader perspective, the film explores a time when a specific minority group was demonized in the name of national security and patriotism and so forth. You could argue that we’re seeing a repeat of that today with different minority groups. I think there’s a message here that history has not looked kindly on those who have embarked on those kinds of policies, whether it be against Japanese-Americans during World War II or LGBTQ people. There’s a whole list, sadly. 

 

G: You know, funny that you mention that – I think one of the most interesting things for me and one of the things that I really enjoyed about the documentary was that you don’t just focus on the 1950s. When we think about the title of your film, we think you’re just going to be talking about the Lavender Scare the entire time, but we also hear about this incredible time in D.C. in the 1930s that you show; we hear about World War II and the birth of gay liberation at Stonewall and beyond that. I guess I’m curious about going beyond the moment of the 1950s – what does that do to your story? Obviously, it provides some context, but what were you trying to show by giving a broader narrative rather than just focusing on the fifties?

 

H: I think it was able to show how the discrimination of the fifties, just as it was a result of a more permissive earlier time, also set the stage for the reaction of the sixties, in which people did decide to stand up for their rights and say ‘this is wrong.’ I think it’s really important – Stonewall obviously is a huge milestone in our history, but I think it’s really important to acknowledge that there were incredibly brave people in the 1940s and particularly the fifties and early sixties who were sowing the seeds of the Stonewall Rebellion, to pay tribute to their contribution, and also to really remember and to respect the activism and commitment that they made – to keep in mind that as much progress as we’ve made, we have to keep at it. 

 

G: Well what’s interesting too is that going into this film, I always assumed that because we’re approaching the fiftieth anniversary of Stonewall, people are still obsessed with that seminal moment in 1969. Frank Kameny’s efforts and activism are, like you said, something that we need to acknowledge, so what I loved about your film too was that you argue that Kameny is also reacting to a gay activism that came before him, that was fighting along lines of civil rights but was also fighting along lines of separatism, or difference I should say, not along lines of marriage or employment rights or anything like that. That’s something that for a gay historian was really interesting to think of, even pre-Stonewall activists not being united about how to go forth in politics or culture. 

 

H: Absolutely, it really is. One of the things that attracted me to the story is that there are three distinct acts in this story. There’s the Depression and World War II period, when gay people are finding each other and building communities; then there’s the fifties and the Lavender Scare, when those communities are really under attack; and then we see how the community picked itself up and began the fight that led to Stonewall, to marriage equality, and to where we are today. 

 

G: I’m curious, when did you begin this project? I was just thinking about how Frank Kameny is such a central figure to this film, as he should be, and I’m curious – did you get to interact with him at all? I know he passed away in 2011 I believe, but I’m curious if you had any contact with him and how his story helped you craft this larger story.

 

H: I read David’s book in 2009 and reached out to him. I had assumed that a documentary must have been made on the subject because it just seemed like such a natural. I tracked him down and my question really was where can I find the documentary, and obviously he told me that none had been done. David and I met for the first time to discuss the possibility of doing this film on July 4th, 2009. Not only is this the fiftieth anniversary of Stonewall, but it’s also the tenth anniversary of David and I [beginning the project]. It’s a big year for anniversaries. So we talked about it and I’d never done an independent film before. I had always worked for broadcast companies – CBS and later NBC – so I really didn’t appreciate the difficulty in raising funds and really doing something on my own. But in any event, reading the book and talking to David and learning about the story – I did realize that Frank was, in a way, the central character. And so really before seriously figuring out how to raise money or go about doing this, I hired a camera crew and spent three days with Frank in July 2010. [I] interviewed him over three days, so the interviews of Frank that you see in the film were done by me. I spent three days with him, and ‘fascinating’ and ‘an honor’ and all those words really undersell it. I knew, even before reading David’s book, of Frank and certainly knew the details of all his contributions and activism. We didn’t shoot in his house – you might have noticed the snapshot of Frank sitting next to his desk, which is piled high with folders and papers. Frank’s house didn’t lend itself to being a place that we could get a camera crew into. We shot at a different location, and for each of the three days I drove to Frank’s house and picked him up and drove him to the interview location, and I remember driving thinking ‘this is the Rosa Parks and Susan B. Anthony; this is the person that started our movement,’ and it was just incredibly moving to be able to interact with him. I will say, after three days I started to have some sympathy for the people in the federal government who had to interact with him, because Frank is single-minded and doesn’t take direction easily and is quite a character, which is why he became the incredible person he did. It was just an amazing experience to be with him and I’ll never forget it. 

 

G: I liked that in the film John D'Emilio called him stubborn and said that stubbornness had gotten him in trouble, but in some ways it also helped fuel a movement that needed someone like him, someone who believed in how right he was.

 

H: Absolutely. We estimate about 5,000 people had been fired before Frank, and all of those 5,000 obviously went quietly; it never would have occurred to Frank to go quietly. Yet it also worked against him in ways. This didn’t make it into the film, but Frank founded the Mattachine Society of Washington and was the driving force and so forth, and at some point, he was thrown out of the organization because he was so difficult to work with. He was voted out as president by his own organization, and many years later he was quoted as saying, ‘the only thing I did wrong with the Mattachine Society was making it a democratic institution.’ That captures Frank. It was that personality, though, as you say, that made him who he was; without it, who knows when someone would’ve come along to start the movement that he did. 

 

G: I’m curious, Frank’s such a fascinating person, and I think for the wider public, even among historians, Kameny’s name is nothing, in terms of general knowledge about LGBT history, compared to Harvey Milk, who is lauded in the movement. Do you see your film as kind of correcting this larger historical ignorance of Kameny and the early activist work in the 1950s and 60s? 

 

H: I do. I mean, I really do think he deserves more attention and more credit than he’s gotten. A friend of mine runs a little weekly trivia contest at a bar in San Diego, and every once in a while, he’ll throw in the question: who was Frank Kameny? Younger gay people, and older gay people as well I assume, don’t know. I asked him [Frank] about this when I interviewed him, and he was thrilled with the recognition he got at the White House, and he liked the idea that he was, as he put it, on a first-name basis with President Obama, but he didn’t seem overly concerned about his place in history. I think it really is that he did what was right for him to do, and if people know about it, great, and if they don’t, that’s ok too. I think he should be on a stamp, and he deserves recognition, because he did incredible things. I should also add, there were a handful of other people – Barbara Gittings, Jack Nichols – who were equally active and vocal, but no one like Frank, who really stuck to it his entire life and devoted every minute of the rest of his life to the struggle. 

 

G: Moving a little bit to your filmmaking, one of the interesting aspects of the documentary is that we’re not just hearing from people who were kicked out of their occupations, or from their families, or even from historians; we also get to hear from the very people who were involved in the kicking-out process. You actually interviewed people like John Hanes and Bartley Fugler. What was your interest in including their perspectives on the story and what was your reaction after hearing those perspectives?

 

H: Well, I’m so happy you asked that, because in a way, those stories are my favorite stories in the film. The people who were fired – obviously their stories are moving and tragic and infuriating – but you kind of know how the story is going to go. You understand what happened to them. What I found fascinating was talking to these guys all these years later, the ones who initiated these policies and carried them out, and were generally unremorseful. The most they would say would be ‘I wouldn’t do the same thing today,’ but every one of them said, for the times, it was the right thing to do. The credit for this, by the way, goes to my brilliant associate director Jill Landes, who I had worked with at 60 Minutes and then later at NBC and dragged into this project as well. She was the one who tracked down the government officials. David, in his book, really focused on the victims. Jill was able to find the investigators, and particularly John Hanes, who was the number three person in the State Department at the time and was directly responsible for this policy. I just thought it was so amazing: here was a guy who Frank Kameny wrote to in 1960-something, and Hanes responded to him and said don’t write to me anymore because I’m not changing the policy. When we interviewed him, he had no recollection of who Frank Kameny was, or why they were corresponding. You mentioned Fugler – Fugler was the one person, the one government official, who said he would still not hire gay people today. After the interview, my director of photography said to me as we were cleaning up, ‘you must have wanted to slug that guy,’ and I said, as a filmmaker, to tell you the truth, I wanted to give him a kiss, because that’s what you need – someone to be honest on camera and portray the story that needs to be told. I’m really grateful to him, to Fugler, that he was completely honest and shared his point of view. 

 

G: What does it mean for those people, for these two men, to agree to this interview knowing that they’re probably not going to be on the side that this documentary is going to be supportive of? By them sticking to their points of view, what does that tell you, or did that surprise you at all?

 

H: It surprised me that they agreed to the interview as readily as they did, but I think – I’ve given this a lot of thought because it’s a great question – I think they felt that they really didn’t have anything to hide and didn’t do anything wrong, and if anything they were happy to defend their positions. From our point of view, I might think they’re going to run from the cameras because they don’t want to be associated with this, but I think from their perspective this is what they did, they were right when they did it, and they were happy to talk about it. None of the interviews ended with any confrontation. With John Hanes, after we shot the interview – he has since passed away but at the time he lived in Bozeman, Montana – we went and had drinks at his house; it was all very friendly. You know, they said what they wanted to say and believed it, and that’s great. 

 

G: Moving away from that side: you say David Johnson focused so much on the people who were fired. I don’t want to give anything away to people who are going to read this, but there are some great details about some of the people that you followed in this documentary, like Madeleine Tress and Carl Rizzi, that you leave for the very end of the documentary, literally the last minute or two. They are some really major details and some of the biggest things that I keep thinking about, which is a testament to your fantastic documentary style. I’m just curious about why: was that a conscious placement of those details at the very end, or were you trying to elicit responses from the audience?

 

H: Well I guess, I mean, everything at some point is a conscious decision. We went through several different versions of scripts and structures, and at one point there were a couple more characters who didn’t wind up getting included. The first rough cut was something like three hours long. As a filmmaker, you know that people only have so much attention and time they can devote to something. I think the way it worked out, the epilogue was a way to encapsulate everybody’s stories and really review what their contributions were. 

 

G: Something that stuck with me is the very last thing I hear about Madeleine Tress is that she’s continually denied a passport to travel away from our country. Her experience of the Lavender Scare is not something of the distant past, it’s something she has to live with the rest of her life, which speaks to the terror that so many people have, right? We can laud Kameny for being great, but in some ways hearing stories like that you totally understand why people stayed private, or like you say in the beginning of the film, it’s one thing to get fired, but so many people just didn’t even want to face that; they resigned their posts. I’m curious what you’ve seen as the biggest reaction to the film since you started screening it and what’s the biggest surprise you’ve gotten from that reaction from viewers?  

 

H: Well I guess the biggest surprise I see from audiences is just the story itself, that people say how is it possible that I didn’t know this – particularly older people who lived through the McCarthy era, or at least the period shortly thereafter, and were familiar with it. That’s the biggest surprise from audiences. I think for me, a big surprise, and I guess this is naïve of me still – we did close to 100 film festival screenings – and I would say, at more than half of them, someone got up during the question and answer period and said ‘I worked for the government, and I was fired because I was gay.’ I guess I’m still surprised when I see how many people this affected. There was one event – we were at a screening in Ocean Grove, New Jersey in the basement of a church, so not a big event. There might have been 50 people there. After the film and after the Q&A these two elderly women came up to me – I later found out that they were both in their nineties – and they told me that they had met in the 1950s when they were both secretaries for the Social Security Administration. They became partners and have been together ever since, and back in the 1950s, when it was discovered that they were lesbians, they were both fired from the Social Security Administration. They said to me, ‘we never knew, for all these years, that this was part of something bigger. We thought it was just us. Thank you for telling our story.’ It’s that kind of thing that really sends the message home about how many people this affected and, as you just said, how it remains with you your entire life. 

 

G: It was a very powerful documentary, and I was excited to see that it will continue to be screened more this summer, and I’m excited to hear more reactions from the film, because it’s something that definitely needs to be seen, so congratulations. 

H: We’re going to be at the Avalon Theatre, so on June 5th I’ll be there if you can stand seeing it again. CBS Sunday Morning is doing a piece about it and it looks like they’ll be filming the question and answer there, and David Johnson (and Jamie Shoemaker) will be there as well. Hope maybe you’ll be there!

 

Note: HNN did go to the screening at the Avalon Theatre. You can read Andrew Fletcher's excellent write-up of the event here.

The Vatican's Latest Official Document Is An Insult to the LGBTQ Community and History

Martyrs Saints Sergius and Bacchus

 

During the fourth century, Sergius and Bacchus, two inseparable Syrian soldiers in the Roman emperor Galerius’ army, were outed as secret Christians when they refused to pay homage to the god Jupiter. The incensed emperor ordered them beaten, chained, and then, as their fourth-century hagiographer explained, paraded through the barracks with “all other military garb removed… and women’s clothing placed on them.” Both men were sent to trial; Bacchus refused to abjure his faith in Christ and was beaten to death by his fellow Roman soldiers as punishment. The night before Sergius was to be similarly asked to recant his Christianity, the spirit of Bacchus appeared before his partner. With his “face as radiant as an angel’s, wearing an officer’s uniform,” Bacchus asked, “Why do you grieve and mourn, brother? If I have been taken from you in body, I am still with you in the bond of union.” 

 

Bacchus continued to offer his protection to Sergius, steeling the resolve of the latter, so that when Sergius was tortured and murdered the following day, he remained steadfast in his faith and love, the very voice of God welcoming the martyred saints into heaven as a pair. Historian John Boswell explained that in writings about the two, they were often referred to as “sweet companions” and “lovers,” with Sergius and Bacchus representing “to subsequent generations of Christians the quintessential ‘paired’ military saints.” There’s an anachronism to the term perhaps, but there’s credible reason to understand both Sergius and Bacchus as a gay couple. And, most surprisingly for some, the early Church had no issue with that reality.

 

Sergius and Bacchus were not a token example of same-sex love countenanced by the Church in the first millennium; there are several other pairs of canonized romantic partners, and both the Orthodox and Catholic Churches allowed for a ritual of same-sex confirmation called Adelphopoiesis. This ritual, which sanctified unions of “spiritual brothers,” was common among monks in the Latin rite West until the fourteenth century and continued in the East until the twentieth century. Boswell wrote in his (not uncontroversial) 1994 study Same-Sex Unions in Pre-Modern Europe that, far from the blanket homophobia which we often see as tragically defining Christianity, adherents of the early Church saw much to “admire in same-sex passion and unions.” In his book, Boswell argued from his extensive philological knowledge of sources in a multitude of original languages that Adelphopoiesis was not dissimilar to marriage, and that it allowed men to express romantic feelings towards their partners in a manner not just allowed by the Church, but indeed celebrated by it. 

 

Obviously, this is a history of which Bishop Thomas Tobin of Providence is unaware, having tweeted on June 1st that “Catholics should not support or attend LGBTQ ‘Pride Month’ events held in June. They promote a culture and encourage activities that are contrary to Catholic faith and morals,” and with seemingly no trace of either irony or self-awareness added “They are especially harmful for children.” Medieval commentators wouldn’t necessarily fault a bishop on his lack of historical expertise or context, the healthy anti-clericalism of the period acknowledging that the intellectual heft of the Church wasn’t always necessarily robed in priestly vestments. 

 

However, no such forgivable ignorance concerning the complicated history of gender and faith can be proffered on behalf of the Vatican document “Male and Female He Made Them,” released on Monday, June 10th, which condemns what it calls “gender theory,” and which even more egregiously and dangerously denies the very existence of transgender and intersex people. Hiding the reactionary cultural politics of the twenty-first century under the thin stole of feigned eternity, the author(s) write that Catholic educators must promote the “full original truth of masculinity and femininity.” 

 

From a secular perspective, much prudent criticism can be made concerning this document’s obfuscations and errors. A physician can attest to the existence of both transgender and intersex people, making clear that to define away entire categories of humans and their experience is its own form of psychic violence. The much-maligned gender theorist could explain the definitional fact that biological sex, as broadly constituted, can’t be conflated with the social definitions and personal experience of gender. As philosopher Judith Butler writes in her classic Gender Trouble: Feminism and the Subversion of Identity, “There is no gender identity behind the expressions of gender; that identity is performatively constituted.” 

 

Beyond the unassailable secular critiques of the Vatican’s recent comments on gender theory, there are historical criticisms that can be leveled against it. To claim that the Vatican’s recent statement is incorrect from both medical and sociological positions is one thing, but I’d venture that it also suffers from a profound sense of historical amnesia, as demonstrated by the icons and mosaics of Sergius and Bacchus which hang in Roman basilicas. The so-called “Culture War” which defines twenty-first century politics infiltrates the Church every bit as much as it does any other earthly institution, but conservatives like Bishop Tobin cloak what are fundamentally twenty-first century arguments in the language of posterity, claiming that the Church’s position on gender has been continuous and unchanged. Don’t fall for it. 

 

While it’s doctrine that the Church doesn’t change its teachings, a cursory glance at the history of Christendom demonstrates that that’s a hard position to hold in any literal sense. Furthermore, while the Church has evolved over the centuries in at least a temporal manner, it doesn’t always abandon the more intolerant for the more progressive – in some regards our forerunners actually had more welcoming positions. A reading of either Same-Sex Unions in Pre-Modern Europe or Boswell’s earlier book Christianity, Social Tolerance, and Homosexuality: Gay People in Western Europe from the Beginning of the Christian Era to the Fourteenth Century illuminates that definitions of “masculinity” and “femininity” change over the course of history, and that the Church of late antiquity and the Middle Ages could sometimes have a surprisingly tolerant understanding of homosexual relationships. 

 

An irony is that even the well-known history of the Church demonstrates the manner in which understandings of heterosexuality, not to speak of homosexuality, can change over the centuries. The ideal of marriage as primarily a romantic institution – a union of a woman and man in love who produce children and exist in a state of familial happiness – is one that doesn’t widely emerge until the Reformation, as celebrated by early evangelicals in the marriage of the former monk Martin Luther to his wife, the former nun Katharina von Bora. This ideal of marriage became widely adopted by Christians both Protestant and Catholic, but it’s obvious that the priestly ideal of celibacy (itself only made mandatory in the eleventh century) is by definition not heteronormative. Our understandings of romance, family, sexuality, and gender have been in flux in the past – within the Church no less – and no amount of thundering about “How the Vatican views it now is how it has always been” can change that. And as Boswell’s studies make clear, there are Catholic traditions from the past that are preferable to those of today, with current opinions having more to do with right-wing social politics than with actual Christian history. 

 

For the Medieval Church, homosexuality wasn’t necessarily condemned more than other behaviors, and as Boswell writes “when the Christian church finally devised ceremonies of commitment, some of them should have been for same-gender couples.” Monks committed themselves to each other as icons of Sergius and Bacchus smiled down, and an expansive and different set of relationships, some that we’d consider homosexual by modern standards, were countenanced. This is crucial to remember, a legacy more in keeping with Pope Francis’ welcome claim of “Who am I to judge?” when asked how he would approach lesbian and gay Catholics and less in keeping with his papacy’s unwelcome document released this week. 

 

Writing as a baptized Catholic who welcomes and celebrates the important role that the Church has played for social justice (despite Her copious sins), and who furthermore understands that the energy of the Church has often been driven by her committed LGBTQ parishioners who do the difficult work of faith despite the Vatican’s intolerance, I believe it’s important to enshrine the legacy of men like Sergius and Bacchus. Bluntly, the Vatican’s decision to release their statement is hateful, even more insulting during Pride Month; Bishop Tobin’s remarks are hateful; the reactionary line of the Magisterium is hateful. Not only is it hateful, it’s ahistorical. For LGBTQ Catholics, it’s crucial to remember that the Magisterium has never been synonymous with the Church. Editor of America Magazine and vital progressive voice, the Jesuit priest Fr. James Martin writes that the Vatican’s document is one where the “real-life experiences of LGBT people seem entirely absent.” Presumably such an act of erasure would include Sergius and Bacchus, who unlike any living bishop are actual saints. 

 

As the former Jesuit James Carroll eloquently wrote in his provocative article from The Atlantic, “Abolish the Priesthood,” there are ways to be Catholic that don’t reduce the faith to idolatrous clericalism, suggesting organizations of lay-worshipers who “Through devotions and prayers and rituals... perpetuate the Catholic tradition in diverse forms, undertaken by a wide range of commonsensical believers…Their ranks would include ad hoc organizers of priestless parishes; parents who band together… [and] social activists who take on injustice in the name of Jesus.” As a humble suggestion, perhaps some of these lay parishes would consider resurrecting a venerable ritual, the commitment of Adelphopoiesis? For such should rightly be regarded as a sacrament, a proud reminder during Pride Month of the love and faith which once motivated two martyred soldiers. 

Why is Brazil so American?

 

A recent article written by Jordan Brasher about a “Confederate Festival” held in a town in the countryside of São Paulo State, Brazil, drew the attention of readers in the United States. It was not the first time this festival had made the news in America; three years earlier, the New York Times had reported on this same event. The difference now is that Brazil is ruled by a new president, Jair Bolsonaro, someone who is becoming renowned for his controversial opinions on race, gender, sexuality and other topics important to various sections of Brazilian civil society. This pulls the “legacy” of the American Confederate movement into the centre of an ethnic-racial discussion at a delicate time in the history of Brazil. To make the situation worse, last May the president of Brazil visited Texas and saluted the American flag just before giving his speech. Many in the Brazilian media subsequently accused Bolsonaro of being subservient to the U.S.

 

It may be difficult for an American audience to assimilate this information. After all, apparently, there is not a great deal of convincing evidence within Brazil's history, language or culture of its affectionate bond towards the U.S.A. However, the truth could not be more different. Since the 18th century, the United States has greatly influenced the history of Brazil. Our Northern neighbor gradually became a role model for some of Brazil's failed independence revolutionary leaders, such as those of the Minas Gerais Conspiracy.

 

Nevertheless, it is also true that after gaining independence from Portugal in 1822, the choice of a Monarchic regime diminished the impact of American influence until November 1889, when Brazil became a Republican Nation and looked upon the U.S. Constitution as its ideal inspiration. This can be exemplified both by the adoption of the Federated Regime and by the introduction of a new name: United States of Brazil (this remained Brazil's name until 1967). At the dawn of this new regime, Americanism inspired our own diplomatic model and even briefly enchanted some of our intelligentsia, especially within the education field.

 

The 1930s were characterized by a process of modernization of our society with assurance of rights, industrialization and education. This could be considered a kind of Americanism with a Brazilian twist, as this process was not led by civil society but by Getúlio Vargas, an authoritarian ruler of Brazil from 1930 to 1945. Such a modernization required the arrival of immigrants from various parts of Europe, especially from Italy. Thus, in the same way that had happened in the United States, a distorted image of this cultural melting pot gained traction in the Brazilian identity as a solution for a multiethnic society composed of Indigenous peoples, Europeans and peoples of African descent. 

 

Then automobile factories arrived, and gradually the American inspiration once felt in our political structure spread to the cultural sphere. Symbolically, the great ambassador of this transition was the Disney cartoon character José Carioca, a rhythmic parrot from Rio de Janeiro. Alongside him was Carmen Miranda, who, after a long stay in the United States, returned to the Brazilian stage singing: “they said I came back Americanized.” Our Samba began to incorporate elements of Jazz, and Bossa Nova became another vehicle uniting Brazil and the US, exemplified by the partnership between Tom Jobim and Frank Sinatra.

 

In the 1970s, a new dictatorship accelerated this process of Americanization of the Brazilian cultural sphere. This was because of the Military's projects that occupied territories of the West (our version of American expansionism and the “Wild West”), and because the regime seemed to unleash a more “selfish” type of society. In other words, the project of a development cycle implemented by the Brazilian Military Regime accentuated values such as individualism, consumerism and the idea of self-realization. The self-made man became the norm in our country. As the historian Alberto Aggio points out, “after 20 years of intense transformation of society, Americanism has reached the ‘world of the least affluent’ and because of this the ‘revolution of interests’ has reached the heart of social movements”.

 

And that was how Brazilian society, from the lowest class up to the highest, became interested in American culture and tried to emulate its habits and cuisine (as we can see in the current “cupcake revolution” in our bakeries), venerated American television series and artists, Americanized children's names, incorporated words into its vocabulary, and even changed its predilection for sports. For instance, it is not difficult to find a fan of NBA or NFL teams in our country. This is why, though embarrassing, it is fully understandable that our current president, a retired military man from the Brazilian middle class, has this kind of genuine admiration for Trump and his followers. 

 

In addition to the vastness of its territory, its multiethnic identity, the adoption of presidential federalism, industrial modernization and middle classes inspired by the American way of life, there are other elements that bring Brazil closer to the United States. These include their shared history of slavery, the racism bequeathed by that despicable practice and its counter-legacy of social activism. Starting in the 1930s, the Brazilian black movement tightened its ties with African American intellectuals and activists, which generated exchanges of writings, symbols and experiences of resistance. At first, Brazilians inspired American blacks, especially in the North, but the Civil Rights Movement reversed the terms of influence. The Black Power Movement, particularly the Black Panther Party with its charismatic leaders and affirmative policies, reshaped the Brazilian black movement, as described by Paul Gilroy in his book The Black Atlantic. Similarly, transformations in the production of studies on slavery in the United States, beginning in the Civil Rights era, also impacted Brazilian academic studies from the 1980s onwards, inspiring the historiography dedicated to this subject in a decisive way. However, not everything had a positive outcome. The reaction towards the empowerment of subordinate sections of Brazilian society, especially black people, shed light on Brazilian racism that had been veiled until that point. This reveals a difference between Brazil and the USA: while racism has always been somewhat explicit in American culture, in Brazil the absence of any “WASP pride” and broader adherence to an idealized image of a cultural melting pot society has often led to “informal” demonstrations of racism. For example, the practice of asking a black customer to leave a restaurant at the request of a white customer was common until the 1990s. Thus, on the one hand Americanism stimulated a greater struggle for rights and representativeness, yet on the other, it helped bring racism to light.

 

Regarding the “Confederate Festival”, its origins can be traced back to traditions established by a colony of Southern U.S. immigrants that reached São Paulo State in the late 19th century. However, I would not be surprised if, after acknowledging the existence of this Festival, several Brazilians were to spontaneously replicate that “celebration” elsewhere in our country. After all, as the American social scientist and aficionado of Brazil, Richard Morse, once wrote, Latin Americans had the chance to embark on a path to the West without “disenchanting” their culture, in the Weberian sense of the concept. To describe the relationship nurtured by Latin America and the U.S., Morse created a metaphor: a mirror that reflects an inverted image of itself. Apparently, Brazilians got used to being this mirrored version of Northern America. Or, put in terms that an Americanized Brazilian, perhaps even Jair Bolsonaro, would understand: we live in a Stranger Things-style Upside Down of the U.S. 

The Origins of American Hegemony in East and Southeast Asia – And Why China Challenges It Today

 

 

It is hard to ignore the escalating rivalry between the United States and China. The Sino-U.S. trade war hogs the headlines; China’s explicit ambitions for hegemony in the Asia Pacific have induced the Trump administration to increase U.S. defense spending and strengthen its partnerships with Asian allies; and experts wrestle with the matter of China’s challenge to America’s longstanding hegemony in East and Southeast Asia, pondering the decline of U.S. influence.

 

But how did America rise to hegemony in East and Southeast Asia in the first place? The popular history of U.S. involvement in Asia after 1945 suggests that American predominance faded fast. By the 1950s, U.S. military supremacy had been punctured in Korea. Instead, Chinese forces proved formidable, driving the U.S.-led coalition deep into South Korea, prompting General Douglas MacArthur’s desperate (and unheeded) call for Washington to use atomic weapons on China. Further humiliation would greet America’s military in Vietnam. Richard Nixon’s rapprochement with China in the early 1970s seems an effort to slow America’s waning paramountcy in world affairs. 

 

If America suffered mostly embarrassment and defeat in Asia, how could U.S. hegemony have emerged in the region? Commentators have rarely addressed this question, stating simply that America enjoyed a “surprisingly advantageous” position in Southeast Asia despite failing in Vietnam.

 

My book, Arc of Containment: Britain, the United States, and Anticommunism in Southeast Asia (Cornell UP), examines the under-studied rise of U.S. hegemony in Southeast Asia during the Cold War and its impact on wider Asia. It shows that after 1945, Southeast Asia entered a period of Anglo-American predominance that ultimately transitioned into U.S. hegemony from the mid-1960s onward. Vietnam was an exception to the broader region’s pro-U.S. trajectory.

 

British neocolonial strategies in Malaya and Singapore were critical to these developments. Whereas Vietnamese and Indonesian revolutionaries expelled the French and Dutch, Britain collaborated with local conservatives in Malaya and Singapore, steering them toward lasting alignment with the West. Drawing from Britain’s repertoire of imperial policing and counterinsurgency, British and Malayan forces decimated the guerrillas of the Malayan Communist Party while Singapore’s anticommunists eliminated their leftist opponents. Beyond Malaya and Singapore, Thailand had already turned toward America (to resist Chinese influence) and the U.S.-friendly Philippines played host to massive American military installations. By the early 1960s, therefore, a fair portion of Southeast Asia had come under Anglo-American predominance. 

 

The rise of Malaysia would strengthen the Anglo-American position. In 1963, when Singapore moved toward federating with Malaya and Britain’s Borneo territories (Sabah and Sarawak) to create Malaysia, President John Kennedy declared Malaysia the region’s “best hope for security.” After all, Kennedy officials had envisioned that forming Malaysia would complete a “wide anti-communist arc” that linked Thailand to the Philippine archipelago, “enclosing the entire South China Sea.”

 

But Malaysia did not enjoy universal acclaim. President Sukarno’s left-leaning regime in Jakarta and his main backers, the pro-China Indonesian Communist Party (PKI), deemed Malaysia a British “neocolonial plot” to encircle Indonesia. Sukarno had fair grounds for his accusation. Malaya had, soon after independence in 1957, aided Britain and America’s botched attempt to topple Sukarno. Moreover, British military bases in Singapore had supported this effort and were set to remain under London’s control for the foreseeable future. Sukarno’s answer was Konfrontasi (confrontation), a campaign to break Malaysia up. His belligerence would bring his downfall and usher Indonesia into America’s orbit.

 

Malaysian officials responded to Konfrontasi by launching a charm offensive into the Afro-Asian world that diplomatically isolated Indonesia, while British troops secretly raided Indonesian Borneo to keep Indonesia’s military on the defensive. These moves worked well. In 1964, Afro-Asian delegates at the annual non-aligned conference condemned Konfrontasi; in January 1965, the UN Security Council accepted Malaysia as a non-permanent member, legitimizing the federation and undermining Sukarno’s international influence. Equally, Konfrontasi severely destabilized Indonesia’s economy and battered its armed forces. Now, conservative elements of the Indonesian Army, which America had courted and equipped since the late 1950s, prepared to execute a coup d’etat. When a few elites of the PKI attempted in October 1965 to preserve Sukarno’s authority, Major General Suharto led the Army’s right-wingers to seize power, alleged that the PKI and its Chinese patron intended to subvert Indonesia, and massacred the PKI (Sukarno’s power base) in a bloody purge. The Suharto government then tilted Indonesia—the world’s fifth largest nation—toward Washington and broke diplomatic relations with China. 

 

Ironically, President Lyndon Johnson Americanized the Vietnam conflict that same year—supposedly to rescue Southeast Asia from communism—when most of the region’s resources and peoples already resided under pro-West regimes opposed to Chinese expansionism.

 

Southeast Asia began transitioning from Anglo-American predominance to U.S. hegemony at much the same time. Konfrontasi had so taxed Britain’s economy that British leaders (in opposition to Prime Minister Harold Wilson) insisted on a full military retreat from Malaysia and Singapore. As British power waned, Singapore and Malaysia entered America’s sphere of influence, throwing themselves into supporting the U.S. war in Vietnam. As America faltered in Vietnam, it also raced to consolidate its newly-acquired ascendancy in Southeast Asia, forging intimate ties with, and pumping economic and/or military aid into, Indonesia, Malaysia, the Philippines, Singapore and Thailand. These nations formed a geostrategic arc around the South China Sea. 

 

Here, then, are the overlooked origins of U.S. hegemony in East and Southeast Asia. 

 

For, by the late 1960s, the arc of containment had effectively confined the Vietnamese revolution and Chinese regional ambitions within Indochina, causing Premier Zhou Enlai to express frustration that China was “encircled” and increasingly “isolated” from regional and world affairs. In this light, Nixon’s rapprochement with China was undertaken not from a position of weakness but de facto hegemony in East and Southeast Asia. Indeed, Nixon found Chinese leaders eager to thaw relations with America. Even when the Indochinese states came under communist control in 1975, the arc of containment remained firm, its leaders keen to reinforce their ties with America. 

 

It seems remiss to contemplate Sino-U.S. rivalry today without acknowledging this history of American hegemony in East and Southeast Asia, particularly the outsized role of regional actors. For while China has today mounted a profound challenge to America in Asia, during the Cold War America’s generous economic programs and overwhelming military power won little in Indochina. Rather, U.S. hegemony before, after, and during the Vietnam War was created by anticommunists who chose to cast their lot with America against Chinese expansionism. In the days ahead, it is likely that the regional powers’ choices and actions will again determine how the Sino-U.S. rivalry plays out.

The Cold War Spy and CIA Master of Disguise: Writing the History of CIA Tactics in the Cold War

 

Jonna Mendez is a former Chief of Disguise with over twenty-five years of experience as a CIA officer working in Moscow and other sensitive areas. She is the coauthor with husband Tony Mendez of Spy Dust and her work has been featured in the Washington Post, WIRED, NPR, and other places. Her husband, Antonio (Tony) Mendez, perhaps best-known from his book-turned-film ARGO, was one of the most celebrated officers in CIA history. He, sadly, passed away in late January. THE MOSCOW RULES: Tactics That Helped America Win the Cold War is their last book together.

 

 

What was it like going from being “in disguise” as a CIA agent to the whole world knowing that you were once an operative? What was that transition like?

 

I worked for the CIA for 27 years. That whole time I was under cover, whether living in the US or overseas. The cover would vary to fit my circumstances. It usually revolved around other official US government entities. While my colleagues knew, of course, of my true affiliation, my social contacts did not. This would include some close friends over many years – who thought I worked a very boring job for the US government. Some members of my family knew, but none of my friends. When Tony and I came out publicly, it created a good deal of friction with friends I was close to, and in fact I lost several friends who could not believe that I had deceived them over the years. That was painful. My foreign friends probably understood better than my American ones. It was also actually difficult to speak publicly at first. We were so inured to obfuscating that speaking the truth about such a simple thing was hard.

 

What do you think your personal role is in history and what was it like writing about it?

 

Tony Mendez and I worked together for many years. After our marriage the duality continued. When we began speaking and writing about our work, we did it together. Of course, he was the catalyst for our being able to speak – when others could not. But we had done much of the same work and we had many similar experiences. I think his role in history is heroic, while my role will be helping to publicly un-demonize the CIA. We thought that our role was to personalize the CIA; to demonstrate that it was composed of normal Americans trying to do the best job possible for their country. An apolitical group of really excellent employees. It may sound simplistic, but I think that together we opened up the door to afford a peek inside – at the machinery of this government agency and the people who work there.

 

I also feel that I had a creative role to play in the Disguise arena. We were beginning to produce very advanced disguise systems, modeled after some we had seen in Hollywood, and they became necessary tools in the denied areas of the world, the hard-to-work-in places where surveillance would almost prevent you from working at all – like Moscow. We were constantly innovating and creating new tools to enable our case officer colleagues to work on the streets even though they were surrounded by surveillance.

 

Does the current political climate shape how you discuss your work as an author and as a former CIA agent?

 

The politics do not shape the discussion as much as the need for sensitivity to the information that is classified. The CIA maintains a fairly tight rein on its former employees, insisting on publication review of any written material and keeping a watchful eye on public discussions. It is not politics that limit what we say, but the need to protect sources and methods. I have always been glad to comply. I have no desire to divulge classified information. On the other hand, when the CIA has seemed heavy-handed, I have not hesitated to question their decisions. Neither Tony nor I have felt constrained by the CIA in what we say or write.

 

You were a clandestine photographer and are still an avid photographer. What are the similarities and differences between preserving history through photography and the written word?

 

I really do believe that a photo is worth a thousand words. When two people are caught in the act of passing classified information, when the license plate of the car is clear in the print, when the face of the traitor is captured on film, this is evidence that is incontrovertible. In fact, no words are necessary. The photo is proof. But I would never dismiss the written word, the analytical approach to solving the problem, the connecting of the dots. However, if you have a photograph of the minutes of the meeting, or the scene of the crime, you have proof positive. Historically you want to have both.

 

As a member of the Advisory Board for the International Spy Museum, can you speak on public history and the importance of sharing your knowledge with wide audiences?

 

I see this as the primary role of the museum, an opportunity to educate the public and to shine some light on an area that has typically been off limits – the world of espionage. The American public is fascinated by this covert world and seems always interested in the subject. Being a member of the Spy Museum gives me an opportunity to explain how it works, how the tools are used through expansive training programs, and what the work product might look like. We are an international museum, so we approach these subjects with a wide-angle lens, so to speak. The museum connection offers a rare opportunity to connect with and educate the public at large.

 

There is a fascination of spy life that is often portrayed in the media, particularly in movies and television. Do you think this excitement is justified? Are there accurate portrayals?

 

It took me years to understand this fascination. I believe it is based in part on the pop culture image of the spy (Ian Fleming, Graham Greene, John LeCarre), and also on the lure of the unknown, the secrecy surrounding all intelligence work. There is a basic curiosity about the work, and an assumption about the glamour surrounding the work, that draws the public in. If they only knew that for every five minutes of excitement, there are hours and hours of mundane planning, meetings and administrative details. There are few portrayals that I have seen that seem real, and that is why I really don’t watch much espionage-themed media. One exception was The Americans – a TV show that I believe thoroughly captured the ethos of the culture of the spy. The characters seemed real, the situations close to life, and the disguises were fabulous. BBC also did some nice productions of John LeCarre’s work. And Jason Matthews’ recent novels have an ability to place me back on the snowy streets of Moscow with danger around each corner.

 

As the former Chief of Disguise, are there any historical events that you think disguises played a role in? If not, how do you think disguises have helped shape the history of the world?

 

Yes, there are a number of historical events that revolved around the use of disguise and we have described some of them in our new book, The Moscow Rules. In a city where we could not meet face-to-face with our foreign agents, where the KGB surveillance was smothering our case officers, and where the use of tradecraft was the only thing that allowed our operations to take place, disguise was a tool that allowed operations to move forward. We used unique proprietary disguise techniques, derived from the make-up and magic communities in Hollywood, to protect our CIA officers and their Russian agents. These tools allowed the intelligence product to be delivered to American hands, resulting in a number of incredibly successful clandestine operations in the Belly of the Beast, the name we gave to Moscow. Failure in Moscow would result in the arrest and execution of our foreign assets. This was a life and death situation.

 

You also co-authored the book Spy Dust with your husband Tony Mendez. Why did this one seem important to write next?

 

Spy Dust was a natural follow-on to The Master of Disguise. We met our editor over cocktails after the publication of MOD, and she asked how we had met during our days in the CIA. When she heard the story, she basically commissioned the next book, Spy Dust; she thought it would make a very interesting book. As it turned out, her publishing house was not the one that bought the manuscript. In fact, there was a heated discussion, once the manuscript was done, about whether our romance belonged in the middle of a spy story. We insisted that there was no book without that story, and so it stayed. It was difficult to write, as it involved the break-up of my marriage, but it was important to us, on several levels, to tell the story truthfully. And so we did.

 

Why should people read The Moscow Rules? What message do you hope they take away from it?

 

Many people feel that the Cold War is over and that we should move on with normalized relations with our old antagonists. The Moscow Rules opens with a late night scene at the gate of the American Embassy in Moscow. Set in June 2016, it details the savage beating of an American diplomat by the FSB, successor to the KGB, as he attempts to enter his own embassy. The beating continued into the embassy foyer, legally American soil. The American was medically evacuated the next day with broken bones. This was in 2016, in the middle of our most recent presidential campaign.

 

The FSB was exhibiting a consequence of The Moscow Rules: the heretofore unwritten but widely understood rules of conduct for American intelligence officers in Russia. My best guess was that the American had violated one of those rules: Don’t harass the opposition. The FSB is heavy-handed, as is Putin, a former intelligence officer.

 

The Moscow Rules were the necessary rules of the road when working in Moscow, the understood methods of conducting yourself and your intelligence operations that had proven themselves over the years. They were never before written down, but were widely understood by our officers. And they are dirt simple: Use your gut. Be nonthreatening. Build in opportunity but use it sparingly. Keep your options open. Use misdirection, illusion and deception. All good examples of The Rules.

 

What do you hope this book adds to the legacy of your husband, Tony Mendez, as well as your own?

 

The Moscow Rules is Tony’s fourth book and my second, or third if you count my work on the book ARGO. Neither of us is looking for a legacy. Tony’s legacy is already well established; my goal lies more in the educational area. We always believed that our unique opportunity to speak for the CIA and to educate the public on the work that is done in their name was a chance to open the door to a myriad of career opportunities for young Americans who might never give the intelligence field a second thought. While I am not a traditional feminist, I can serve as an example of the continuing success of women in this field. Between the two of us, in our work with the International Spy Museum and in the books we have written, we have tried to further these same goals.

The Origins of the Lost Cause Myth

 

 

The two most significant issues that led to war between the North and South were, most scholars acknowledge, slavery and states’ rights. Every Northern state had enacted abolition, whether immediate or gradual, by 1804, when New Jersey became the last to do so, and the North, with an economy that did not depend on the labor of slaves, demanded that the South do the same. Yet in demanding that the South follow suit, Southerners maintained, the North was contravening states’ rights—the principle that each state had the right to craft and implement its own laws and policies without federal intrusion. Yet while the two were independent issues in theory, in praxis they were not. That comes out in Southern apologist George William Bagby’s somewhat mawkish essay, “The Old Virginia Gentleman”:

 

Fealty to the first great principle of our American form of government—the minimum of state interference and assistance in order to attain the maximum of individual development and endeavor—that was the Virginian’s conception of public spirit, and, if our system be right, it is the right conception.

 

Aye! but the Virginian made slavery the touchstone and the test in all things whatsoever, State or Federal. Truly he did, and why?

 

This button here upon my cuff is valueless, whether for use or for ornament, but you shall not tear it from me and spit in my face besides; no, not if it cost me my life. And if your time be passed in the attempt to take it, then my time and my every thought shall be spent in preventing such outrage.

 

According to Bagby, it was effrontery for Northerners to demand an end to slavery in the South. Once such a demand was made, the two issues became dependent. Southerners, in Bagby’s view, fought hard to keep their slaves only, or at least chiefly, because they were told that they could not keep them.

 

That to which Bagby alludes has come to be called the Lost Cause—a sort of treacly revisit to the days before the Civil War, to a paradise moribund, never again to be revived. The Southern attitude is in some sense easy to understand. With up to 700 thousand men lost in the war on both sides—some 100 thousand more Union soldiers than Confederate soldiers and astonishingly nearly 25 percent of the soldiers on each side—Southerners did not have recourse to the sort of warrant available to Northerners: We won the sanguinary war, so that in itself is proof that God’s justice was on our side. Southerners, to justify the loss of some 260 thousand men, had to try to understand, from their perspective, why God slept while they fought.

 

The term Lost Cause was first used by Edward A. Pollard in The Lost Cause: A New Southern History of the War of the Confederates, published the year after the Civil War. Three Southern publications—Southern Historical Society Papers (1869), Southern Opinion (Richmond, 1867), and Confederate Veteran (1893)—entrenched the term and gave birth to a movement. The impassioned, lucid voice of Gen. Jubal “Old Jube” Early, the hero of the Battle of Lynchburg, who spent the final years of his life in Hill City after the Civil War, was prominent in Southern Historical Society Papers.

 

In 1866, Early wrote A Memoir of the Last Year of the War for Independence, in the Confederate States of America, in which he stated that he initially opposed the secession of the Southern states from the Union, but firmly changed his mind because of “the mad, wicked, and unconstitutional measures of the authorities at Washington, and the frenzied clamour of the people of the North for war upon their former brethren of the South.” Lincoln and his cronies were the real traitors to the Constitution. Recognizing the right of revolution against tyrannical government “as exercised by our fathers in 1776, … I entered the military service of my State, willingly, cheerfully, and zealously.”

 

The Civil War, Early unequivocally said in The Heritage of the South, was never about slavery from the perspective of the South. “During the war, slavery was used as a catch word to arouse the passions of a fanatical mob, and to some extent the prejudices of the civilized world were excited against us; but the war was not made on our part for slavery.”

 

Early argued that Southerners had long ago grasped that there was nothing objectionable, moral or otherwise, about the institution. Slavery was a natural state of affairs for Blacks, he stated, because of their biological inferiority.

 

The Almighty Creator of the Universe had stamped them, indelibly, with a different colour and an inferior physical and mental organization. He had not done this from mere caprice or whim, but for wise purposes. An amalgamation of the races was in contravention of His designs, or He would not have made them so different.

 

Blacks, added Early, in their state of subjugation were better off than they were in West Africa, where they wallowed in barbarism, sometimes to the extent of practicing cannibalism. In addition, said Early, black slaves on Southern plantations and in Southern cities were certainly better treated than the Blacks, and Whites, in Northern industrial sweatshops. 

Early next turns to an oft-given objection to slavery: Jefferson’s Declaration of Independence. The argument asserts that slavery is wrong, because “all men are created equal.”

 

The assertion that “all men are created equal,” was no more enacted by that declaration as a settled principle than that other which defined George III to be “a tyrant and unfit to be the ruler of a free people.” The Declaration of Independence contained a number of undoubtedly correct principles and some abstract generalities uttered under the enthusiasm and excitement of a struggle for the right of self-government. … If it was intended to assert the absolute equality of all men, it was false in principle and in fact.

 

Observation, Early intimates, is sufficient to show the de facto falsity of the equality of Blacks and Whites.

 

Yet Jefferson did intend the equality of all men in his Declaration. In his original draft of the document, he castigates George III for keeping “open a market where MEN should be bought & sold.” The capitalization of men is Jefferson’s, and it serves to underscore the notion that Blacks qua men are deserving of the same fundamental rights as all other men. Moreover, Jefferson was a cautious, diligent writer who—his Summary View of the Rights of British America perhaps being an exception—was not wont to be moved by “enthusiasm and excitement.”

 

Jefferson did likely believe that Blacks were intellectually and imaginatively inferior to Whites. Yet in Query XIV of Notes on the State of Virginia, he stated that Blacks, whatever their intellectual or imaginative deficits, were morally equivalent to all other persons, and thus they were undeserving of “a state of subordination.” That Isaac Newton was intellectually superior to all others of his day was no warrant for his having God-sanctioned rights that others did not have. God’s justice, for Jefferson, looks to the heart, not to the head. Early’s dismissal of Jefferson’s Declaration as containing an ineffective argument against slavery is hasty and unpersuasive.

 

Old Jube then turns to what might be construed as a legal argument for slavery—an argument from precedent. There is constitutional sanction of slavery because there is constitutional sanction of states’ rights.

 

The Constitution of the United States left slavery in the states precisely where it was before, the only provision having any reference to it whatever being that which fixed the ratio of representation in the House of Representatives and direct taxation; that in reference to the foreign slave trade, and that guaranteeing the return of fugitive slaves. Had it been proposed to insert any provision giving Congress any power over the subject in the states, it would have been resisted, and the insertion of such provision would have insured that rejection of the Constitution. The government framed under this Constitution being one of delegated powers entirely, those powers were necessarily limited to the objects for which they were granted, but to prevent all misconception, the 9th and 10th amendments were adopted, the first providing that “The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people,” and the other that: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people.”

 

Early here claimed constitutional warrant for slavery being an issue subordinate to states’ rights. That is not necessarily to say that it was less of an axiological issue—it may be that people on the whole felt more strongly about slavery, pro or con, than about states’ rights—but that the issue of slavery, whatever its worth, was to be decided by each state and not by the federal government. However, Northerners, at least some of them, thought the issue was of such significance that it transcended states’ rights.

 

Here again we might profit in analysis by comparing Early’s view on constitutional warrant with Jefferson’s. Jefferson too worried mightily about the issue of states’ rights and slavery—the Missouri problem brought it into focus—and he too recognized that there was no constitutional warrant for eradication of the institution. Thus, he too maintained that the issue of slavery ought to be up to the individual states.

 

Yet Jefferson did not have the same regard for the sacrosanctity of the Constitution that Early did. For Jefferson, constitutions were living documents that needed overhaul with each generation, as each generation in the main advanced in knowledge and such advances needed to be instantiated.

 

Moreover, Jefferson, following other Enlightenment philosophers, certainly paved the path in his Declaration for a new way of looking at persons and fundamental rights: They were natural, God-given, and given to all “MEN.” Jefferson knew that it was only a matter of time before the Constitution would bend on the issue of slavery, as slavery was an institution at odds with the natural state of affairs, and God’s justice, he says in Query XVIII, “cannot sleep for ever.” And so his attachment to the right of each state to determine for itself the issue of slavery was provisional. For Early, the Constitution’s silence on the issue was the final word, and that seems somewhat desperate.

 

In sum, the problem with Old Jube’s efforts to vindicate the South by showing that its participation in the war was due to states’ rights, not slavery, is that the two issues, theoretically distinct, were in praxis intertwined. One sees that quite plainly in the general’s many arguments in The Heritage of the South that the South did not enter the war on account of slavery.

 

Yet Northerners, and enlightened Southerners like Jefferson, were increasingly coming to see that no group of humans ought to have the status of property to another group—that slavery was one issue that ought not to be swept under the states’-rights rug. One large reason for that illumination was the global respect won by Thomas Jefferson’s own Declaration of Independence over the years and its unbending assertion of the moral equality of all persons.

Silent Spring: Why Rachel Carson’s words still ring true today

 

Rachel Carson was one of the early pioneers of environmental science. She fought against the tide of establishment repression, and her own ill health, to get her meticulously conducted research across. As a woman in science, she was viewed as an outsider, but she made her voice heard through her writing: highly readable and approachable accounts of scientific facts. Man was damaging the environment, and our climate, and action was needed. But the alarm bells she sounded are still often ignored, nearly sixty years later.

 

Women have always been involved in science. Across biology, chemistry, physics and medicine, countless unacknowledged women have participated in scientific endeavours. But, struggling against oppression, their voices have rarely been heard. Rachel Carson is one of the few exceptions. That’s mainly due to the importance of her message, the way she got it across and the battles that ensued.  

 

Rachel Carson was immersed in nature from an early age. Inspired by her mother and their regular nature expeditions, she crafted her observations into beautiful written stories. Although she initially majored in English at University, her love of Biology, combined with her early prowess as a writer, meant she was perfectly placed for her lifetime work.

 

Rachel’s rural upbringing also meant she witnessed the effects of humans on the environment. Several family members worked in the nearby Pittsburgh industrial power plants, with their towering chimneys which spewed toxic chemicals into the atmosphere. One particular toxic chemical would propel Rachel into the limelight. 

 

 

After the success of a trilogy of books about the marine environment, she wrote Silent Spring.  Using rigorous scientific assessment, Rachel explained how pesticides like DDT entered the food chain and damaged a whole host of creatures beyond the intended target. Silent Spring kick-started the environment movement and began a roller coaster ride, as Rachel adjusted to fame and was diagnosed with terminal cancer. 

 

Rachel met resistance at every turn. Silent Spring was hard hitting and attacked commercial companies, condemning them for sacrificing the health of the environment in order to generate more profit. She criticized many scientists who were researching insecticides because chemical companies were pouring money into universities. Big business was under Rachel’s forensic gaze. 

 

Whilst the politicians were initially equally skeptical about Rachel’s findings, the tide started to turn. In 1962, the year Silent Spring was published, President John F Kennedy cited the book and appointed a committee to study pesticide use. Over the next two years, the government increasingly called for heightened vigilance and gradual reductions in the use of environmentally-unfriendly pesticides.

 

Sadly Rachel didn’t live to see the effects of Silent Spring. She died in 1964, at the age of 56. In 1972, DDT was banned in the USA, although it is still in use in some countries. The Clean Water Act was passed in 1972 and the Endangered Species Act in 1973.

 

Rachel’s message didn’t just influence the government or public: it also influenced the business community. As Rachel’s message of environmental and climate protection began to percolate, the executives at Exxon, the world’s largest oil company, wrote a memo. Distributed internally to a select group in 1982, it spelled out that maintaining a stable climate would require “major reductions in fossil fuel combustion”, otherwise, “there are some potentially catastrophic effects that must be considered. Once the effects are noticeable, they may not be reversible.”

 

Today, Rachel Carson’s message is as important as ever. President Donald Trump has alleged that climate scientists have a political agenda. Rachel faced similar accusations, as critics suggested Silent Spring was part of a left-wing conspiracy to bring down capitalism. In 2017, Trump pulled the US out of the landmark 2015 Paris climate agreement, claiming the international deal to keep global temperatures below 2 degrees Celsius was disadvantageous to US industry and its workforce. Trump continues to ignore warnings from his own government agencies, dismissing a 2018 report on the devastating economic consequences of climate change. Any future US president would do well to heed the likes of Rachel Carson and her successors and invest in clean energy policies, protect the environment, and promote biodiversity.

 

Rachel’s legacy also lives on as other women become prominent advocates for combatting climate change. The teenage activist Greta Thunberg is a lucid and unremitting climate campaigner. Speaking at the World Economic Forum in Davos earlier this year, she told attendees: “I don’t want you to be hopeful, I want you to panic. I want you to feel the fear I feel every day. And then I want you to act. I want you to act as if our house is on fire. Because it is.”

 

Rachel Carson was certainly fearful. She saw the smoke of the factories in her childhood and the sight never left her. Without the immediate and far-reaching power of the internet, she used writing as her tool. Rachel knew how to construct a story, and this story won’t have a happy ending unless we act.

 

Like Greta’s, Rachel’s message was unsettling. Cutting-edge science often is. Building hope for the future is part of the equation, but that has to be balanced with an understanding of the impact of our actions on our planet and humankind. The stories of the other biologists, chemists, physicists, and doctors featured in Ten Women Who Changed Science and the World reveal that they too understood the power of science to change lives. 

Make History Accessible: The Case for YouTube  

Crash Course is a YouTube channel that covers historical events

 

History is in trouble. 

 

This is not a new observation. Benjamin M. Schmidt wrote a fantastic piece detailing the severe decline of history majors since the Great Recession. From 2011 to 2017, the total number of history degrees awarded dropped nearly 33%. Moreover, it does not take a lot of effort to notice this decade’s persistent focus on supporting STEM-related ventures at the expense of the humanities. The unfortunate perception is that history is not a valuable undergraduate degree, and history departments can do little to fight back because of their limited resources. This is a crisis.

So how can we, as historically-minded people, alleviate this crisis? Part of the answer could come from the widely popular video-sharing service YouTube. It presents a great opportunity for both professional history educators and amateurs to enhance the public’s interest in history.

YouTube’s Strengths 

 

YouTube might seem like an odd choice. It’s not a service exclusively built for history and has received bad press recently that has hurt the business. But, these issues do not take away from YouTube’s four strengths: design, reach, lack of restrictions, and community-building.    

1. Design: History requires a medium that encourages long-form communication, and YouTube provides just that. A simple way to understand this relationship is: the longer the video, the more likely it is to have ads, creating more advertisement revenue for the creators, their partners, and YouTube itself. Even YouTube’s “recommended” algorithm has been suggesting videos to its users that are longer than the user’s starting video. This intentional design is one of the reasons why YouTube is so popular and provides such a lucrative educational opportunity. There are only benefits in uploading history lectures to YouTube, and its design can enable information to spread like wildfire.   

2. Reach: On a given day, more than a billion people visit YouTube—and that number is only growing. If you are an internet user, chances are you will visit YouTube no matter your age. But, the most impressive statistic is that almost a third of those people are dedicated users who watch multiple channels and spend a vast amount of time using the service. So not only is the reach vast, but it can also be concentrated on particular users. This reach and active user base allows niche histories to thrive that could not have done so without a global audience. It is personalized mass media, and that is an important educational opportunity.   

3. Lack of Restrictions: While a potential weakness, YouTube’s lack of restrictions is also its greatest strength. Theoretically, one can start a multi-million dollar business with just a video camera and editing software—and it has been done many times over, resulting in the phenomenon of the “YouTube Celebrity.” In contrast to undemocratic and centralized cable companies, YouTube is far more democratic and decentralized, which creates a more conducive atmosphere for its users and creators. While cable companies spend enormous amounts of resources garnering millions upon millions of views, YouTube creators spend only time and few material resources to create a smaller but similar impact. If a history professor wanted to share their course online, they would simply need a camera and some editing software to reach millions worldwide.   

4. Community-Building: One reason why YouTube is so dominant is its already-existing communities. YouTube is so ingrained that adopting a new service requires that service to be far better than YouTube for creators and users to even consider switching. Luckily, there is already a healthy amateur history presence on it. Notable examples include the channel “The Great War” (173,372,564 views and 1,030,696 subscribers) and Crash Course’s World History course (54,542,220 views). Moreover, YouTube could also serve as a community “video-library” of sorts, storing everything from historical archive footage to “pop history”. One popular case is the Iowa State University Archives, which transitioned to using YouTube in 2008 and has experienced considerable success.   

These factors, even when combined, do not make YouTube unique. But, currently it is a great forum for historical discussion, appreciation, and education. Granted, it is no substitute for undergraduate study in history, but it is both a great complement and an introduction. Moreover, using YouTube as an educational tool is not a new idea; in fact, it is a successful one. Historian Joe Coohill argued that incorporating images and videos into his lectures had a positive impact, and Alan Marcus makes a similar case for film in secondary education.   

There is a whole list of problems with using YouTube, but they all fall under three general categories: misinformation, disinformation, and the “scarcity or abundance” problem proposed by the late Roy Rosenzweig. But, only the last problem is unique to the internet—the other two are amplified by the internet but are not new issues. If YouTube is not your cup of tea and you prefer Coursera, Khan Academy or a university open source initiative, then the point still stands. YouTube is one suggestion, but the overall point is one of accessibility. Accessibility is central to education, and we should adapt and ensure that more people have access to serious history. As a history student, I encourage historians to use YouTube, for it is their duty to ensure that people know and appreciate the past. This can include uploading lectures onto YouTube, partnering with services like Coursera, consulting with popular history creators or even starting their own podcasts. The important takeaway is that adapting to new communication technologies is imperative, and historians should feel free to experiment. 

A Fresh Take on Watergate Illuminates the Present

 

As evidence of illegal activity in the recent presidential election mounts, the attorney general appoints a special prosecutor. The president, after denouncing the news media for false reporting, calls a press conference to insist he has done nothing wrong.  In court hearings, evidence of campaign dirty tricks and secret pay-offs emerges and a growing chorus of Congressional Democrats call for impeachment proceedings. 

 

While these could be scenes from recent CNN coverage, they actually come from 1973-4, the last years of the Nixon presidency. 

 

Washington journalist John Farrell’s book, Richard Nixon: The Life, provides a fascinating narrative that takes the reader inside the mind of a troubled president obsessed with taking down his perceived “enemies.” 

 

Farrell, a former White House correspondent for the Boston Globe, has written two previous books including Tip O’Neill and the Democratic Century; he brings a deep understanding of Washington D.C. culture and an eye for telling anecdotes. 

 

Farrell’s book is focused exclusively on Nixon, concluding with his death in 1994, and never mentions Donald Trump. However, readers will inevitably reflect on the current presidential crisis as the author leads us through the racist rants, paranoid visions and surreal plotting against opponents that were a regular feature of Nixon’s White House. 

 

While Richard Nixon’s rise and fall has been repeatedly examined — there are more than a dozen biographies of him — Farrell’s account offers many new insights. He has tapped into a rich trove of new material, drawing from the 37,000 hours of White House tapes (many held in secret until 2013), 400 oral histories from Nixon associates compiled by Whittier College and books such as The Haldeman Diaries. Farrell has combed through these voluminous files to craft a day-by-day, sometimes hour-by-hour, account of how the Watergate fiasco unfolded. By weaving together key taped conversations and candid observations from Nixon’s close associates, he takes the reader into the dark world of Richard Nixon’s restless mind. Farrell shows Nixon’s obsession with “enemies” who included Jews, blacks, Democrats and Ivy League “eggheads.”       

                                

Although Trump and Nixon appear very different in demeanor and family background (e.g. Nixon’s father was dirt poor), they share some important personal traits.       

      

Both men had a distant mother and demanding father, both endured the death of a favored older brother and both harbored deep insecurities that led to driving ambition and a “win-at-any-cost” attitude. Both men displayed an unnatural sensitivity to criticism and an obsession with striking back at perceived enemies. Both tried to conduct their affairs in deep secrecy, obtained money from dubious sources and hired unsavory characters to carry out their dirty work for them.    

 

While the basic facts of Watergate have been recounted many times most notably in the book and film of All the President’s Men, these are primarily views from the outside. Farrell’s book takes us inside the White House, detailing the daily interactions of Nixon and his closest lieutenants, Haldeman, Ehrlichman and Kissinger. We see Nixon’s restless mind, mulling foreign policy initiatives and the domestic political scene, but also returning time and again to “getting even” with his enemies, real and imagined.

 

The first years of Nixon’s presidency, 1969-70, were largely successful. He began to wind down the Vietnam War, ended the draft, lowered the voting age, began nuclear weapons limitation talks with the Soviets and founded the Environmental Protection Agency. Although he enjoyed high favorability ratings and seemed assured of re-election, he railed against anti-war protesters and exploded in anger when critical articles appeared in the press. 

 

Sometimes, his impulsive reactions to news events were irrational.  After a Middle East airplane hijacking, he demanded the Air Force bomb Damascus, Syria. Fortunately, his staff and cabinet officers ignored this and other dangerous thoughts and Nixon usually forgot about them and moved on to other subjects.

 

Haldeman and Ehrlichman, whom the press dubbed “the Praetorian Guard” of the administration, generally acted as filters, screening out Nixon’s most irrational instructions before they reached lower-level personnel. Unfortunately, after two years, they suffered from overwork, became distracted and allowed the formation of “The Plumbers,” a group of a dozen ex-spies and free-lance thugs recruited to stop internal leaks and gather damaging material on opponents.

 

The Pentagon Papers

The calm at the White House was shattered on June 13, 1971, when The New York Times published a 5,000-word excerpt from The Pentagon Papers, a 7,000-page study of the origins and conduct of the Vietnam War. It had been taken from the Defense Department by one of its authors, Daniel Ellsberg.  

 

Ironically, Henry Kissinger’s immediate reaction to the story was one of relief. “This is a gold mine,” he told Nixon. “It pins it all on Kennedy and Johnson.” But Nixon reacted differently. He saw it as part of a conspiracy, a plan to bring him down. He worried that other documents might be leaked, ones that revealed the dark secrets of his Vietnam War policies: the clandestine carpet bombing of Cambodia, the plans for bombing dikes in North Vietnam and the consideration of atomic weapons. 

 

Nixon now pressed harder than ever for retaliation, demanding widespread wiretapping and burglaries to obtain opposition files. Earlier restraints on the Plumbers and other operatives were lifted.  

 

At 3 a.m. on June 17, 1972, a team of burglars directed by Gordon Liddy and E. Howard Hunt, clad in business suits and surgical gloves, was arrested after breaking into the Democratic National Committee headquarters in the Watergate office complex. The Washington Post assigned a pair of young reporters, Bob Woodward and Carl Bernstein, to cover the story.

 

This began the two-year-long Watergate scandal that would force Nixon out of office. In late July 1974, the House Judiciary Committee approved three articles of impeachment against Nixon: obstruction of justice, abuse of power and contempt of Congress. The full House was preparing to vote on impeachment when Nixon resigned on August 9 and Gerald Ford became President. 

 

Karl Marx famously wrote in 1852 (commenting on the dictatorship of Napoleon III in France) that “history repeats itself, first as tragedy, then as farce.” 

 

In the course of Donald Trump’s presidency, with its illegal campaign activities and defiance of Congress, history is repeating itself.  So far, it is a farce, but it could quickly turn into a tragedy. 

 

Richard Nixon: The Life provides a fascinating insight into a 20th-century presidential crisis. In 1974, our Constitution and the division of power among the three branches of government were severely tested, but they survived intact. Now, in the 21st century, we can only hope for a similar, successful outcome.

Environmental Historian Sara Dant: "History Is As Relevant Today As Ever."

 

Sara Dant is Professor and Chair of History at Weber State University. Her work focuses on environmental politics in the United States with a particular emphasis on the creation and development of consensus and bipartisanism. Dr. Dant’s newest book is Losing Eden: An Environmental History of the American West (Wiley, 2017), a "thought-provoking, well-written work" about the interaction between people and nature over time.  She is also the author of several prize-winning articles on western environmental politics, a precedent-setting Expert Witness Report and Testimony on Stream Navigability upheld by the Utah Supreme Court (2017), co-author of the two-volume Encyclopedia of American National Parks (2004) with Hal Rothman, and she has written chapters for three books on Utah: “Selling and Saving Utah, 1945-Present” in Utah History (forthcoming), “The ‘Lion of the Lord’ and the Land: Brigham Young's Environmental Ethic,” in The Earth Will Appear as the Garden of Eden: Essays in Mormon Environmental History, ed. by Jedidiah Rogers and Matthew C. Godfrey (Salt Lake City: University of Utah Press, 2019), 29-46, and “Going with the Flow: Navigating to Stream Access Consensus,” in Desert Water: The Future of Utah’s Water Resources (2014). Dr. Dant serves on PhD dissertation committees, regularly presents at scholarly conferences, works on cutting-edge conservation programs, and gives numerous public presentations around the West.  She teaches lower-division courses in American history and upper-division courses on the American West and US environmental history, as well as historical methods and the senior seminar.

 

What books are you reading now?

 

I just finished E.C. Pielou’s A Naturalist’s Guide to the Arctic in preparation for a 12-day river trip on the Hulahula River through the Arctic National Wildlife Refuge.  I feel a real urgency to see this remarkable landscape and its plants and animals before it vanishes or becomes something entirely different as a consequence of global warming and climate change.

 

Currently, I’m reading Doug Brinkley’s epic biography of Theodore Roosevelt, The Wilderness Warrior: Theodore Roosevelt and the Crusade for America, in part to bring some historical context to my involvement with the conservation efforts of American Prairie Reserve, which is attempting to restore some of the ecosystems lost in the late 19th century on the Great Plains of Montana. It’s a book I wish I could assign in my classes because of the sparkling writing and complex historical context Brinkley provides, but it’s 1,000 pages long and I fear my students would likely stampede for the door if they saw it on the syllabus.

 

One other really different book that I read recently is Rob Dunn’s Never Home Alone: From Microbes to Millipedes, Camel Crickets, and Honeybees, the Natural History of Where We Live.  It goes into remarkable detail about the largely invisible-to-us habitat that our homes and even bodies provide and how being dirty can actually be healthy.  It makes you want to change your showerhead immediately, though.

 

What is your favorite history book?

 

I’m not sure I could really pick a “favorite,” but I can tell you about the book that motivated me to pursue environmental history: William Cronon’s Changes in the Land: Indians, Colonists, and the Ecology of New England.  I read this as part of an early America field course in graduate school and, at the time, environmental history (the interaction of people and nature over time) was a relatively new line of inquiry.  Cronon’s elegant discussions of how the introduction of capitalism and the market so transformed nature that “by 1800, Indians could no longer live the same seasons of want and plenty that their ancestors had, for the simple reason that crucial aspects of those seasons had changed beyond recognition” (169) resonate right up to the present.  I have found this book to be incredibly useful and inspirational in my own work but also in the classroom. Cronon has a real gift for explaining complex ideas in a way that makes them accessible to all readers.  I find myself returning to it again and again and each time, taking away something new.

 

Why did you choose history as your career?

 

I did so with great reluctance, actually.  I come from a long line of teachers, so naturally, I wanted to be anything but a teacher.  My undergraduate degree is in Journalism and Public Relations, which was a terrific initiation into writing directly and concisely, although I had a minor in history simply because I loved it.  I did a master’s degree in American Studies with the idea that I could learn broadly about the American past by combining my intrinsic love of history with literature, culture, and economics.  But I had no idea what to do next, so I took a job at a two-year college where I was the entire history department.  That really did it for me.  I loved interacting with students, I found a way to teach environmental history that combined classroom learning with outdoor field experience, and I finally discovered that I was and always had been a historian.  With that resolved, I returned to graduate school, earned my PhD in history, and have been doing what I love - writing, researching, and teaching - ever since.

 

What qualities do you need to be a historian?

 

I think the best historians are inherently curious and tenacious.  We not only ask “why” but also “how did this happen”?  Often, though, the answers to those questions aren’t easy to find, so a good historian has to be a bit of a detective.  In environmental history, we have the advantage of drawing upon other disciplines to help answer tough questions.  If you want to know if log and tie drives occurred on a particular river in the late 19th century, for example, you need to look at journals and newspapers, naturally, but stream-flow and tree-ring data are invaluable as are coniferous tree re-growth rates and forest composition studies.  The best historians are the ones who think unconventionally about their sources. 

 

Who was your favorite history teacher?

 

This is not the typical response: the high school football coach.  Like so many students, I went to a high school where the football coach was also the history teacher.  Unlike most students, I got a brilliant history teacher - Jesse Parker, who taught me to love history and football.  He made my brain hurt.  Then when I went to graduate school, I was fortunate to have LeRoy Ashby at Washington State as my mentor.  His genuine love of history and his brilliance in the classroom inspired and inspires me.

 

What is your most memorable or rewarding teaching experience?

 

I’m not sure I can pick a specific event.  The best part of my job as a professor is watching the lights go on in a student who “gets it.”  I think the best student evaluation comments that I receive are ones where the student “never liked history” before and now is completely fired up.  I had one student, for example, who used the research skills he learned from a class project to completely restructure the recycling practices of the company he works for.  But it’s also equally rewarding when a non-traditional student brags about the previous evening’s dinner conversation where she got to tell her kids what really caused near-extinction for bison “and they thought I was SO smart!”  

 

What are your hopes for history as a discipline?

History is as relevant today as ever.  We have many challenges - environment, politics, social, cultural - that are the end-product of our historical arc.  Understanding how we got here, what has worked and what hasn’t in the past, gives us the best chance of moving forward successfully.  My work places particular emphasis on the creation and development of consensus and bipartisanism, and I firmly believe that people care about what they know, so understanding more about one another facilitates the kind of dialogue and communication that fosters community and sustainability.

 

How has the study of history changed in the course of your career?

History has become ever more inclusive, which makes it more challenging to tell complete stories about the past.  The best history is complicated and messy, just like the present, but figuring out how to convey that complexity can be tricky.  When I wrote Losing Eden: An Environmental History of the American West, for example, I wanted to make it accessible and compelling - I wanted to re-arrange the furniture in people’s heads - so that they could look at the world around them, the American West in particular, with renewed appreciation and clarity.  But it also meant I wasn’t going to write about wars or race relations or politics (much).  The best understanding of the past must ultimately come from reading broadly and deeply across many fields and interpretations and I think the discipline has gotten better about giving voice to the many rather than the few. 

 

What is your favorite history-related saying? Have you come up with your own?

My students know that my favorite question is: at what cost?  Who or what pays the price for the decisions we’ve made and how does that play out over time?  To me, it’s a terrific shorthand for getting at the essence of history - the study of change over time - and for ensuring a comprehensive and complicated understanding of both the past and the present.  It’s the driving question in Losing Eden and I think it’s invaluable to making history about more than just names and dates.

 

What are you doing next?

Next up are a couple of projects - a journal article and a report on the historical uses of rivers in Utah as a window into the larger role of river commerce in the interior West in the late 19th century.  The other is a long-standing project to examine the development and implementation of the Land and Water Conservation Fund - the economic engine behind many significant conservation efforts of the 20th century and a remarkable model of political bipartisanism that endures in the 21st century.

Ideologies and U.S. Foreign Policy International History Conference: Day 1 Coverage

 

On May 31 and June 1, 2019, Oregon State University hosted the “Ideologies and U.S. Foreign Policy International History Conference,” organized by Dr. Christopher McKnight Nichols, associate professor of history and director of the Oregon State University Center for the Humanities; Dr. Danielle Holtz, a post-doctoral fellow at Oregon State University; and Dr. David Milne, a professor of modern history at the University of East Anglia, in Norwich, England. The conference brought together academic and independent historians and political scientists from universities across the US and England to discuss the history of ideology and US foreign policy from many different points of view.

 

The first day of the conference began with introductions from organizers Drs. Nichols, Holtz and Milne. Dr. Nichols, in his introductions, made sure to thank all the major sponsors of the conference, which included Oregon State University, the Richard Lounsbery Foundation, the Carnegie Corporation of New York, Patrick & Vicki Stone, Citizenship & Crisis, the Oregon State University Center for the Humanities, and the Stoller Family Estate. Dr. Nichols also shared with attendees that the essays presented at the conference will, in the near future, become chapters of a book that will delineate the state of the field for what he termed the “intellectual history of U.S. foreign policy,” an emerging disciplinary area that he suggested is roughly ten to fifteen years old. 

 

Dr. Nichols’ introductory remarks concluded that ideas and ideologies have heavily shaped US foreign policy, as affirmed by Michael Hunt in his path-breaking book “Ideology and U.S. Foreign Policy,” published in 1987. Dr. Nichols stressed the need for continuing discussion of the intersection of ideology and foreign policy, with Hunt’s book serving as a framework, and for moving beyond that project to build on the vibrant new directions established by recent work on the role of ideas in U.S. foreign relations broadly defined.  

 

Dr. Nichols’ remarks were followed by remarks from Dr. Holtz, who brought to the discussion an emphasis on the ideological clashes in Trump’s White House to reveal how even seemingly non- or under-ideological administrations can act ideologically; this, she argued, is one reason it is important to evaluate the intersection of ideology and foreign policy historically. She added that Trump’s aggressive posturing when dealing with those who do not agree with him signals a state of ideological fragmentation that interferes with how foreign policy is executed. To help frame the multi-day conference and the approach of the resulting book, Holtz elaborated on philosophical and theoretical approaches to ideologies and ideology critique, drawing on concepts advanced by Louis Althusser, among others. She emphasized that the historical record often reveals that ideological debates are about “a struggle over the obvious,” a proposal that seemed to resonate with the audience. 

 

To Dr. Milne, who concluded the introductory remarks, there is a great deal of significance in approaching foreign policy and ideology differently than Michael Hunt did, and he added the importance of differentiating intentional from accidental ideology. Milne made the case that Hunt’s three-part schema, which focused on “visions of national greatness,” “the hierarchy of race,” and a strong counterrevolutionary impulse, offers a compelling framework to think through the dynamic between ideology and U.S. foreign policy, but not the only one. He asked the contributors and audience: “Does Hunt’s definition function for your intervention or do you opt to define ‘ideology’ differently? What might a theoretical reconfiguration of ideology do to open new avenues of inquiry for historians of ideas and U.S. foreign relations?” Citing numerous examples from history and from the works of the contributors to the conference, Milne concluded by stating that the “work at this conference represents a broad and diverse series of interventions in the historiography” of U.S. foreign relations. Praising Dr. Michaela Hoenicke Moore’s insights in her chapter, Dr. Milne suggested this historiography “has encouraged essentializing interpretations and a reductionist impulse, searching for and finding recurring and persistent patterns, often promoted through elite culture, marginalizing and dismissing voices and movements that resisted, questioned, and rejected the call to arms.” When reading these papers and listening to the series of talks, he encouraged the audience, it is worth reflecting on the advantages and disadvantages of defining ideology and U.S. foreign relations broadly and of moving beyond some of the older models of an “ideology-elite-shaped policy” nexus. You can watch the introductory remarks on C-SPAN.

 

 

(Dr. Christopher McKnight Nichols Introductory Remarks)

 

After the introductions, the first panel, moderated by Dr. Nichols, presented papers about “concepts of the subject-state.” Dr. Matthew Kruer, from the University of Chicago, presented an essay called “Indian subjecthood and White Populism in British America,” in which he discussed the subjecthood of Indians in relation to the British crown in the 17th and 18th centuries. At that point in history, the crown accepted the Indian tribes in North America as subjects of the Empire while allowing them to keep their sovereignty. The British accepted this arrangement since, in their world view, it was better to have subjects in North America than conquered peoples. However, according to Kruer, all this changed in the mid 18th century after certain tribes began to challenge British authority, and white settlers, feeling endangered by the Indians, unprotected by the crown, and likely unmoored by being equal to indigenous peoples in the “great chain of being” under the King, massacred Indians. Settlers justified this violence by claiming that being equal to Indians threatened their rights as Englishmen. This, Kruer argued, was the historical shift from subjects to citizens; it defaulted to western colonial power and as such became a de facto endorsement of white supremacy.  

 

The second essay of this panel was presented by Dr. Benjamin Coates, from Wake Forest University. Under the title “Civilization and American Foreign Policy,” Coates traces the historical rhetoric and the complex, sometimes dualistic meaning of the word “civilization” in the US, from as early as the 18th century all the way to the present. He shows how it was applied over the decades in the context of US foreign policy, especially by presidents when addressing the nation, and how the rhetorical meaning of the word morphed to keep up with ideological shifts in US foreign policy championed by US presidents.

 

The third essay of the panel was presented by Dr. Michaela Hoenicke-Moore, from the University of Iowa. The title of the essay was “Containing the Multitudes: Nationalism and U.S. Foreign Policy Ideas at the Grassroots Level.” In this essay, Dr. Hoenicke-Moore argues that the voices of the people during and shortly after the Second World War had little to no impact on how the US exercised its foreign policy. She arrived at this conclusion by researching foreign policy from the bottom up. Dr. Hoenicke-Moore does point out in the essay that a great deal of fear mongering was present in the political discourse, but in reality, after the war, few people at the grassroots level saw the Soviets as enemies, a fact that was largely ignored by those in government. She concludes by adding that, in the end, the elites were able to impose their will; the people at the grassroots level were not. 

 

The last essay of this panel was presented by Dr. Mark Bradley, from the University of Chicago, and it was titled “The Political Project of the Global South.” In this essay, Dr. Bradley argues that there is an imaginary global south that cannot fully and completely become an object of foreign policy. Thus, trying to study the global south separately allows us to see US foreign policy differently, by looking at ideology from a different angle that would eventually bring us back to US history. Dr. Bradley then brings into question continuity and rupture. What is more important? Should continuity sometimes take a back seat to rupture? He argues that the late 20th century is a time of rupture, a time of change, not only because of the end of the Cold War but because of many other structural, economic, and technological changes in the world. By analyzing yearly UN speeches by world leaders, Dr. Bradley is also able to trace the rhetoric these leaders used when referring to the global south, concluding that there have been changes in the rhetoric about that region of the globe.

 

After Dr. Bradley’s presentation, the Q&A portion of this panel began. There were several questions and discussions with the panel, starting with a discussion surrounding Michael Hunt’s book and how it directed the research of the panelists. In this respect, the panelists spoke about the intersection of effect and ideology, about power and power relations, and how those power relations brought about new social change. The traditional ways of thinking about power were also discussed in the context of Hunt’s book. The discussion then shifted to the lack of grassroots influence on the decisions made by government officials with respect to foreign policy. The panel concluded that the grassroots is usually mute in those matters, while elites tend to make their voices heard and are often able to get what they are after. The panel ended the Q&A by briefly discussing the rhetoric surrounding the word “civilization” and how it is used in the US and in other parts of the world. This ended the morning proceedings. 

(Doctors David Milne, Marc-William Palen, Nicholas Guyatt, Danielle Holtz, and Matt Karp)

 

After lunch, the second panel was introduced by its moderator, Dr. David Milne. The theme of the panel was “Concepts of Power,” and it kicked off with an essay by Dr. Marc-William Palen, from the University of Exeter, in the United Kingdom. Dr. Palen’s essay was titled “Competing Free Trade Ideologies in the US Foreign Policy.” In the essay, Dr. Palen traces US free trade ideology and how it shifted in the late 19th and early 20th centuries. Dr. Palen separated US trade ideology into three major phases: the Jeffersonianism of the 1840s, which had a protectionist attitude; the cottonism of the 1900s; and the neo-liberalism of present-day free trade. The freedom to trade, according to Dr. Palen, has kept the peace, an approach that can be considered radical. The value of free trade has been so high for the US that support for dictators and other problematic governments has been part of the country’s modus operandi. Free trade has also been a tool for punishment, through increased tariffs for instance, a shift back to the protectionism of the 19th century that can be seen in US trade policy today.  

 

The next panelist was Dr. Nicholas Guyatt, a reader in North American history at the University of Cambridge, in the United Kingdom. Dr. Guyatt's essay was called "The Righteous Cause: John Quincy Adams and the Limits of American Anti-imperialism." Dr. Guyatt began his presentation by quickly explaining the Opium War, which began in China in 1839. In that war, the British government fought China to force the Chinese into agreeing to trade terms that were beneficial to Britain and terrible for China. The Chinese were badly mismatched against a much more powerful British military, and the war lasted until 1842. Dr. Guyatt then showed John Quincy Adams's take on the war. Adams believed that Britain was right in going to war with China because Britain, in his view, was well within its rights to demand such advantageous trade agreements; China, on the other hand, was violating a world order by challenging Britain. Adams was a firm believer in a world order ruled by Christians, which China was not. This points to a worldview held by Adams that did not place China on an equal footing with white European Christian societies, thus making China a colonized space rather than a place with rights. Dr. Guyatt concluded by saying that Adams held the position that the US was exceptional, better than others, and that he used the law to reach his objectives, or to justify his positions.

 

Dr. Guyatt was followed by Dr. Danielle Holtz, a visiting research fellow at Oregon State University. Dr. Holtz's essay was titled "'An Imaginary Danger': White Fragility and Self-Preservation in the Territories." Dr. Holtz traced white fragility and self-preservation to the 1840s debate in Congress over Florida's proposal for statehood. In their proposal, Floridians wanted to be able to dictate who could come to, and who could live in, the state; in other words, they did not want the presence of African Americans in the state. Dr. Holtz argued that black bodies meant danger to white southerners, and their presence alone was enough to trigger an instinct of self-preservation that was reflected in organizational racism and later in eugenics. During her presentation, Dr. Holtz also compared the 1840s Florida debates in Congress with the current president's immigration policies, which seem to have the preservation of whiteness at their core.

 

The panel closed with a presentation by Dr. Matt Karp, from Princeton University. Dr. Karp's essay was titled "Free Labor and Democracy: The Early Republican Party Confronts the World." Dr. Karp began the presentation by talking about his last book, in which he wrote about slaveholders and US foreign policy, and then tied that project to the essay he was presenting. In this essay, he looked at the Republican Party of the 1850s as an anti-slavery party and a threat to the South. Members of the Republican Party threatened the South's ideological struggle to keep its labor system viable through slavery, while turning the population of the North against this all-important institution.

 

After Dr. Karp's presentation, the Q&A and commentary session for panel two began. The discussion and questions to the panel revolved around the overlap of ideologies in the four essays, ranging from John Quincy Adams's concern with maintaining a certain world order to how power played its part in the works presented. The panelists also discussed how we arrived at the state of "white fragility" we see today in America, and how science and changes in infrastructure helped shape the ideologies discussed in the panel's essays.

            

The final panel of the day was moderated by Dr. Danielle Holtz and had as its theme “Concepts of the International.”  

 

(from left to right, Drs. Emily Conroy-Krutz, Raymond Haberski Jr., and Penny von Eschen)

 

The first presenter was Dr. Emily Conroy-Krutz, from Michigan State University. Dr. Conroy-Krutz's paper was titled "'For Young People': Protestant Missions, Geography, and American Youth at the End of the 19th Century." In this essay Dr. Conroy-Krutz investigated how religious missionaries talked about Africa at the end of the 19th and beginning of the 20th centuries, and how that rhetoric informed US foreign policy. The essay began in the 1840s, when missionaries saw other peoples of the world as savages. She cited an example in which a missionary describes Hindus as heathens, telling the children reading this literature that Hinduism was a horrible religion. By the 1870s, however, she shows, the same missionaries were writing materials for children that presented life in places like Africa as an adventure, but also as a racist ethnography. Dr. Conroy-Krutz concluded that this "religious intelligence" was transferred into children's literature in order to teach adult ideologies to children and to shape how they saw the world.

 

The next panelist was Dr. Raymond Haberski Jr., from Indiana University-Purdue University Indianapolis. Dr. Haberski's essay was titled "Just War as Ideology: The Origins of a Militant Ecumenism." In this essay, Dr. Haberski shows how religion has been a great part of American identity, and how ideology can be masked by religion. He pointed out that after the Vietnam War there was a religious crisis that saw the emergence of ecumenical militarism, meaning that Catholic and evangelical bishops and pastors became the moral compass for the country. The Catholic bishops, who had always opposed the Vietnam War, began, especially after the war, to question whether the US was in moral charge of the world. This debate over the morality of war ended up influencing foreign policy. That influence, Dr. Haberski concluded, began to appear in foreign policy as "just war," in which moral justifications for wars were sought.

 

Dr. Penny von Eschen, from the University of Virginia and Cornell University, followed Dr. Haberski. Her essay was titled "Roads Not Taken: The Delhi Declaration, Nelson Mandela, Vaclav Havel, and the Lost Futures of 1989." Dr. von Eschen began by sharing that the essay grew out of her research for a new book on the legacies of the Cold War. One of those legacies was the set of meetings between President George H. W. Bush and Nelson Mandela and Vaclav Havel, and what resulted from those meetings, especially considering the ideological differences between the American president and Mandela and Havel. Dr. von Eschen sees the breakup of the Soviet Union as a moment of rupture in which the US sought to establish itself as the only global power, asserting that no other power from the East was to emerge. This was accomplished through an ideology that normalized violence, especially among those who surrounded President Bush, such as Dick Cheney, Donald Rumsfeld, and others. Dr. von Eschen concluded by saying that this ideology was also based on fear of the outside, fear of rogue states, which created and solidified an "us vs. them" ideology.

 

Dr. Andrew Preston was introduced next. Dr. Preston is from Clare College, University of Cambridge, and his paper was titled "Fear and Insecurity in US Foreign Policy." In this essay, Dr. Preston takes on, as the title suggests, fear and insecurity in US foreign policy. He uses the long-standing crisis between the US and North Korea to show how the US goes into moments of panic over tensions on the Korean peninsula when South Korea, for instance, does not have the same reaction. South Korea, whose capital, Seoul, could be wiped out by North Korean artillery at a moment's notice, does not share the fear and panic the US shows. Dr. Preston pointed out that, although this fear is very present in US foreign policy, it is not an ideology but part of American culture, which could be seen in 1941 and can be traced to the present. He concluded by reminding everyone that although fears always run high on the US side, the situation never really changes.

 

(from right to left, Drs. Emily Conroy-Krutz, Raymond Haberski Jr., Andrew Preston, and Christopher Nichols)

 

Dr. Christopher Nichols, associate professor of history at Oregon State University, closed the presentations of this panel with an essay titled "Unilateralism as Ideology." In this essay and presentation, Dr. Nichols explored his views about how ideas and ideologies evolve over time, noting his own model as one premised on a vision of punctuated equilibrium. He asserts that U.S. ideology, from the beginning and with important shifts and pivotal moments, has been defined by a core element of unilateralism. Unilateralism "as ideology," he remarked, was clearly present at the beginning, in the Declaration of Independence and in the nation's first "Model Treaty" of 1776, designed to minimize U.S. reliance on foreign nations and to steer clear of foreign entanglements by privileging bilateral and non-binding agreements. The recent turn to unilateralism, Nichols remarked, is thus not remarkable, nor is it new. Unilateralism is at least evident, if not influential, in virtually all historical debates over international engagement since 1776. This prompts a question: Why? Nichols made the case that unilateralism has functioned as both ideology and behavior, or tactic, helping a weak nation maneuver in a world of larger powers and competing interests at least until the late 19th century. But another question lingers, according to Nichols: Why does or did the U.S. enter into conflicts unilaterally when it could potentially have benefited more from multilateralism? Dr. Nichols believes the answers lie in the longer historical record, and he asked the audience to help assess them. Unilateralism, as an impulse to place the nation first, has been foundational, linked to Washington, Jefferson, and Monroe from 1789 through 1823, and differentiated at times in terms of the U.S.'s role in the hemisphere versus around the world. In light of the longer patterns in foreign policy thought, Nichols sees unilateralist policy ideas as fundamentally a product of a kind of arrogance set on a bedrock of exceptionalism. After giving several examples of unilateral decisions made by the US, from the War of 1812 to WWI, both examples of the U.S. entering conflicts with no formal allies or only as an "associate power," even when that one-sidedness came at great cost, up to the post-9/11 Iraq War, Dr. Nichols concludes that unilateralism is a cultural ideology revolving around a core calculus about "vital interests," such that foreign policy decisions must always be conceived, evaluated, and implemented on the U.S.'s terms.

 

After Dr. Nichols' presentation, the Q&A session for this panel began with a question about fear in foreign policy and whether it was something unique to the US or generated by fear of potentially losing power. All the members of the panel pondered these questions, and there seemed to be agreement that the fear, if not unique, was at least unusual, and likely triggered by the perception that the US's power was declining and the country was losing its overall status in the world. A question regarding morality in foreign policy prompted the assessment that a rhetoric of morality was commonly attached to explanations of conflicts in which the country was asking its military to kill or to die. Similarly, the morality of unilateralism seemed at stake, too. The Q&A ended with a discussion among the panelists about the evangelization of children through missionary ethnographies and how it affected future generations, which Dr. Conroy-Krutz and Dr. Nichols discussed in terms of children as "time-shifted adults." The last bit of discussion was about how "just war" was used as currency in unifying arguments to justify armed conflicts, particularly in creating a kind of theology of conflict in the wake of the attacks of 9/11.

 

After the Q&A, the conference was adjourned until 7pm, when keynote speaker James Lindsay presented a talk titled "Donald Trump and Ideology," which was part of the 2018-2019 Governor Tom McCall Memorial Lecture. This lecture will be covered in a separate post.

Ideologies and U.S. Foreign Policy International History Conference: Day 2 Coverage

(From Left: Mark Bradley, Jeremi Suri, Matthew Kreur, Nick Guyatt, Jay Sexton, Daniel Immerwahr, Daniel Tichenor, Daniel Bessner, Benjamin Coates, Danielle Holtz, Emily Conroy-Krutz, Penny von Eschen, Ray Haberski Jr., Imaobong Umoren, Christopher McKnight Nichols, Melani McAlister, Matt Karp, David Milne, Michaela Hoenicke-Moore, Marc-William Palen. Photo by Mina Carson)

 

What was the global significance of the Civil War? What exactly is the definition of "freedom"? How are Donald Duck, Indiana Jones, and anti-modernizationists connected? The second day of the Ideologies and U.S. Foreign Policy International History Conference was highlighted by experts' bold answers to these ambitious questions. Two separate panels of historians and political scientists shared their research on issues related to ideologies and U.S. foreign policy.

The first panel of the day discussed "concepts of the international" in U.S. foreign policy ideology and was moderated by Mark Bradley of the University of Chicago. Daniel Tichenor, from the University of Oregon, spoke about his paper, "Contentious Designs: Ideology and U.S. Immigration Policy." In his paper, Tichenor related concepts of the international to a discussion of how immigration policy in the U.S. has been framed by four ideological clusters (Classic Restrictionists, Liberal Cosmopolitans, Free Market Expansionists, and Social Justice Restrictionists) which have existed since the foundation of the nation. Tichenor tied the commonalities and tensions within these clusters to an explanation of why the United States has historically made so little progress on immigration policy. For him, this stalemate has been a direct result of the isolating practices of these clusters. When meaningful legislation has occurred, as with the passage of the 1965 Immigration and Nationality Act and the 1986 Immigration Reform and Control Act, it has always been during rare moments of what he called "strange bedfellow politics." In most instances, though, these clusters tend to operate in isolation from, or opposition to, one another. As a result, immigration policy in the United States has stagnated except for infrequent moments of compromise and incongruous alliance.

 

(From Left: Daniel Tichenor, Imaobong Umoren, Melani McAlister, Jeremi Suri. Photo by Mina Carson)

 

Imaobong Umoren, from the London School of Economics and Political Science, then spoke about her paper, "Eslanda Robeson, Black Internationalism and U.S. Foreign Policy," an extension of research she had done for her previous book Race Women Internationalists: Activist-Intellectuals and Global Freedom Struggles (University of California Press, 2018). In her talk, Umoren explored concepts of the international through the life of Eslanda Robeson, an African-American activist (and wife of celebrated singer, actor, and international activist Paul Robeson) who situated her activities as an extension of hope. Umoren spoke of her understanding of hope as a way to grasp not only Robeson's activism but also transformational black internationalism more broadly, and as a key element of the "romanticized" vision Robeson held of certain organizations, such as the newly formed United Nations.

 

Umoren was followed by Melani McAlister from The George Washington University, who spoke about her paper, "'Not Just Churches': American Jews, Joint Church Aid, and the Nigeria-Biafra War." McAlister focused on the international by examining the origins of humanitarian aid during the Nigeria-Biafra War, specifically aid given by American Jews. She explained that the aid from American Jews was largely shaped by larger political factors, such as the desire to reshape the public narrative and perception of Jews after the end of WWII, as well as American Jews' lack of mainstream status in that public narrative. Her analysis of American Jews represents a larger framework of how issues such as religion, money, perception, and memory have shaped humanitarian aid in situations of crisis.

 

The first panel concluded with Jeremi Suri from the University of Texas at Austin. In his paper, "Freedom and U.S. Foreign Policy," he dealt abstractly with notions of the international by discussing the different ways Americans have defined the internationally ubiquitous word "freedom" throughout history. First, in early America, freedom was considered to be Freedom From, such as freedom from the British system. Policy at this time was defined by what America was not rather than what it was. The second period took place after the Civil War and was mostly about Freedom to Produce. Suri suggests that this was largely framed by Americans' insatiable need to expand, most acutely demonstrated by William H. Seward's acquisition of Alaska. The third period took place around the end of WWII, or perhaps from the New Deal through the end of WWII and the building up of what some call the "liberal world order," as freedom came to be redefined and redeployed as Freedom of Hegemony. In this period, according to Suri, Americans felt as though they could only be free if they dominated, a sense further cemented by a push towards unipolar U.S. world hegemony. Suri concludes that freedom has rhetorically been used as a reason for action, power, dominance, and mobilization. These various uses of freedom have left its definition fluid.

 

Freedom of Hegemony may not accurately describe the current state, especially since, according to Suri, the failures of the Iraq war shattered that definition. As a result, Freedom is currently without definition, despite the fact that it is “perhaps the” most important word of American history. Hopefully, future historians will come to define this era. Suri encouraged the audience to consider the strengths and limits of such framing, and they seemed keen to engage and discuss the topic with him. 

 

After the panelists spoke, Bradley opened up the room to questions and comments from the audience. A lively discussion emerged around Suri's definition of Freedom. One audience member felt as though Freedom to Produce seemed very similar, if not identical, to Freedom From. Suri acknowledged their similarities but clarified his point by saying "old waters don't go away." Old definitions of freedom still exist, but new definitions emerge, and for Suri the "relative weights" of the definitions of freedom are what really matter. This prompted another audience member to ask Suri if Freedom of Hegemony was really just Freedom of Economic Plunder, to which Suri chuckled and promptly agreed.

 

The second panel of the day, moderated by McAlister, grappled with "concepts of progress" in U.S. foreign relations, yet more connections between the panels emerged, such as concepts of modernization and space.

The panel began with Jay Sexton from the University of Missouri, who spoke about his paper, "The Other Last Hope: Capital and Immigration in the Civil War Era." Relating to issues of progress, space, and modernization, Sexton set out to answer one ambitious question: what was the global significance of the Civil War? In answering it, Sexton examined the various stages of American economic policy as shifting from heavy consumption to downshifts in spending and then to the chaotic aftermath of these abrupt shifts. To illustrate the point, Sexton used the metaphor of a vacuum, or hoover. The first stage of heavy consumption in the 1840s is akin to the hoover sucking in capital and labor at a time of unprecedented growth and an expansion of "space" in the United States. The downshift in economic spending happened in 1857 with an abrupt blowing of the hoover: the United States faced many economic pull factors, such as instabilities in Europe like the 1848 Irish Potato Famine. Then, after a period of blowing, the hoover got turned back on, but in "turbo mode," in 1846, when the economy was unable to control its growth patterns and, in the wake of African colonization (of which the United States was not a part), Americans began an intensified scramble for capital and labor. Through this sucking, blowing, and sucking of the hoover, Sexton deftly explained the flows of labor and power in the United States in the Civil War era. These changes in immigration and labor flows show how the U.S. was attempting to modernize during these years, as well as the impacts on immigration and the economy as the physical boundaries of the country changed.

 

(From Left: Melani McAlister, Jay Sexton, Daniel Bessner, Daniel Immerwahr. Photo by Mina Carson)

 

The second speaker was the University of Washington's Daniel Bessner, who talked about his paper, "RAND and the Progressive Origins of the Military-Intellectual Complex." In his talk, Bessner argued that RAND (an influential, military-minded research think tank established in the 1940s) was the apotheosis of Air Theory (the pervading idea among military leaders and politicians after WWII that the progressive development of airplane technology would eventually lead to the end of warfare). Air theory was developed during a time of heightened notions of progress leading to the betterment of society, and Bessner's work will focus on several RAND elites in the hope of situating it within the still-sparse historiography of defense intellectuals, a topic which Bessner argues highlights a unique form of American state building.

 

The final speaker of the day, Northwestern University's Daniel Immerwahr, connected the cartoon character Donald Duck to anti-modernization. His talk, "Ten-Cent Ideology: Donald Duck, Comic Books, and the U.S. Challenge to Modernisation," situated the work of Carl Barks, the man behind Donald Duck, as a challenge to modernization. According to Immerwahr, Donald Duck was extremely widely circulated and read in the postwar era, and the generation that grew up reading Donald Duck grew up to become anti-modernizationists. Barks' legacy, according to Immerwahr, seems to have, at least anecdotally, helped usher in the end of modernization. To make this point, Immerwahr used the example of Steven Spielberg and George Lucas, both of whom have publicly admitted to being huge fans of Donald Duck when they were children. In the Indiana Jones films, the collaborative work of the two, clear anti-modernization perspectives are present. Furthermore, the films borrow themes and storylines found in Barks' Donald Duck comics, all suggesting that their anti-modernist outlook was influenced by Barks' world of Donald Duck.

 

McAlister ended the panel with questions and comments from the audience. Many people had questions about Donald Duck and the practice of analyzing comics and youth literature as a lens on foreign policy ideology. Several in the audience wanted to know if the character had been translated into different languages. Immerwahr reported that Donald Duck became even more ingrained in some of those countries' cultures than in the United States.

 

An interesting set of questions and discussions also emerged about the notion of "Farbackistan," an imagined place that Donald Duck visited on one of his journeys. In Barks' world, Farbackistan was always portrayed as empty and without modern amenities. Bradley keenly commented that perhaps Farbackistan, while a fictional place in the world of Donald Duck, was actually a vision of place to which the people of RAND and Civil War-era policymakers were also going. During the time when RAND was in full swing, Americans went to Vietnam, a pseudo-Farbackistan often depicted in American media as remote and without modern comforts. And according to Sexton, one of the two most important policies that defined the global significance of America after the Civil War was the Homestead Act, which envisioned the American West as an empty expanse of nothingness, a kind of Farbackistan, that needed to be filled with people. Other questions to the panelists focused on the emerging similarities of the three scholars' work despite their disparate themes. From these questions it seemed that the commonality of the three talks hinged upon notions of place and stories of progress and modernization.

 

 

(From Left: Jeremi Suri, David Milne, Christopher McKnight Nichols, Danielle Holtz, Daniel Bessner. Photo by Mina Carson)

 

The conference concluded with an intimate circle, where scholars and community members came together to reflect on the conference and the issues at large. Attendees agreed that the presentations and papers amount to a major intervention in the field and the group talked about key takeaways, new developments in the field, and how best to shape the essays into the best possible resulting book. Throughout the discussion and reflection, it became clear over the two days that ideologies are not static, as was once suggested in part by Michael Hunt, though there was some useful discussion and debate about tracing long arcs and consistencies in thought, action, and policy versus looking more for moments of rupture. Most agreed that the presentations clinched just how much ideas and ideologies matter for understanding the history of U.S. foreign relations and the “U.S. in the world,” and, perhaps most importantly, that there is virtue in the idiosyncratic. 

 

The conference was sponsored by the Richard Lounsbery Foundation, the Andrew Carnegie Foundation, Patrick and Vicki Stone, the Oregon State University Center for the Humanities, the Oregon State University School of History, Philosophy, and Religion, and the Oregon State University School of Public Policy.

"Donald Trump and Ideology:" Dr. James Lindsay Delivers the Governor Tom McCall Memorial Lecture

(James M. Lindsay giving his talk. Photo by Mina Carson)

 

In July 2017, then Secretary of State Rex Tillerson and then Secretary of Defense James Mattis invited President Trump to a meeting at the Pentagon in the famously hyper-secure room known as "the tank." Tillerson and Mattis were concerned that Trump did not understand global politics and wanted to give him a crash course on U.S. world leadership. However, the lesson did not have the results that Tillerson and Mattis expected. Trump reportedly commented that the world order and American role described by his advisors were "exactly what I don't want." Famously, Tillerson called Trump a "moron" after the meeting.

 

The Tom McCall Memorial Lecture was established to foster intellectual exchange. Previously, politicians, journalists, and other esteemed members of the public have delivered the address. Lindsay's experiences uniquely suited him to give this year's talk. A political scientist by trade, Lindsay has served as the director for global issues and multilateral affairs on the staff of the National Security Council and was a consultant to the United States Commission on National Security. Currently Lindsay is the Senior Vice President, Director of Studies, and Maurice R. Greenberg Chair at the Council on Foreign Relations. In addition, Lindsay has written several books and articles. He also hosts several podcasts and writes a weekly blog, The Water's Edge, which focuses on the politics of U.S. foreign relations and the domestic policies that underpin them.

 

The lesson that Tillerson and Mattis attempted to teach Trump is grounded in American history and political thought. The world created by the U.S. after World War Two, the so-called "rules-based order" or "liberal world order," institutionalized American leadership through its role in international organizations and alliances. Those alliances were necessary, according to Lindsay, in order to create a world more conducive to U.S. interests and values such as open markets, democracy, human rights, and the rule of law. While this was a radical approach to world leadership and a new step for the nation, it was based in the belief that this system would help other countries flourish and, in doing so, protect American prosperity. Mattis and Tillerson recognized that this was not a perfect world order, but it brought about an unprecedented period of peace and security and overall served America's interests.

 

In contrast, Lindsay argued that Trump tends to consider American allies as adversaries, or at least as potential competitors or freeloaders. In Trump's worldview, American allies use the U.S. for protection while trading on terms disadvantageous to the U.S., causing job losses and worsening the American economy. To Trump, the global system is less a network based on interdependence and cooperation and more of a zero-sum hierarchy. Trump's ideological framework is that international politics is always conditional and transactional, similar to Trump's area of expertise: real estate. Interestingly, this is not exactly a position of isolationism, according to Lindsay, but one of transaction and reciprocity, where, for example, Trump wants allies to spend more, give more, provide more, and so on. In short, it is a worldview premised on a belief that if the U.S. demands more (especially from its allies), it will get it.

 

Photo by Mina Carson

 

Lindsay challenged the ideas the president uses to support his criticism of American allies. For instance, Trump often threatens to bring home U.S. troops if allies do not increase their NATO military spending. However, if the U.S. withdrew from places like Germany, Japan, and South Korea, it would actually cost U.S. taxpayers more money, because the host countries pay up to 50% of the costs to station those troops. On trade, Lindsay pointed out that the U.S. remains the world's largest and most vibrant economy. He argued that bilateral trade deficits are not the consequence of bad trade deals but rather simply reflect the facts that the U.S. spends far more than it saves, that it has bad tax policies, and that it lacks certain technology.

 

Since taking office, Trump has certainly acted in accordance with his worldview. He has demanded that NATO allies pay more, raised tariffs on American allies, begun a trade war with China, withdrawn from the Iranian nuclear deal and the Paris Climate Agreement (among others), and dismissed human rights abuses. Trump promised wins, but in truth, those never came. He claimed to have defeated ISIS, but still has no clear strategy on how to fight terrorism. Negotiations with North Korea have not produced the results he promised. The trade deficit is up by 15%, and the renegotiation of some trade agreements by Trump's administration ended up benefiting other countries, such as the European Union, while hurting U.S. agriculture.

 

So why, Lindsay asked, is Trump not "winning" as he promised? Lindsay argues this is in part because Trump lacks strategy, acts and speaks impulsively, considers unpredictability a virtue, and often disagrees with his national security team. But the biggest reason, Lindsay believes, is that Trump fundamentally misperceives the world: the application of raw power has not worked as he expected. Enemies, by and large, have not buckled to threats and bluster. The unexpected consequence of applying raw power, according to Lindsay, was a miscalculation not of how much pain the U.S. could inflict, but of how much pain the U.S.'s enemies were able to take. Allies may have given a bit more in the face of Trump's unilateralist demands, but at the same time many allies now seem to see concessions as temporary, with new demands imminent or even treaties or agreements torn up on a whim.

 

Lindsay worries that the largest impact of Trump's presidency is that American allies increasingly believe they can work without the U.S. Lindsay fears a future without American leadership will be less secure, leading to another global battle for hegemony. During the first years of his presidency, Trump has managed to show the world what it is like not to have American leadership as he pulled out of the Paris Accords, the Trans-Pacific Partnership, and the Iran Nuclear Deal. Trump renegotiated NAFTA, cheered for Brexit, maligned NATO, and more. The world does not like that, Lindsay argued powerfully. Some global leaders have come to the White House to seek increased American leadership, only to be told by the White House that their help is not needed. The bottom line: America first, America alone.

 

After Lindsay's talk, public questions and comments were encouraged. The questions touched on nearly all aspects of Lindsay's talk and demonstrated an engaged and interested public. Some questions were hard-hitting and some more lighthearted, but in general the audience seemed most keen to hear Lindsay's opinion on the future of the United States in a post-Trump world. While Lindsay seemed initially reluctant to speculate about the future, even quoting Yogi Berra's "predictions are hard to make, especially ones about the future," he eventually commented that, on balance, he believes Trump's legacy will most likely be like that of any president: an ever-fluctuating series of ebbs and flows.

 

Photo by Mina Carson

Photographs From My First Trip to China Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

I went to China in 1989 with a group of former Jewish residents of Shanghai to celebrate the first open Passover Seder in Shanghai in decades. Our visit, which included the first Israeli journalists allowed into Communist China, was part of the more general liberalization in Chinese politics and economics. Passover began on April 20.

 

I was very happy to be on this trip, because I was just beginning to research the flight of about 18,000 Jews from Nazi-occupied Europe to Shanghai after 1938. My grandparents had been among them, leaving Vienna in 1939. I was able to do eight interviews, see the neighborhoods where my fellow travelers had lived, and even get into my grandparents' former apartment. We met Chinese scholars who were doing research on Jews in Shanghai.

 

I like to wander around new cities with my camera. In Shanghai, and a few days later in Beijing, I saw things I didn't expect: students protesting the lack of freedom and democracy in China. Nobody I asked knew what was going on. I took these photographs, and a few others. I wish I had taken many more. These images were originally on 35mm slides, and were printed by Jim Veenstra of JVee Graphics in Jacksonville.

 

A Shanghai Street Protest, Shanghai, April 22, 1989

 

I was wandering in downtown Shanghai when a wave of young people came out of nowhere and marched past me. I had no idea what they were doing. Chinese streets are often filled with bicycles and pedestrians, but this was different. Protests had begun in Beijing a week earlier, and had spread to other cities already, but we knew nothing about them.

 

 

Tiananmen Press Freedom, Beijing, April 24, 1989

 

After a few days of personal tourism and memories in Shanghai, our group flew to Beijing to be tourists. Tiananmen Square is in the center of the city, right in front of one of the most important tourist sites, the Forbidden City, home of the emperors for 500 years. Students had been gathering there for over a week already. The sign in English advocating A Press Freedom was surprising, since there were very few signs in English at that time. The flowers on the monument are in memory of Hu Yaobang.

 

 

Summer Palace, Beijing, April 25, 1989

 

The next day we visited the Summer Palace on the outskirts of the old city, built in the 18th century as a lake retreat for the imperial family. Busses poured into the parking lot with school children and adult tourists. At the entrance, students displayed these signs in English for every visitor to see, requesting support for their movement. Our guides could not or would not comment on them.

 

 

 

Inside Summer Palace, Beijing, April 25, 1989

 

Inside the grounds of the Summer Palace, students were collecting funds for their cause: democracy and freedom in Chinese life. The bowl is filled with Chinese and foreign currency.

 

 

Beijing Street March, Beijing, c. April 25, 1989

 

I'm not sure exactly when or where I took this photograph. We were taken around to various sites in a bus, including some factories in the center of Beijing. We were not able to keep to the planned schedule, because the bus kept getting caught in unexpected traffic. I believe I took this photo out of the window of our bus, when it was stopped. The bicyclists and the photographer in front of the marchers show the public interest in these protests.

 

 

 

Our Chinese trip was supposed to last until April 30, but the last few days of our itinerary were suddenly cancelled, and we were flown to Hong Kong. There was no official public reaction to the protests we saw, but government leaders were arguing in their offices over the proper response. I was struck by the peaceful nature of the protests I had seen and the interest shown by the wider Chinese public. The protests spread to hundreds of Chinese cities in May, and Chinese students poured into Beijing.

 

On May 20, the government declared martial law. Student protesters were characterized as terrorists and counter-revolutionaries under the influence of Americans who wanted to overthrow the Communist Party. Those who had sympathized with the students were ousted from their government positions and thousands of troops were sent to clear Tiananmen Square. Beginning on the night of June 3, troops advanced into the center of the city, firing on protesters. Local residents tried to block military units. On June 4, Tiananmen Square was violently cleared. "Tank Man" made his stand on June 5.

 

All the Communist governments in Eastern Europe were overthrown in 1989. The Soviet Union collapsed in 1991. The Chinese government survived by repressing this protest movement. Since then all discussion of the 1989 protests is forbidden. Western tourists on Tiananmen Square are sometimes asked by local residents what happened there.

 

I wonder what happened to the students pictured in these photos.

Roundup Top 10!  

What Naomi Wolf and Cokie Roberts teach us about the need for historians

by Karin Wulf

Without historical training, it’s easy to make big mistakes about the past.

 

Free Speech on Campus Is Doing Just Fine, Thank You

by Lee C. Bollinger

Norms about the First Amendment are evolving—but not in the way President Trump thinks.

 

 

Don't buy your dad the new David McCullough book for Father's Day

by Neil J. Young

McCullough appears to have written the perfect dad book, but its romantic view is the book's danger.

 

 

Voter Restrictions Have Deep History in Texas

by Laurie B. Green

Texas’ speedy ratification of the 19th Amendment represents a beacon for women’s political power in the U.S., but a critical assessment of the process it took to win it tells us far more about today’s political atmosphere and cautions us to compare the marketing of voting rights laws with their actual implications.

 

 

What Does It Mean to be "Great" Amidst Global Climate Change

by David Bromwich

How can Robert Frost, Graham Greene, Immanuel Kant, and others help us understand values and climate change?

 

 

How the Central Park Five expose the fundamental injustice in our legal system

by Carl Suddler

The Central Park Five fits a historical pattern of unjust arrests and wrongful convictions of black and Latino young men in the United States.

 

 

The biggest fight facing the U.S. women’s soccer team isn’t on the field

by Lindsay Parks Pieper and Tate Royer

The history of women in sports and the discrimination they have long faced.

 

 

I Needed to Save My Mother’s Memories. I Hacked Her Phone.

by Leslie Berlin

After she died, breaking into her phone was the only way to put together the pieces of her digital life.

 

 

How to Select a Democrat to Beat Trump in 2020

by Walter G. Moss

In a Democratic presidential candidate for 2020 we want someone who possesses the major wisdom virtues, virtues that will assist him/her to further the common good. In addition, we need someone with a progressive unifying vision.

 

 

Warren Harding Was a Better President Than We Think

by David Harsanyi

An analysis of presidential rankings and a defense of Warren G. Harding.

 

Material History and A Victorian Riddle Retold

 

I spend a lot of time thinking about things. This is not to claim that I am unusually reflective or deep: quite the contrary. I mean this literally. I think a lot about stuff. 

 

I am fascinated by the ways objects and spaces shape us, reflecting and communicating our personalities. And, of course, I am not alone in this fascination. Anyone who has crashed a realtor’s open house to check out the furniture or plans their evening strolls to view their neighbors’ décor through lighted windows, appreciates the pure joy of pretending to know others through their domestic goods. Leather or chintz? Marble tile or oak plank? Curated minimalism or unstudied clutter? These are the choices that reveal us to family, friends, guests, even ourselves, giving clues about how we behave (or hope to) in private. CSI meets HGTV.

 

For me, such musings are both busman’s holiday and occupational hazard. A historian of nineteenth-century American culture, I study the significance ordinary women and men gave to furniture, art, and decoration. I want to understand how they made sense of the world and assume that, like me, they did so through seemingly mundane choices: Responsible adulthood embraced in a sturdy sofa; learnedness telegraphed, if not always realized, through a collection of books; love of family chronicled in elegantly framed photos grouped throughout the house. 

 

After all not everyone in the past wrote a political treatise or crafted a memoir, but most struggled to make some type of home for themselves, no matter how humble or constrained their choices. Less concerned about personality than character, nineteenth-century Americans believed the right domestic goods were both cause and effect, imparting morals as much as revealing the morality of their owners. For them, tastefully hung curtains indicated an appreciation of domestic privacy, but also created a private realm in which such appreciation and its respectable associations might thrive. For almost twenty years, I have lost myself in this Victorian chicken-or-egg riddle: Which came first, the furniture or the self?

 

My practice of home décor voyeurism, recreational and academic, was tested when my 88-year-old mother walked out of her home of fifty years without a backwards glance. She had told me for years that she wanted to die at home, but a new type of anxiety was supplanting the comfort of domestic familiarity. A night of confusion, punctuated by fits of packing and unpacking a suitcase for a routine doctor’s visit, convinced us both that her increasing forgetfulness was something more than the quirky charm of old age. Within a week, she moved from New York to Massachusetts, to an assisted living residence three minutes from my home. She arrived with a suitcase of summer clothes and a collection of family photos. No riddle here: My mother came first; the stuff would come later.

 

Dementia evicted my mother from her home and then softened the blow by wrapping her thoughts in cotton gauze. As I emptied her old home and set up the new one, I marveled at the unpredictability of her memory. She did not recognize my father’s favorite chair twenty years after his death, but knew that she had picture hooks in a drawer 170 miles away in a kitchen that was no longer hers. 

 

Visiting my mother in her new apartment with its temporary and borrowed furnishings, I wondered who she was without her things. This was not a moral question as it was for the long-dead people I study, but an existential one with a healthy dose of magical thinking. I told myself that there must be some combination of furniture, art, and tchotchkes able to keep my mother with me. Could I decorate her into staying the person I knew? With this hope, I would make her bed with the quilt of upholstery fabric her father had brought home from work, long a fixture in her bedroom. Next I would give a prominent place to the Tiffany clock presented to my father on his retirement and cover her walls with the lithographs, drawings, and paintings collected from the earliest days of my parents’ marriage.

 

For several months, I brought my mother more of her own things – a sustained and loving act of re-gifting. First her living room furniture, then the silver letterbox with the angels, then the entryway mirror. I replaced the borrowed lamps with ones from her New York bedroom. When she started losing weight, I showed up with a favorite candy dish and refilled it almost daily. She greeted each addition like a welcome but unexpected guest, a happy surprise and an opportunity to reflect on when they last met. As in dreams, every guest was my mother, walking in and taking a seat beside herself, peopling her own memory. Looking around, she would announce that her new apartment “feels like my home.”

 

But this was only a feeling – no more than a passing tingle of recognition on the back of the neck. Where exactly do we know each other from? Among her own things, she would ask when we needed to pack to go home. I answered, “This is home. Look at how nice your art looks on the walls. The clock is keeping good time.” Every object a metaphor: a time capsule for my mother to discover, an anchor to steady her in place, a constant silent prayer: This is home because your things are here. You are still you, because your things will remind you of what you loved best: beauty, order, family, me. 

 

My job is to know what my mother mercifully does not understand: Her home, the apartment of her marriage and my own childhood, is empty. I emptied it. Even as I assembled my mother in her new home, I dismantled her in the old one. No matter how many of her things are with her, still more are gone – passed on to family, sold, donated, thrown away by my hand: her memories in material form, scattered and tended to by others. They are like seeds blown on the wind to take root in new soil or the whispered words of a game of telephone transforming as they pass down the line: ready, beautiful metaphors for losing parts of my mother. 

 

Six months after her move, my mother came to dinner and didn’t recognize her things newly placed in my house. The good steak knives beside my everyday dishes, the Steuben bowl now filled with old Scrabble tiles, the hatpin holder with a woman’s face set on the mantle… I recycled them into my own. To be fair, they looked different now, less elegant and more playful. Sitting in my living room, my mother told me that I have such a warm home. And the next day on the phone, “I can feel the warmth of your house here in my apartment.” For the moment, we had outsmarted the riddle; each of us living with her things and concentrating on what comes next.

Women Have Fought to Legalize Reproductive Rights for Nearly Two Centuries

Image from "Marriage and Its Discontents at the Turn of the Century"

 

 

Mississippi state representative Douglas McLeod was arrested last week for punching his wife when she didn’t undress fast enough for sex. When deputies arrived, he answered the door visibly intoxicated, with a drink in his hand, and yelled, “Are you kidding me?” Police found blood all over the bed and floor, and had to reassure his frightened wife, who stood shaking at the top of the stairs, that they would protect her. In January, McLeod had co-sponsored a bill making abortions in Mississippi illegal after detection of a fetal “heartbeat” beginning at six weeks, before most women even know they are pregnant.  

 

In both of these scenarios, one thing is clear – Douglas McLeod believes he has such a right to his wife’s body (and other women’s bodies) that he is willing to violently and forcefully impose it. 

 

Even more clear is the fact that for nearly two centuries, women’s rights reformers have fought to make reproductive rights legal, precisely because of men like Douglas McLeod. 

 

Women’s rights reformers beginning in the 1840s openly deplored women’s subjugation before the law – which, of course, was created and deployed by men. Temperance advocates in the nineteenth century pointed especially to alcohol as a major cause of the abuse and poverty of helpless women and children. As one newspaper editor editorialized, “Many… men believe that their wife is as much their property as their dog and horse and when their brain is on fire with alcohol, they are more disposed to beat and abuse their wives than their animals…. Every day, somewhere in the land, a wife and mother – yes, hundreds of them – are beaten, grossly maltreated and murdered by the accursed liquor traffic, and yet we have men who think women should quietly submit to such treatment without complaint.”(1)

 

But of course women were never silent about their lowly status in a patriarchal America. As one of the first practicing woman lawyers in the United States, Catharine Waugh McCulloch argued in the 1880s that “Women should be joint guardians with their husbands of their children. They should have an equal share in family property. They should be paid equally for equal work. Every school and profession should be open to them. Divorce and inheritance should be equal. Laws should protect them from man’s greed…and man’s lust…”(2)

 

Indeed, the idea of “man’s lust” and forced maternity was particularly abhorrent to these activists. In the nineteenth century, most women were not publicly for the legalization of birth control and abortion but there were complex reasons for this rejection. In a world where women had little control over the actions of men, reformers rightly noted that legalizing contraceptives and abortion would simply allow men to abuse and rape women with impunity and avoid the inconvenient problem of dependent children. 

 

Instead, many suffragists and activists embraced an idea called voluntary motherhood. The theoretical foundations of this philosophy would eventually become part of the early birth control movement (and later the fight for legal abortion in the twentieth century). Simply put, voluntary motherhood was the notion that women could freely reject their husbands’ unwanted sexual advances and choose when they wanted to have children. In an era when marital rape law did not exist, this was a powerful way for women to assert some autonomy over their own bodies. As scholar Linda Gordon has written, it is thus unsurprising that women – even the most radical of activists – did not support abortion or contraception because “legal, efficient birth control would have increased men’s freedom to indulge in extramarital sex without greatly increasing women’s freedom to do so even had they wanted to.”(3) But the ideas underpinning voluntary motherhood promised to return a measure of power to women. 

 

Of course, the nineteenth-century criminalization of abortion and birth control in state legislatures was openly about restricting women's freedom altogether. As Dr. Horatio Storer wrote, "the true wife" does not seek "undue power in public life…undue control in domestic affairs,… or privileges not her own." Beginning in the 1860s, under pressure from physicians like Storer and the newly organized American Medical Association (who wanted to professionalize and control the discipline of medicine), every state in the union began passing laws criminalizing abortion and birth control. Physicians saw their role as the safeguard not only of Americans' physical health, but the very health of the republic. They, along with other male leaders, viewed the emergent women's suffrage movement, rising immigration, slave emancipation, and other social changes with alarm. Worried that only white, middle-class women were seeking abortion, doctors and lawmakers sought to criminalize contraceptives and abortion in order to ensure the "right" kind of women were birthing the "right" kind of babies.

 

The medical campaigns to ban abortion were then bolstered by the federal government's passage of the 1873 Comstock Act, which classified birth control, abortion, and contraceptive information as legal obscenity. Fines for violating the Act were steep and carried prison time. Abortion and birth control then remained illegal for essentially the next century, until the Supreme Court finally ruled in two cases, Griswold v. Connecticut (1965) and Roe v. Wade (1973), that both were matters to be considered under the doctrine of privacy between patient and physician. The efforts of the second-wave feminist movement also transformed the older idea of voluntary motherhood, which asserted that women did not have to have sex or be pregnant, into the more radical notion that women could, and should, enjoy sex without fear of becoming pregnant.

 

Anti-abortion activists today thus know that they cannot openly advocate for broadly rescinding women’s human and legal rights. Instead, in order to achieve their agenda,  they cannily focus on the rights of the unborn or “fetal personhood,” and the false flag of “protecting” women’s health.  But it is crystal clear that the recent spate of laws criminalizing abortion in states like Georgia, Ohio, Alabama, and Douglas McLeod’s home state of Mississippi have nothing to do with babies or health. Instead they flagrantly reproduce the past history of men’s legal control over women. It is not a coincidence that women make up less than 14% of Mississippi’s legislative body – the lowest in the country. McLeod’s behavior and arrest may have taken place in May of 2019, but his actions – both at home and in the legislature – look no different than his historical male counterparts. Unlike the past, it’s just that neither he nor his colleagues are willing to admit it. 

 

(1) The Woman’s Standard (Waterloo, IA), Volume 3, Issue 1 (1888), p. 2. 

(2) “The Bible on Women Voting,” undated pamphlet, Catharine Waugh McCulloch Papers, Dillon Collection, Schlesinger Library. 

(3) Linda Gordon, The Moral Property of Women: A History of Birth Control Politics in America (University of Illinois Press, 2002).

(4) Horatio Robinson Storer, Why Not? A Book for Every Woman (Boston: Lee and Shepard, 1868). Quoted in Leslie Reagan, When Abortion Was a Crime.

The President is Disrupting the U.S. Economy

Donald Trump has been president for only two of the ten years of America's economic expansion since the Great Recession, yet he eagerly takes full credit for the nation's advancement. It has been easy for him to boast because he had the good fortune to occupy the White House during a mature stage of the recovery. The president's fans attribute booming markets and low unemployment to his leadership even though Trump's words and actions at the White House have often broken the economy's momentum. In recent months, especially, Trump's interference in business affairs has put U.S. and global progress at risk.

 

An article that appeared in the New York Times in May 2019 may offer some clues for understanding why the American president has been less than skillful in managing the country's financial affairs. Tax records revealed by the Times show that from 1985 to 1994 Donald Trump lost more than a billion dollars on failed business deals. In some of those years Trump sustained the biggest losses of any American businessman. The Times could not judge Trump's gains and losses for later years because Trump, unlike all U.S. presidents in recent decades, refuses to release his tax information. Nevertheless, the details provided by the Times are relevant to a promise Trump made in 2016. Candidate Trump advertised himself as an extraordinarily successful developer and investor who would do for the country what he had done for himself. The evidence provided by the Times suggests that promise does not inspire confidence.

 

Trump’s intrusions in economic affairs turned aggressive and clumsy in late 2018. An early sign of the shift came when he demanded $5.7 billion from Congress for construction of a border wall. House Democrats, fresh off impressive election gains, stated clearly that they would not fund the wall. The president reacted angrily, closing sections of the federal government. Approximately 800,000 employees took furloughs or worked without pay. Millions of Americans were not able to use important government services. When the lengthy shutdown surpassed all previous records, Trump capitulated. The Congressional Budget Office estimated that Trump’s counterproductive intervention cost the U.S. economy $11 billion. 

 

President Trump’s efforts to engage the United States in trade wars produced more costly problems. Trump referred to himself as “Tariff Man,” threatening big levies on Chinese imports. Talk of a trade war spooked the stock markets late in 2018. Investors worried that China would retaliate, inflating consumer prices and risking a global slowdown. Then Trump appeared to back away from confrontations. The president aided a market recovery by tweeting, “Deal is moving along very well . . .  Big progress being made!”

 

Donald Trump claimed trade wars are “easy to win,” but the market chaos of recent months suggested they are not. When trade talks deteriorated into threats and counter-threats, counter-punching intensified. In May 2019, China pulled away from negotiations, accusing the Americans of demanding unacceptable changes. Trump responded with demands for new tariffs on Chinese goods. President Trump also threatened to raise tariffs against the Europeans, Canadians, Japanese, Mexicans, and others. U.S. and global markets lost four trillion dollars during the battles over trade in May 2019. Wall Street’s decline wiped out the value of all gains American businesses and citizens realized from the huge tax cut of December 2017. 

 

President Trump’s confident language about the effectiveness of tariffs conceals their cost. Tariffs create a tax that U.S. businesses and the American people must pay in one form or another. Tariffs raise the cost of consumer goods. They hurt American farmers and manufacturers through lost sales abroad. They harm the economies of China and other nations, too (giving U.S. negotiators leverage when demanding fairer trade practices), but the financial damage created by trade wars far exceeds the value of any trade concessions that can realistically be won at present. 

 

Agreements between trading partners are best secured through carefully studied and well-informed negotiations that consider both the short and long-term costs of conflict. The present “war” is creating turmoil in global markets. It is breaking up manufacturing chains, in which parts that go into automobiles and other products are fabricated in diverse countries. Many economists warn that the move toward protectionism, championed especially by President Trump, can precipitate a global recession.

 

An approach to trade like Trump’s had unfortunate effects early in the Great Depression. In 1930 the U.S. Congress passed the protectionist Smoot-Hawley Tariff Act that placed tariffs on 20,000 imported goods. America’s trading partners responded with their own levies. Retaliatory actions in the early 1930s put a damper on world trade and intensified the Depression. After World War II, U.S. leaders acted on lessons learned. They promoted tariff reduction and “free trade.” Their strategy proved enormously successful. Integrated trade gave nations a stake in each other’s economic development. The new order fostered seventy years of global peace and prosperity. Now, thanks to a president who acts like he is unaware of this history, the United States is promoting failed policies of the past. 

 

It is not clear how the current mess will be cleaned up. Perhaps the Chinese will bend under pressure. Maybe President Trump will agree to some face-saving measures, accepting cosmetic adjustments in trade policy and then declaring a victory. Perhaps Trump will remain inflexible in his demands and drag global markets down to a more dangerous level. Markets may recover, as they did after previous disruptions provoked by the president’s tweets and speeches. Stock markets gained recently when leaders at the Federal Reserve hinted at future rate cuts. It is clear, nevertheless, that battles over tariffs have already created substantial damage.

 

Pundits have been too generous in their commentaries on the president’s trade wars. Even journalists who question Trump’s actions frequently soften their critiques by saying the president’s tactics may be justified. American corporations find it difficult to do business in China, they note, and the Chinese often steal intellectual property from U.S. corporations. Pundits also speculate that short-term pain from tariff battles might be acceptable if China and other nations accept more equitable trade terms. Some journalists are reluctant to deliver sharp public criticism of Trump’s policy. They do not want to undermine U.S. negotiators while trade talks are underway. 

 

American businesses need assistance in trade negotiations, but it is useful to recall that the expansion of global trade fostered an enormous business boom in the United States. For seven decades following World War II many economists and political leaders believed that tariff wars represented bad policy. Rejecting old-fashioned economic nationalism, they promoted freer trade. Their wisdom, drawn from a century of experience with wars, peace and prosperity, did not suddenly become irrelevant after Donald Trump’s inauguration. Unfortunately, when President Trump championed trade wars, many Americans, including most leaders in the Republican Party, stood silent or attempted to justify the radical policy shifts. 

 

Since the time Donald Trump was a young real estate developer, he has demonstrated little interest in adjusting beliefs in the light of new evidence. Back in the 1980s, when Japan looked like America’s Number One economic competitor, Donald Trump called for economic nationalism, much like he does today. America was being “ripped off” by unfair Japanese trade practices, Trump protested at the time, and he recommended strong tariffs on Japanese imports. If U.S. leaders had followed his advice in the Eighties, they would have cut short decades of fruitful trade relations between the two countries.

 

America’s and the world’s current difficulties with trade policy are related, above all, to a single individual’s fundamental misunderstanding of how tariffs work. Anita Kumar, Politico’s White House Correspondent and Associate Editor, identified Trump’s mistaken impressions in an article published May 31, 2019. She wrote, “Trump has said that he thinks tariffs are paid by the U.S.’s trading partners but economists say that Americans are actually paying for them.” Kumar is correct: Americans are, indeed, paying for that tax on imports. This observation about Trump’s misunderstanding is not just the judgment of one journalist. Many commentators have remarked about the president’s confusion regarding who pays for tariffs and how various trading partners suffer from them. 

 

The U.S. economy has proved dynamic in the decade since the Great Recession thanks in large part to the dedication and hard work of enterprising Americans. But in recent months the American people’s impressive achievements have been undermined by the president’s clumsy interventions. It is high time that leaders in Washington acknowledge the risks associated with the president’s trade wars and demand a more effective policy course. 

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172182
Political Corruption Underwrites America’s Gun-Control Nightmare

Reprinted from The Hidden History of Guns and the Second Amendment with the permission of Berrett-Koehler Publishers. Copyright © 2019 by Thom Hartmann. 

At bottom, the Court’s opinion is thus a rejection of the common sense of the American people, who have recognized a need to prevent corporations from undermining self-government since the founding, and who have fought against the distinctive corrupting potential of corporate electioneering since the days of Theodore Roosevelt. It is a strange time to repudiate that common sense. While American democracy is imperfect, few outside the majority of this court would have thought its flaws included a dearth of corporate money in politics. 

—Justice John Paul Stevens’s dissent in Citizens United 

 

Parkland shooting survivor and activist David Hogg once asked, when Sen. John McCain, R-Ariz., was still alive, why McCain had taken more than $7 million from the NRA (not to mention other millions that they and other “gun rights” groups spent supporting him indirectly). 

 

McCain’s answer, no doubt, would be the standard politician-speak these days: “They support me because they like my positions; I don’t change my positions just to get their money.” It’s essentially what Sen. Marco Rubio, R-Fla., told the Parkland kids when he was confronted with a similar question. 

 

And it’s a nonsense answer, as everybody knows. 

 

America has had an on-again, off-again relationship with political corruption that goes all the way back to the early years of this republic. Perhaps the highest level of corruption, outside of today, happened in the late 1800s, the tail end of the Gilded Age. (“Gilded,” of course, refers to “gold coated or gold colored,” an era that Donald Trump has tried so hard to bring back that he even replaced the curtains in the Oval Office with gold ones.) 

 

One of the iconic stories from that era was that of William Clark, who died in 1925 with a net worth in excess, in today’s money, of $4 billion. He was one of the richest men of his day, perhaps second only to John D. Rockefeller. And in 1899, Clark’s story helped propel an era of political cleanup that reached its zenith with the presidencies of progressive Republicans (a species that no longer exists) Teddy Roosevelt and William Howard Taft. 

 

Clark’s scandal even led to the passage of the 17th Amendment, which let the people of the various states decide who would be their U.S. senators, instead of the state legislatures deciding, which was the case from 1789 until 1913, when that amendment was ratified. 

 

By 1899, Clark owned pretty much every legislator of any consequence in Montana, as well as all but one newspaper in the state. Controlling both the news and the politicians, he figured they’d easily elect him to be the next U.S. senator from Montana. Congress later learned that he not only owned the legislators but in all probability stood outside the statehouse handing out $1,000 bills (literally: they weren’t taken out of circulation until 1969, under Richard Nixon), each in a plain white envelope, to every member who’d voted for him.

 

When word reached Washington, DC, about the envelopes and the cash, the US Senate began an investigation into Clark, who told friends and aides, “I never bought a man who wasn’t for sale.” 

 

Mark Twain wrote of Clark, “He is as rotten a human being as can be found anywhere under the flag; he is a shame to the American nation, and no one has helped to send him to the Senate who did not know that his proper place was the penitentiary, with a chain and ball on his legs.” 

 

State Senator Fred Whiteside, who owned the only non-Clark-owned newspaper in the state, the Kalispell Bee, led the big exposé of Clark’s bribery. The rest of the Montana senators, however, ignored Whiteside and took Clark’s money.

 

The US Senate launched an investigation in 1899 and, sure enough, found out about the envelopes and numerous other bribes and emoluments offered to state legislators, and refused to seat him. The next year, Montana’s corrupt governor appointed Clark to the Senate, and he went on to serve a full six-year term. 

 

Clark’s story went national and became a rallying cry for clean-government advocates. In 1912, President Taft, after doubling the number of corporations being broken up by the Sherman Anti-Trust Act over what Roosevelt had done, championed the 17th Amendment (direct election of senators, something some Republicans today want to repeal) to prevent the kind of corruption that Clark represented from happening again. 

 

Meanwhile, in Montana, while the state legislature was fighting reforms, the citizens put a measure on the state ballot of 1912 that outlawed corporate contributions of any sort to politicians. That same year, Texas and other states passed similar legislation (the corrupt former House majority leader Tom DeLay, R-Texas, was prosecuted under that law). 

 

Montana’s anticorruption law, along with those of numerous other states, persisted until 2010, when Justice Anthony Kennedy, writing for the five-vote majority on the U.S. Supreme Court, declared in the Citizens United decision that in examining more than 100,000 pages of legal opinions, he could not find “. . . any direct examples of votes being exchanged for . . . expenditures. This confirms Buckley’s reasoning that independent expenditures do not lead to, or create the appearance of, quid pro quo corruption [Buckley is the 1976 decision that money equals free speech]. In fact, there is only scant evidence that independent expenditures even ingratiate. Ingratiation and access, in any event, are not corruption.”

 

The US Supreme Court, following on the 1976 Buckley case that grew straight out of the Powell Memo and was written in part by Justice Lewis Powell, turned the definitions of corruption upside down.

 

Two years later, the Court overturned the Montana law itself in the 2012 American Tradition Partnership, Inc. v. Bullock ruling, essentially saying that money doesn’t corrupt politicians, particularly if that money comes from corporations that can “inform” us about current issues (the basis of the Citizens United decision) or billionaires (who, apparently the right-wingers on the Court believe, obviously know what’s best for the rest of us). 

 

Thus, the reason the NRA can buy and own senators like McCain and Rubio (and Thom Tillis, R-N.C./$4 million; Cory Gardner, R-Colo./$3.8 million; Joni Ernst, R-Iowa/$3 million; and Rob Portman, R-Ohio/$3 million, who all presumably took money much faster and much more recently than even McCain) is that the Supreme Court has repeatedly said that corporate and billionaire money never corrupts politicians. (The dissent in the Citizens United case is a must-read: it’s truly mind-boggling and demonstrates beyond refutation how corrupted the right-wingers on the Court, particularly Scalia and Thomas—who regularly attended events put on by the Kochs—were by billionaire and corporate money.)

 

So here America stands. The Supreme Court has ruled, essentially, that the NRA can own all the politicians they want and can dump unlimited amounts of poison into this nation’s political bloodstream. 

 

Meanwhile, angry white men who want to commit mass murder are free to buy and carry all the weaponry they can afford. 

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172188
What We Can't Forget As We Remember Anne Frank

 

On grocery store checkout lines around the country this month, amidst the candy bars and zany tabloids, shoppers will find a glossy 96-page magazine called “Anne Frank: Her Life and Legacy.” Unfortunately, it fails to explain one of the most important but little-known aspects of the Anne Frank story—how her life could have been saved by President Franklin D. Roosevelt. 

 

The new Anne Frank publication, compiled by the staff of Life magazine, is filled with photographs of Anne and her family, and a breezy overview of her childhood, tragically cut short by the Nazi Holocaust. Today, June 12, would have been her 90th birthday. 

 

Little Anne, “thin as a wisp, curious, mercurial, and a know-it-all,” at first enjoyed an idyllic life, “but outside the family circle, the world was changing,” Life recounts. Economic and social crises in Germany propelled Adolf Hitler to power in 1933, and Anne’s father, Otto, quickly moved the family to neighboring Holland for safety.

 

When World War II erupted in 1939, Life reports, Otto “frantically searched for ways to get his family away from the growing conflict” and “he hoped to emigrate to the United States.”

 

That’s all. No accounting of what happened when the Franks sought to emigrate to the United States. No explanation as to why the Roosevelt administration refused to open America’s doors to Anne Frank or countless other Jewish children. 

 

Just the one vague allusion to Otto’s “hope,” and then quickly back to the famous story of Anne hiding in the Amsterdam attic and writing entries in her diary.

 

Here’s the part of the story that Life left out.

 

Laws enacted by the U.S. Congress in the 1920s created a quota system to severely restrict immigration. Roosevelt wrote at the time that immigration should be sharply restricted for “a good many years to come” so there would be time to “digest” those who had already been admitted. He argued that future immigration should be limited to those who had “blood of the right sort”—they were the ones who could be most quickly and easily assimilated, he contended.  

 

As president (beginning in 1933), Roosevelt took a harsh immigration system and made it much worse. His administration went above and beyond the existing law, to ensure that even those meager quota allotments were almost always under-filled. American consular officials abroad made sure to “postpone and postpone and postpone the granting of the visas” to refugees, as one senior U.S. official put it in a memo to his colleagues. They piled on extra requirements and created a bureaucratic maze to keep refugees like the Franks far from America’s shores.

 

The quotas for immigrants from Germany and (later) Axis-occupied countries were filled in only one of Roosevelt’s 12 years in office. In most of those years, the quotas were less than 25% full. A total of 190,000 quota places that could have saved lives were never used at all.

 

Otto Frank, Anne's father, filled out the small mountain of required application forms and obtained the necessary supporting affidavits from the Franks’ relatives in Massachusetts. But that was not enough for those who zealously guarded America's gates against refugees. 

 

Anne’s mother, Edith, wrote to a friend in 1939: "I believe that all Germany's Jews are looking around the world, but can find nowhere to go."

 

That same year, refugee advocates in Congress introduced the Wagner-Rogers bill, which would have admitted 20,000 refugee children from Germany outside the quota system. Anne Frank and her sister Margot were German citizens, so they could have been among those children.

 

Supporters of the bill assembled a broad, ecumenical coalition--including His Eminence George Cardinal Mundelein, one of the country’s most important Catholic leaders; New York City Mayor Fiorello La Guardia; Hollywood celebrities such as Henry Fonda and Helen Hayes; and 1936 Republican presidential nominee Alf Landon and his running mate, Frank Knox. Former First Lady Grace Coolidge announced that she and her neighbors in Northampton, Massachusetts, would personally care for twenty-five of the children.

 

Even though there was no danger that the children would take jobs away from American citizens, anti-immigration activists lobbied hard against the Wagner-Rogers bill. President Roosevelt’s cousin, Laura Delano Houghteling, who was the wife of the U.S. Commissioner of Immigration, articulated the sentiment of many opponents when she remarked at a dinner party that “20,000 charming children would all too soon grow up into 20,000 ugly adults.” FDR himself refused to support the bill. By the spring of 1939, Wagner-Rogers was dead.

 

But Wagner-Rogers was not the only way to help Jewish refugees. Just a few months earlier, in the wake of Germany’s Kristallnacht pogrom, the governor and legislative assembly of the U.S. Virgin Islands offered to open their territory to Jews fleeing Hitler. Treasury Secretary Henry Morgenthau, Jr. endorsed the proposal. 

 

That one tiny gesture by President Roosevelt—accepting the Virgin Islands leaders’ offer—could have saved a significant number of Jews. But FDR rejected the plan. He and his aides feared that refugees would be able to use the islands as a jumping-off point to enter the United States itself.

 

At a press conference on June 5, 1940, the president warned of the “horrible” danger that Jewish refugees coming to America might actually serve the Nazis. They might begin “spying under compulsion” for Hitler, he said, out of fear that if they refused, their elderly relatives back in Europe “might be taken out and shot.” 

 

That's right: Anne Frank, Nazi spy.

 

In fact, not a single instance was ever discovered of a Jewish refugee entering the United States and spying for the Nazis. But President Roosevelt did not shy away from using such fear-mongering in order to justify slamming shut America’s doors.

 

The following year, the administration officially decreed that no refugee with close relatives in Europe could come to the United States.

 

Anne and Margot Frank, and countless other German Jewish refugee children, were kept out because they were considered undesirable. They didn’t have what FDR once called “blood of the right sort.” One year after the defeat of Wagner-Rogers, Roosevelt opened America’s doors to British children to keep them safe from the German blitz. Those were the kind of foreigners he preferred.

 

Life magazine’s tribute to Anne Frank is touching. The photos fill our hearts with pity. But by failing to acknowledge what the Roosevelt administration did to keep the Jews out, Life’s version of history misses a point that future generations need to remember: pity is not enough to help people who are trying to escape genocide.

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172187
What the Feud and Reconciliation between John Adams and Thomas Jefferson Teaches Us About Civility

 

Donald Trump did not invent the art of the political insult, but he has driven vitriolic public discourse and incivility to a new low unmatched by other presidents. Still, in a tainted tradition that has permeated our history, other presidents have not been immune to dishing out acerbic insults against one another.

 

John Quincy Adams was livid that Harvard University planned to award President Andrew Jackson with an honorary degree. He wrote in his diary that Jackson was “a barbarian who could not write a sentence of grammar and hardly could spell his own name.”

 

Franklin Pierce was not as impressed with Abraham Lincoln as history has been, declaring the day after Lincoln issued the Emancipation Proclamation that the president had “limited ability and narrow intelligence.” 

 

The list of spicy presidential insults goes on and on. While such statements are often laugh-aloud funny, they are also shocking and sobering. How can these men who have reached the pinnacle of political power be so crude and demeaning? We can learn a valuable lesson from the friendship and feud between John Adams and Thomas Jefferson, and their ultimate reconciliation.

 

In 1775, the 32-year-old Virginia born-and-bred Jefferson traveled from his mountain-top Monticello mansion to the bustling city of Philadelphia to serve as a delegate to the Second Continental Congress.

 

Sometime in June that year, after Jefferson arrived in the City of Brotherly Love, he met for the first time one of the most prominent and outspoken leaders of the resistance to British domination – John Adams. The Massachusetts attorney was the soft-spoken Jefferson’s senior by seven years. But neither their opposite personalities, nor their age difference, nor the geographical distance separating their homes stood in the way of the start of a remarkable relationship that would span more than half a century. 

 

They forged a unique and warm partnership, both serving on the committee to draft a declaration of independence from British rule. According to Adams, Jefferson had “the reputation of a masterly pen,” and was therefore tasked with using his writing skills to draft the document. Jefferson was impressed with how Adams so powerfully defended the draft of the document on the floor of the congress, even though he thought Adams was “not graceful, not elegant, not always fluent in his public addresses.”

 

In the 1780s, they found themselves thrown together once again as diplomats in Europe representing the newly minted United States. These collaborators and their families were friends.

 

But by 1796, their friendship was obliterated by the rise of political parties with starkly different visions of the new American experiment. With his election that year as the nation’s second president, the Federalist Adams found himself saddled with Jefferson as his vice president representing the Democratic-Republican Party. Tensions were high between the two men. 

 

Just three months after their inauguration as the embryonic nation’s top two elected officials, Jefferson privately groused to a French diplomat that President Adams was “distrustful, obstinate, excessively vain, and takes no counsel from anyone.” Weeks later, Adams spewed out his frustration, writing in a private letter that his vice president had “a mind soured, yet seeking for popularity, and eaten to a honeycomb with ambition, yet weak, confused, uninformed, and ignorant.” 

 

When Jefferson ousted Adams from the presidency in the election of 1800, Adams was forced to pack his bags and vacate the newly constructed Executive Mansion after just a few months. At four o’clock in the morning on March 4, 1801, Jefferson’s inauguration day, the sullen Adams slipped out of the Executive Mansion without fanfare, boarded a public stage and left Washington. The streets were quiet as the president left the capital under the cover of darkness on his journey back home. He wanted nothing to do with the man who had publicly humiliated him by denying him a second term as president, nor did he want to witness Jefferson’s inauguration and moment of triumph. 

 

For the next dozen years these two giants of the American revolution largely avoided one another, still nursing wounds inflicted by the poisonous partisan politics of their era. But on July 15, 1813, Adams made an overture, reaching out to his former friend and foe, writing that “you and I ought not to die until we have explained ourselves to each other.” That letter broke the dam and began a series of remarkable letters between the two men that lasted for more than a dozen years until death claimed them both on July 4, 1826 – the 50th anniversary of the Declaration of Independence. 

 

Not all such presidential feuds have resulted in such heart-warming reconciliations. But the story of Adams and Jefferson serves as a model of what can happen when respect replaces rancor, friendships triumph over political dogma, and we allow reconciliation to emerge from the ashes of fractured friendships. 

 

Adams and Jefferson ultimately listened to one another, explaining themselves. Listening to someone who thinks differently than we do can feel threatening and scary – almost as if by listening to their thoughts we might become infected by their opinions. So we hunker down and lob snarky tweets to attack the humanity and patriotism of others, foolishly hoping such tactics will convince them to change.

 

But what would it look like if we could agree on the core values we share with one another: patriotism, a safe country, a stable society, economic well-being that promotes health, education, food, and housing, and a commitment to treating people with dignity and respect?

 

We could then have vigorous and civil debates about the best policies to implement our values. We won’t always agree with everyone. There will be a wide diversity of opinions. But if we could “explain ourselves” to one another, listen deeply, forge friendships, and understand the hopes and fears and humanity of others, we might actually solve some of the problems that seem so intractable in our polarized society – a society that seems to thrive on extremism on both ends of the political spectrum.

 

Adams and Jefferson ultimately allowed their humanity and deep friendship to triumph over their politics. We can thank them – and even the candid, often irreverent barbs our presidents have aimed at one another – because these insults cause us to reflect on how we should treat one another: not only in the public square, but around the family dinner table, in our marriages, and in the workplace. 

 

Our survival as a nation depends on our ability to listen to those with very different political philosophies, to “explain ourselves” to one another, to search for broad areas of agreement with those of different political philosophies, and to reject the acidic politics of personal demonization in which we attack the humanity or patriotism of others.

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172184
Whatever Happened to an Affordable College Education?

Image: Pixabay

 

As U.S. college students―and their families―know all too well, the cost of a higher education in the United States has skyrocketed in recent decades.  According to the Center on Budget and Policy Priorities, between 2008 and 2017 the average cost of attending a four-year public college, adjusted for inflation, increased in every state in the nation.  In Arizona, tuition soared by 90 percent.  Over the past 40 years, the average cost of attending a four-year college increased by over 150 percent for both public and private institutions.  

 

By the 2017-2018 school year, the average annual cost at public colleges stood at $25,290 for in-state students and $40,940 for out-of-state students, while the average annual cost for students at private colleges reached $50,900.

 

In the past, many public colleges had been tuition-free or charged minimal fees for attendance, thanks in part to the federal Land Grant College Act of 1862.  But now that’s “just history.”  The University of California, founded in 1868, was tuition-free until the 1980s.  Today, that university estimates that an in-state student’s annual cost for tuition, room, board, books, and related items is $35,300; for an out-of-state student, it’s $64,300.

 

Not surprisingly, far fewer students now attend college.  Between the fall of 2010 and the fall of 2018, college and university enrollment in the United States plummeted by two million students.  According to the Organization for Economic Cooperation and Development, the United States ranks thirteenth in its percentage of 25- to 34-year-olds who have some kind of college or university credentials, lagging behind South Korea, Russia, Lithuania, and other nations.

 

Furthermore, among those American students who do manage to attend college, the soaring cost of higher education is channeling them away from their studies and into jobs that will help cover their expenses.  As a Georgetown University report has revealed, more than 70 percent of American college students hold jobs while attending school. Indeed, 40 percent of U.S. undergraduates work at least 30 hours a week at these jobs, and 25 percent of employed students work full-time.

 

Such employment, of course, covers no more than a fraction of the enormous cost of a college education and, therefore, students are forced to take out loans and incur very substantial debt to banks and other lending institutions.  In 2017, roughly 70 percent of students reportedly graduated college with significant debt.  According to published reports, in 2018 over 44 million Americans collectively held nearly $1.5 trillion in student debt.  The average student loan borrower had $37,172 in student loans―a $20,000 increase from 13 years before.

 

Why are students facing these barriers to a college education?  Are the expenses for maintaining a modern college or university that much greater now than in the past?

 

Certainly not when it comes to faculty.  After all, tenured faculty and faculty in positions that can lead to tenure have increasingly been replaced by miserably-paid adjunct and contingent instructors―migrant laborers who now constitute about three-quarters of the instructional faculty at U.S. colleges and universities.  Adjunct faculty, paid a few thousand dollars per course, often fall below the official federal poverty line.  As a result, about a quarter of them receive public assistance, including food stamps.

 

By contrast, higher education’s administrative costs are substantially greater than in the past, both because of the vast multiplication of administrators and their soaring incomes.  According to the Chronicle of Higher Education, in 2016 (the last year for which figures are available), there were 73 private and public college administrators with annual compensation packages that ran from $1 million to nearly $5 million each.

 

Even so, the major factor behind the disastrous financial squeeze upon students and their families is the cutback in government funding for higher education. According to a study by the Center on Budget and Policy Priorities, between 2008 and 2017 states cut their annual funding for public colleges by nearly $9 billion (after adjusting for inflation).  Of the 49 states studied, 44 spent less per student in the 2017 school year than in 2008.  Given the fact that states―and to a lesser extent localities―covered most of the costs of teaching and instruction at these public colleges, the schools made up the difference with tuition increases, cuts to educational or other services, or both.

 

SUNY, New York State’s large public university system, remained tuition-free until 1963, but thereafter, students and their parents were forced to shoulder an increasing percentage of the costs. This process accelerated from 2007-08 to 2018-19, when annual state funding plummeted from $1.36 billion to $700 million.  As a result, student tuition now covers nearly 75 percent of the operating costs of the state’s four-year public colleges and university centers.

 

This government disinvestment in public higher education reflects the usual pressure from the wealthy and their conservative allies to slash taxes for the rich and reduce public services.  “We used to tax the rich and invest in public goods like affordable higher education,” one observer remarked.  “Today, we cut taxes on the rich and then borrow from them.”     

 

Of course, it’s quite possible to make college affordable once again.  The United States is far wealthier now than in the past, with a bumper crop of excessively rich people who could be taxed for this purpose.  Beginning with his 2016 presidential campaign, Bernie Sanders has called for the elimination of undergraduate tuition and fees at public colleges, plus student loan reforms, funded by a tax on Wall Street speculation.  More recently, Elizabeth Warren has championed a plan to eliminate the cost of tuition and fees at public colleges, as well as to reduce student debt, by establishing a small annual federal wealth tax on households with fortunes of over $50 million.

 

Certainly, something should be done to restore Americans’ right to an affordable college education.

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172186
The Challenges of Writing Histories of Autism

Image: Julia is a character on Sesame Street who has autism. 

 

This is a version of an article first published in the May 2019 issue of Participations.  It is reproduced here with the kind permission of the editors.

 

Autism is a relatively new (and increasingly common) disability, and we don’t yet fully understand it.  The symptoms vary enormously from individual to individual. Severity can range from barely noticeable to totally debilitating. The condition often impairs the ability to read but can also result in “hyperlexia”, a syndrome which involves precocious reading at a very early age but also difficulties in reading comprehension. 

 

We have just begun to write the history of autism. Frankly, some of the first attempts stumbled badly, especially over the question of whether “It was there before” – that is, before the twentieth century. That mantra was repeated several times by John Donvan and Caren Zucker in In a Different Key: The Story of Autism (2016). But they and others have found precious few halfway plausible cases in history, nothing remotely like the one in 40 children afflicted with autism reported by the 2016 National Survey of Children's Health. Donvan and Zucker claimed that the “Wild Boy of Aveyron”, the feral child discovered in France in 1800, “had almost certainly been a person with autism.” But autism impairs the ability to perceive danger, and frequently results in early deaths from drowning and other accidents, so it’s not likely that an autistic child could survive long in the wild. And there are barely a dozen examples of feral children in history, so even if they were all autistic, the condition was vanishingly rare.  

 

In Neurotribes (2015) Steve Silberman also argued that autism had been a common part of the human condition throughout history.  His book celebrated Dr. Hans Asperger as a friend and protector of autistic children, even placing his benevolent image on the frontispiece. Critics hailed that version of history as “definitive”. But recently Edith Sheffer, in Asperger’s Children: The Origins of Autism in Nazi Vienna (2018), confirmed that Asperger had been deeply implicated in the Nazi program to exterminate the neurologically handicapped. 

 

Surely if we want to write a full and honest account of the recent history of the autism epidemic, we should interview members of the autism community, defined as including both autistic individuals and their family members. This, however, presents  a number of special obstacles that I encountered when I conducted research for an article that was eventually published as “The Autism Literary Underground." Here I want to explain how we as historians might work around these barriers.

 

For starters, about a third of autistic individuals are nonspeaking, and many others experience lesser but still serious forms of verbal impairment.  But at least some nonspeakers can communicate via a keyboard, and can therefore be reached via email interviews. Email interviews have a number of other advantages: they save the trouble and expense of travel and transcription, they avoid transcription errors and indistinct recordings, and they allow the interviewer to go back and ask follow-up and clarification questions at any time.  This is not to rule out oral interviews, which are indispensable for the nonliterate. But email interviews are generally easier for autism parents, who are preoccupied with the demands of raising disabled children, many of whom will never be able to live independently. These parents simply cannot schedule a large block of time for a leisurely conversation.  When I conducted my interviews, the interviewees often had to interrupt the dialogue to attend to their children.  Perhaps the most frequent response to my questions was, “I’ll get back to you….” (One potential interviewee was never able to get back to me, and had to be dropped from the project.) Ultimately these interviews addressed all the questions I wanted to address and allowed interviewees to say everything they had to say, but in email threads stretching over several days or weeks.

 

Recent decades have seen a movement to enable the disabled to “write their own history”. In 1995 Karen Hirsch published an article advocating as much in Oral History Review, in which she discussed many admirable initiatives focusing on a wide range of specific disabilities – but she never mentioned autism. Granted, autism was considerably less prevalent then than it is today, but the omission may reflect the fact that autism presents special problems to the researcher.  In 2004 the Carlisle People First Research Team, a self-governing group for those with “learning difficulties”, won a grant to explore “advocacy and autism” but soon concluded that their model for self-advocacy did not work well for autistic individuals. Though the Research Team members were themselves disabled, they admitted that they knew little about autism, and “there was an obvious lack of members labelled with autism or Asperger’s syndrome” in disability self-advocacy groups throughout the United Kingdom.  The Research Team concluded that, because autism impairs executive functioning as well as the ability to socialize and communicate, it was exceptionally difficult for autistic individuals to organize their own collective research projects, and difficult even for nonautistic researchers to set up individual interviews with autistic subjects.

 

Self-advocacy groups do exist in the autism community, but they inevitably represent a small proportion at the highest-performing end of the autism spectrum: they cannot speak for those who cannot speak.  We can only communicate with the noncommunicative by interviewing their families, who know and understand them best. 

 

One also has to be mindful that the autism community is riven by ideological divisions, and the unwary researcher may be caught in the crossfire.  For instance, if you invite an autistic individual to tell their own story, they might say something like this:

As a child, I went to special education schools for eight years and I do a self-stimulatory behavior during the day which prevents me from getting much done. I’ve never had a girlfriend. I have bad motor coordination problems which greatly impair my ability to handwrite and do other tasks. I also have social skills problems, and I sometimes say and do inappropriate things that cause offense. I was fired from more than 20 jobs for making excessive mistakes and for behavioural problems before I retired at the age of 51.

Others with autism spectrum disorder have it worse than I do.  People on the more severe end sometimes can’t speak. They soil themselves, wreak havoc and break things. I have known them to chew up furniture and self-mutilate. They need lifelong care.[7]

 

This is an actual self-portrait by Jonathan Mitchell, who is autistic. So you might conclude that this is an excellent example of the disabled writing their own history, unflinchingly honest and compassionate toward the still less fortunate, something that everyone in the autism community would applaud. And yet, as Mitchell goes on to explain, he has been furiously attacked by “neurodiversity” activists, who militantly deny that autism is a disorder at all. They insist that it is simply a form of cognitive difference, perhaps even a source of “genius”, and they generally don’t tolerate any discussion of curing autism or preventing its onset.  When Mitchell and other autistic self-advocates call for a cure, the epithets “self-haters” and “genocide” are often hurled at them. So who speaks for autism?  An interviewer who describes autism as a “disorder”, or who even raises the issues that Mitchell freely discussed, might well alienate a neurodiversity interviewee. But can we avoid those sensitive issues? And even if we could, should we avoid them?  

 

Mitchell raises a still more unsettling question: Who is autistic? The blind, the deaf, and the wheelchair-bound are relatively easy to identify, but autism is defined by a complex constellation of symptoms across a wide spectrum – and where does a spectrum begin and end? You could argue that those with a formal medical diagnosis would qualify, but what about those who are misdiagnosed, or mistakenly self-diagnosed? What if their symptoms are real but extremely mild: would an oral historian researching deafness interview individuals with a 10 percent hearing loss? Mitchell contends that neurodiversity advocates cluster at the very high-functioning end of the spectrum, and suspects that some aren’t actually autistic:

Many of them have no overt disability at all.  Some of them are lawyers who have graduated from the best law schools in the United States. Others are college professors. Many of them never went through special education, as I did. A good number of them are married and have children. No wonder they don’t feel they need treatment.

 

Precisely because neurodiversity advocates tend to be highly articulate, they increasingly dominate the public conversation about autism, to the exclusion of other voices. Mitchell points to the Interagency Autism Coordinating Committee, an official panel that advises the US government on the direction of autism research: seven autistic individuals have served on this body, all of whom promote neurodiversity, and none favor finding a cure.  The most seriously afflicted, who desperately need treatment, are not represented, and they “can’t argue against ‘neurodiversity’ because they can’t articulate their position. They’re too disabled, you might say.”

 

The severely disabled could easily be excluded from histories of autism, unless the researcher makes a deliberate effort to include them, and in many cases we can only include them by interviewing their families. My own research relied on email interviews with self-selected respondents to a call for participants I had posted on autism websites. Though I made clear that I wanted to communicate with autistic individuals as well as with other members of their families, only the latter responded. As Jan Walmsley has rightly pointed out, consent is a thorny issue when we interview the learning disabled. I specified that I would only interview responsible adults -- that is, those who were not under legal guardianship -- but that proviso effectively excluded a large fraction of the autism community. For researchers, that may present an insurmountable difficulty.

 

Yet another ideological landmine involves the causes of autism, for many in the autism community believe it is a disorder that results from adverse reaction to vaccination.  In my own research, this was the group I chose to focus on.  The mainstream media generally treat them as pariahs and dangerous subversives, denounce them repetitively, and almost never allow them to present their views.  But that kind of marginalization inevitably raise troubling questions: Are these people being misrepresented?  What is their version of events?  And since they obviously aren’t getting their ideas from the newspapers or television networks, what exactly are they reading, and how did that reading shape their understanding of what has been inflicted on them? 

 

So I started with a simple question: What do you read? Unsurprisingly, many of my subjects had read the bestselling book Louder Than Words (2007) by actress Jenny McCarthy, where she describes her son’s descent into autism and argues that vaccination was the cause. Doctors have expressed horror that any parent would follow medical advice offered by a Playboy centerfold, but a historian of reading might wonder whether the reader response here is more complicated.  Are readers “converted” by books, or do they choose authors that they already sympathize with?  My interviewees reported that, well before they read Louder Than Words, they had seen their children regress into autism immediately following vaccination.  They later read Jenny McCarthy out of empathy, because she was a fellow autism parent struggling with the same battles that they had to confront every day.

 

Granted, my sample was quite small, essentially a focus group of just six self-selected parents. Occasionally oral historians can (through quota sampling) construct large and representative surveys, for instance Paul Thompson’s landmark 1975 study of Edwardian Britain, but it would be practically impossible to do the same for the fissured and largely nonspeaking autism community. What oral historians can sometimes do is to crosscheck their findings against large statistical surveys. For instance, my respondents said that they read Jenny McCarthy not because she was a celebrity, but because she was an autism mom. They were corroborated by a poll of 1552 parents, who were asked whom they relied on for vaccine safety information: just 26 percent said celebrities, but 73 percent trusted parents who reported vaccine injuries in their own children. To offer another illustration: vaccine skeptics are often accused of being “anti-science”, but my interviewees produced lengthy bibliographies of scientific journal articles that had shaped their views. They were supported by a survey of 480 vaccine skeptic websites, of which 64.7 percent cited scientific papers (as opposed to anecdotes or religious principles).

 

I often describe autism as an “epidemic”. This is yet another flashpoint of controversy. Public health officials generally avoid the word, and many journalists and neurodiversity activists fiercely argue that autism has always been with us. As a historian who has investigated the question, I have concluded (beyond a reasonable doubt) that autism scarcely existed before the twentieth century, and that it is now an ever-spreading pandemic. To explain the evidence behind this conclusion would require a very long digression, though I can refer the reader to a robust demonstration. The essential point here is that any interviewer who refers to autism as an “epidemic” may alienate some of his or her interviewees.

 

So how do we handle this situation – or, for that matter, any other divisive issue? All oral historians have opinions: we can’t pretend that we don’t. But we can follow the ethic of an objective reporter. A journalist is (or used to be) obligated to report all sides of an issue with fairness, accuracy, and balance. Journalists may personally believe that one side is obviously correct and the other is talking nonsense, but in their professional capacity they keep those opinions to themselves and assure their interviewees that they are free to express themselves. One has to accept that not everyone will be reassured. I found myself variously accused of being (on the one hand) an agent of the pharmaceutical companies or (on the other) an antivaccinationist. (I am neither.) But most of my subjects were quite forthcoming, once I made clear that the article I was writing would neither endorse nor condemn their views.

 

Of course, if any of the voices of autism are stifled, then the true and full story of the epidemic will be lost. Some honest and well-researched histories of autism have been produced, notably Chloe Silverman’s Understanding Autism and Edith Sheffer’s Asperger’s Children. Although Silverman only employs a few interviews, her work is distinguished by a willingness to listen closely to autism parents. And in her chilling account of the Nazi program to eliminate the mentally handicapped, Sheffer uncovered the voices of some of its autistic victims in psychiatric records. What both these books suggest is that we could learn much more about autism as it was experienced by ordinary people simply by talking to them. Many of them protest that the media only reports “happy news” about autism (e.g., fundraisers, job training programs) and prefers not to dwell on the dark side (neurological damage, unemployment, violent outbursts, suicide), and these individuals are usually eager to tell their stories. To take one striking example, in 2005 the New York Times dismissed the theory that thimerosal (a mercury-containing preservative in some vaccines) might cause autism in a front-page story headlined “On Autism’s Cause, It’s Parents vs. Research” (suggesting that parents did no research). One of my interviewees had herself been interviewed by Gardiner Harris, one of the reporters who filed the Times story, and she offered a very different version of events:

Harris misidentified one of the two women in his opening anecdote. He described an autistic child’s nutritional supplements as “dangerous,” though they had been prescribed by the Mayo Clinic for the child’s mitochondrial disorder—facts he did not disclose. Three times Harris asked me, “How do you feel?” rather than, “What scientific studies led you to believe thimerosal is harmful to infants?"

 

Rather than rely solely on “the newspaper of record” (or any other newspaper), historians can find correctives and alternative narratives in oral interviews. Oral history has made an enormous contribution to reconstructing the history of the AIDS epidemic and the opioid epidemic, and it will be no less essential to understanding the autism epidemic.

 

Tue, 18 Jun 2019 14:41:13 +0000 https://historynewsnetwork.org/article/172185
On the Eve of Pride 2019, D.C. LGBT Community Reflects on Its Own History with the Lavender Scare

 

“I really think it is so important to remember that there were people who were taking a stand in the years before Stonewall and people who really had the courage to get the movement rolling in the 1960s. Their efforts should be recognized.”

 

As the question and answer session after Wednesday night’s screening of The Lavender Scare was wrapping up, director Josh Howard reminded the audience of the focus of his documentary: the systematic firing of and discrimination against LGBT people under the Eisenhower administration, told from their perspective. The screening included a Q&A afterwards featuring Howard; David Johnson, the historian whose book inspired the film; and Jamie Shoemaker, who appears in the film as the first person to successfully resist the policy. The screening was timely: D.C.’s Pride parade is Saturday, June 8, and the 50th anniversary of the Stonewall riots is Friday, June 28. The Lavender Scare will premiere on PBS on June 18. 

 

Most of the seats in the Avalon Theatre were filled. After the film and applause ended, Howard asked a question he likes to ask every audience at a screening: how many of you were personally affected or knew someone who was affected by the Lavender Scare? Almost everyone in the audience raised their hands. 

 

The Q&A was an open dialogue, with several people standing and telling stories of how they were personally tied to the events of the film and the movement in general. Several were connected to the central figure of the documentary, former prominent activist Frank Kameny. One man who had grown up with another prominent activist, Jack Nichols, explained, “when Jack was picketing in front of the White House, I was quite aware. In fact, Frank and Jack did some of the planning in my apartment at the time; but because I was a teacher, I couldn’t have anything to do with it, because if my picture was in the paper, then my career would’ve been over.”

 

The policy harmed the careers of some in the audience, though. “I had gone to Frank for guidance before my interview at NSA,” one gentleman recalled, “and he told me ‘don’t say anything, don’t answer anything that you’re not asked,’ and so forth. Anyway, I was not hired and I’m frankly very glad now that I was not hired.” Experiences such as those reflect just how wide-reaching the policy was; it not only drove gay people out of government jobs but also discouraged them from applying for positions in the first place. 

 

Frank Kameny’s impact on the D.C. community was evident. In attendance was his former campaign manager from 1971, who recalled that the day after they announced the campaign, “we received a check in the mail for $500 from actors Paul Newman and Joanne Woodward. We used that money to travel to New York to meet with Gay Activist Alliance of New York.” Similarly, one of his former colleagues on the board of the ACLU in Washington recounted that as they defended their license to meet, “the issue was whether names [of gay members] would be revealed, and while Frank was very happy and very brave to have his name revealed, he didn’t feel that he could just turn over names of other people. That’s what he was fighting against in the agencies.” 

 

While the film successfully showed the struggle faced by the LGBT community, the conversation afterwards reflected the hope that many in the community feel today. Jamie Shoemaker, who was once almost fired from the NSA, described the progress that he’s seen. “All of the security clearance agencies now have LGBT groups that are very active, including the NSA. One year after I retired, they paid me to come out to give a speech about my experiences… they (the groups) are very active and it’s really a good scene in these agencies now. What a difference,” he said. The theatre was immediately filled with applause. 

 

Many expressed a desire for reparations in some form or another. David Johnson, who authored The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government, highlighted the LOVE Act, a bill introduced in the Senate that would “mandate that the State Department investigate all of its firings since 1950. They would collect information from either fired employees or their families, and I think most importantly, though, it would mandate that their museum, the US Diplomacy Center, actually have a permanent exhibit on the Lavender Scare.” Once again, the room broke into applause.

 

The Capital Pride Parade will take place on Saturday, June 8th across multiple locations in Washington. The 50th anniversary of the Stonewall riots is Friday, June 28. The Lavender Scare will premiere on PBS on June 18.

Arbella Bet-Shlimon Got Into History to Counter False Perceptions of Middle East Region

 

Arbella Bet-Shlimon is an Assistant Professor in the Department of History at the University of Washington, a historian of the modern Middle East, an adjunct faculty member in the Department of Near Eastern Languages and Civilization, and an affiliate of the Jackson School's Middle East Center. Her research and teaching focus on the politics, society, and economy of twentieth-century Iraq and the broader Persian Gulf region, as well as Middle Eastern urban history. Her first book, City of Black Gold: Oil, Ethnicity, and the Making of Modern Kirkuk (Stanford University Press, 2019), explores how oil and urbanization made ethnicity into a political practice in Kirkuk, a multilingual city that was the original hub of Iraq's oil industry. She received her PhD from Harvard University in 2012.

 

What books are you reading now?

 

I just wrote an obituary for my Ph.D. advisor, Roger Owen, in the latest issue of Middle East Report, and I read his memoir A Life in Middle East Studies prior to writing it. It proved to be a fascinating retrospective on the development of our field over the twentieth century. At the moment, I am digging into the work of the multilingual Kirkuki poet Sargon Boulus, and scholarship about him, as I write an article about the idea of the city of Kirkuk as a paragon of pluralism in northern Iraq. This is a topic I became interested in when I was researching my book on Kirkuk’s twentieth-century history, City of Black Gold, just published by Stanford University Press.

 

Why did you choose history as your career?

 

I decided to make history a career after I was already in graduate school in an interdisciplinary program. I started that program with a goal: to counter inaccurate and stereotyped perceptions of the Middle East among Americans. These spurious ideas were fostering cruelty to Middle Easterners at home and prolonging destructive foreign policy abroad. I concluded that researching, writing, and teaching the modern history of the region would be the best way to meet that goal. The way I stumbled into this conclusion was essentially accidental, but I’ve never looked back.

 

It was an unexpected change of direction, because I hadn’t taken a single history class in college. And history, according to most college students who haven’t taken a history class, is boring. We have an image problem. Just look at the most famous secondary school in the world: Hogwarts (from the Harry Potter universe). This is a school where one of the tenure lines has a jinx on it that leaves professors fired, incapacitated, or dead after one year, but its worst course isn’t that one. Instead, its worst course is a plain old history class, taught by a droning ghost professor who bores even himself so thoroughly that he doesn’t realize he died a long time ago. High school students (real-life ones, I mean) will frequently tell you that they hate history because it’s just memorizing lists of things, or their teacher just makes them watch videos. That’s not what history is beyond the K-12 realm, of course—neither college history nor popular history is anything like that—and there are some great K-12 history teachers who don’t teach that way. But it’s a widespread stereotype rooted in some truth. I didn’t actively dislike history prior to pursuing it full time, but it hadn’t even occurred to me to consider it a possible career.

 

What qualities do you need to be a historian?

 

Qualities that are central to any research career. For instance, a high tolerance for delayed gratification, because you can knock your head against a research question for years before the answers start to come to fruition in any publishable form. And you need to be willing to be proven wrong by the evidence you find.

 

Who was your favorite history teacher?

 

My dad was my first history teacher. I learned a lot about the history of the Middle East riding in the car as a kid.

 

What is your most memorable or rewarding teaching experience?

 

Once, at a graduation event, a graduating student told me that a conversation he’d had with me during office hours was one of the main reasons he did not drop out of college. I had no idea my words had had that impact at the time. I think we professors are often not aware of the small moments that don’t mean much to us but change a student’s life (both for the worse and for the better).

 

What are your hopes for history as a discipline?

 

Institutional support; support from the parents or other tuition funders of students who want to pursue history as their major; and stable, contracted teaching positions with academic freedom protections for those who have advanced degrees in history and wish to work in academia.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I don’t collect artifacts, but I’ve used my research funds to acquire a few things that are hard to find and have been indispensable to my work. For instance, I have copies of the full runs of a couple of rare periodicals from Kirkuk that I acquired while writing my book. They’re almost impossible to find even in global databases—and when you come across something like that in someone’s private collection, you have to get a copy somehow.

 

What have you found most rewarding and most frustrating about your career? 

 

The most rewarding thing about being a historian is when a student tells me that their perspective on the world has been transformed by taking my class. The most frustrating thing is the pressure from so many different directions to downsize humanities and social science programs.

 

How has the study of history changed in the course of your career?

 

That’s a very broad question, but I can speak specifically about my own field of Middle Eastern history. When I set out to write a PhD dissertation on Iraq, some colleagues in my cohort reacted with surprise because, they pointed out, it would be extremely difficult to conduct research there. One fellow student told me that he’d started grad school interested in Iraq but realized after the 2003 US invasion that he wouldn’t be able to go there, so he switched his focus to Egypt. Since then, though, many more conflicts have developed and brutal authoritarian rulers have become more deeply entrenched. Nobody researching the history of the Middle East today can assume that the places they are interested in will be freely accessible or that any country’s archives are intact and in situ. And even if we can visit a place, it may not be ethical to talk to people there about certain sensitive topics. At the same time, we know that we can’t just sit in the colonial metropolis and write from the colonial archives, as so many historians of a previous generation did. So I think many Middle East historians have become more methodologically creative in the past decade, asking new sorts of questions and tapping into previously underappreciated sources.

 

What are you doing next?

 

Right now, I’m trying to understand Iraq’s position in the Persian Gulf, shifting my focus toward Baghdad and its south. Historians of Iraq have written extensively about its experience as a colonized, disempowered country, but have less often examined how expansionist ideas were key to its nation-building processes throughout the twentieth century. This becomes clear from the perspective of Kirkuk. It’s also clear when looking at Iraq’s relationship with Kuwait, which Iraq has claimed as part of its territory at several points. I’m in the early stages of gathering sources on this topic.

Here Comes the D-Day Myth Again

 

Last Friday (May 31, 2019), the NPR radio program “On Point” conducted a special live broadcast from the National WWII Museum in New Orleans entitled “75th Anniversary Of D-Day: Preserving The Stories Of WWII Veterans." The host was NPR’s media correspondent David Folkenflik, and the segment featured Walter Isaacson, professor of history at Tulane University, and Gemma Birnbaum, associate vice president of the World War II Media and Education Center at The National WWII Museum, as guests. This writer was not only looking forward to an engaging discussion of the successful Allied landings at the Normandy beaches on June 6, 1944, but also hoping that the guests would present the contemporary state of military history research on the significance of D-Day.

 

I was sorely disappointed. Instead of placing the invasion within the wider context of the war against Nazi Germany, Folkenflik and his guests revived the “Myth of D-Day,” that is, they reinforced the erroneous belief that D-Day was the decisive battle of the Second World War in Europe, that it marked “the turning of the tide,” and that it sealed the doom of the German Army, the Wehrmacht. Had D-Day failed, so the argument goes, Germany could have still won the war, with nightmarish consequences for Europe, the United States and the world as a whole. This myth is a legacy of the Cold War, when each side accentuated what it did to defeat Nazi Germany, the most monstrous regime in human history, and played down the contributions of the other side. Russian students today, for example, are taught that the Soviet Union won the “Great Patriotic War” practically single-handedly, without having previously cooperated with Nazi Germany and without having committed any atrocities, which is a creative approach to interpreting the history of World War II, to say the least. But it also remains the case that far too many American, British and Canadian students are taught that the victory over Nazi Germany was mostly the work of the Anglo-American forces, which also is a distortion of the truth. 

 

This “Allied scheme of history,” as the Oxford historian Norman Davies calls it, was most consistently presented by Gemma Birnbaum on the On Point broadcast. She not only reiterated the belief that D-Day was necessary to defeat Nazi Germany, but her words also suggested that, until then, Germany was somehow winning the war. Before the Allies invaded France, she said, the Wehrmacht “was moving all over the place.” According to her, it was only after the German defeat in Normandy that “fatigue began to set in” among German soldiers. But “fatigue” had already begun to spread throughout the Wehrmacht in the late fall of 1941, when the Red Army stopped the Germans at the gates of Moscow. It is true that the Germans continued to “move all over” Europe afterwards, but they increasingly began doing so in a backwards motion. It is depressing to consider that Birnbaum co-leads the educational department of the World War II museum in New Orleans, where she has the opportunity to pass her myopic views of the war on to countless young people, thus ensuring the perpetuation of the D-Day myth. Not much has changed in the museum, it would seem, since 2006, when Norman Davies commented: “Yet, once again, the museum does not encourage a view of the war as a whole. Few visitors are likely to come away with the knowledge that D-Day does not figure among the top ten battles of the war.”

 

Many military historians would now contend that, if there was indeed any “turning point” in the European war, it took place before Moscow in December 1941. For it was then that Germany lost the chance to win the war it had hoped to win. It was also at that point that the Soviets forced upon the Germans a war of attrition. As the Stanford historian James Sheehan points out, there are no decisive battles in wars of attrition, only milestones along the way to victory, as the enemy is slowly but surely reduced to a condition of weakness in which it can no longer continue the fight. In that sense, the other important milestones were Stalingrad (February 1943), after which it became increasingly clear that Germany was going to lose the war, and Kursk (July 1943), after which it became increasingly clear that the Russians were coming to Berlin, with or without the help of the Western Allies.

 

Any objective look at the human and material resources available to Nazi Germany by the spring of 1944, especially compared to those available to the Allies, makes the claim that D-Day saved the world from a Nazi-dominated Europe preposterous. Such arguments are not history but science fiction. We need only consider that in May 1944, the German field army had a total strength of 3.9 million soldiers (2.4 million of whom were on the Eastern front), while the Soviet Red Army alone had 6.4 million troops. Moreover, while the Wehrmacht had used up most of its reserve troops by 1942, Joseph Stalin could still call up millions more men to fight. While Germany was rapidly running out of the food, fuel, and raw materials an army needs to fight a protracted war, the stupendous productive capacities of the United States, through the Lend-Lease program, made sure that the Soviet soldiers were well-fed and equipped for their final assault on Germany. Add to this the continual pounding that German industry and infrastructure were taking from the Anglo-American air war, which also forced the German military to bring back invaluable fighters, anti-aircraft artillery, and service personnel to the home front, and it becomes obvious that Germany was fated to lose the war long before any Allied soldiers reached the beaches of Normandy. The German army was defeated on the Western front, to be sure, but it was annihilated in the East. Until almost the very end of the war, somewhere between 60 and 80 per cent of the German divisions were stationed in the East, and that was where they were wiped out. But the Soviets paid a horrific price for their victory. According to the Military Research Office of the Federal German Army, 13,500,000 Soviet soldiers lost their lives in the fight against Nazi Germany. The United Kingdom lost some 326,000 soldiers. The United States lost 43,000 men in Europe.

 

In light of such statistics, one can only imagine how offended many Russians, Ukrainians and Byelorussians must feel today when they hear Americans congratulating themselves for having been the ones who defeated the Nazis. Nevertheless, the host of the On Point broadcast, David Folkenflik, introduced one segment with the claim that the United States had played the “dominant” role in achieving victory in World War II. Regarding the Pacific theater, there is no doubt about this. But after considering the scale of the fighting on the Eastern front of the European war, Folkenflik’s contention becomes absurd. Unfortunately, such comments are still all too common. The English historian Giles Milton, for instance, has recently published a book entitled “D-Day: The Soldiers’ Story,” in which he writes that the tide against Nazi Germany “had begun to turn” by the winter of 1942, but he still reserves the final turning for D-Day. So it is no wonder that many Russians today feel that people in the West fail to give them the credit they deserve for achieving victory in World War II.

 

This is important to contemporary politics: if the tensions between Russia and the United States are ever to be overcome, then there will have to be more American recognition and appreciation of the sacrifices of the Soviet peoples in World War II. Otherwise Americans will continue to make it easier for Vladimir Putin to engage in his own historical myth-making to help legitimize his increasingly authoritarian rule. To be fair, a full discussion of the Eastern Front would have made Folkenflik’s broadcast too long and unfocused. Moreover, it is only to be expected that, when a nation reflects on the past, it concentrates on its own historical achievements. But that cannot be a license for spreading false historical beliefs. At least a brief mention of the Eastern front would have been merited. 

 

To acknowledge that D-Day was no “turning of the tide” in no way implies that it was not an important, or even a crucial, battle of the Second World War. Had the landings failed, as the American Allied Supreme Commander Dwight D. Eisenhower feared they might, the war could have dragged on for several more years. In that case, the Nazis would have come much closer to their goal of exterminating every last Jewish man, woman and child in Europe. Not to mention the hundreds of thousands, perhaps millions more military and civilian casualties that would have ensued. Victory in Normandy saved countless lives. In the final analysis, however, the greatest strategic consequence of the battle lies elsewhere.

 

The true significance of D-Day was briefly mentioned during the On Point episode by Walter Isaacson. (He was also the only participant who did not engage in overt exaggeration of D-Day’s importance for defeating Nazi Germany.) Isaacson made the most sensible comment of the entire program when he pointed out that, had D-Day failed, a lot more of Europe would have fallen under the control of the Soviet Union than actually did. In truth, without D-Day, Soviet T-34 tanks would certainly have crossed the Rhine and most likely would have reached the French Atlantic coast. As the English military historian Anthony Beevor has discovered, “a meeting of the Politburo in 1944 had decided to order the Stavka [Soviet High Command] to plan for the invasion of France and Italy. . . The Red Army offensive was to be combined with a seizure of power by the local Communist Parties.” D-Day may not have been necessary to defeat Nazi Germany, but it was needed to save western Europe from the Soviet Union. As Beevor observes, “The postwar map and the history of Europe would have been very different indeed” if “the extraordinary undertaking of D-Day had failed.”

 

By all means, then, we should commemorate the heroism and sacrifices of the Anglo-American soldiers who fought and died on D-Day. They all made important contributions to liberating western Europe and achieving victory over Nazi Germany. But national pride must never be allowed to distort historical reality. The successful Allied landings in Normandy accelerated Germany’s defeat, but they didn’t bring it about. The German military historian Jörg Echternkamp puts it well: “A straight path leads from the beginning of the two-front war to the liberation of Europe from Nazi domination roughly one year later. Nevertheless, the German defeat had by this time long since been sealed on the eastern European battlefields by the Red Army. This is all too easily concealed by the strong media presence of D-Day today." The credit for vanquishing Adolf Hitler’s armies should go first and foremost to the Soviet Red Army. Again, Norman Davies is correct when he writes: “All one can say is that someday, somehow, the present fact of American supremacy will be challenged, and with it the American interpretation of history.” For now, however, as the On Point broadcast has shown, popular understanding of D-Day in the United States continues to be more informed by myth than reality. 

 

How Should Historians Respond to David Garrow's Article on Martin Luther King, Jr.?

 

 

Pulitzer Prize winner and noted historian David Garrow made headlines last week after Standpoint published his article on the FBI’s investigation of Martin Luther King, Jr. The FBI’s documents allege King’s involvement in numerous extra-marital affairs, relations with prostitutes, and presence during a rape. In response, scholars have questioned the reliability of the FBI records Garrow used to make such claims. These documents, and the resulting controversy, should also lead scholars to ask questions about the ways in which historians can and should address the history of gender and sexuality when it intersects with the histories of the civil rights movement, American religion, and the development of the surveillance state.

 

First, King and many of the clergy involved in the civil rights movement took a different approach toward interacting with women than some other well-known preachers, particularly Billy Graham. In 1948, evangelist Billy Graham and his staff agreed to a compact known as the Modesto Manifesto. This informal compact dealt with a number of issues from distributions of revival offerings to relations with local churches to what would become more colloquially known as the Billy Graham rule: men on the team would never be alone with a woman who was not their wife. While the rule may have kept the evangelist, who was noted in the press for his fair looks, and much of his team on the straight and narrow, it no doubt limited the opportunities of women within his organization and marked women as dangerous to men, particularly preachers.

 

The Billy Graham rule would have been impractical for the civil rights movement. The work of women was essential to the growth and success of the movement, and it would have been nearly impossible for civil rights leaders, such as King, to avoid contact with women and still have a thriving movement. Sociology professor Belinda Robnett established that, in the civil rights movement, it was very often women who linked leaders like King to supporters at the local level. These bridge leaders recruited more activists to the cause and ensured the general running of civil rights organizations. Some of the women named in Garrow’s essay served as bridge leaders, and as a consequence were especially vulnerable to such charges in an era when Graham’s rule was influential. 

 

Those with traditional moral values reading David Garrow’s recent article on the alleged sexual proclivities of Martin Luther King Jr. might come to the conclusion that if King had instituted the Billy Graham rule, he never would have had the opportunity for extramarital affairs. They might imagine that there would have been no possibility that the FBI could have made such allegations, true or false. That, however, is unlikely to have been the case. While King’s moral failings are perhaps best left for him and his creator to resolve, it is certain that, given the climate at the FBI at the time and J. Edgar Hoover’s special animus toward King, as Garrow described in this work, there would have been continual attempts to try to establish some kind of moral failing with which to undermine one of America’s two most famous preachers.

 

The most controversial claim in these documents is a reference to an oddly edited document purporting to be a summary of electronic surveillance in which an unnamed Baptist minister forcibly raped a female parishioner while King looked on. While Garrow questions some documents, according to a Washington Post article, he seems to have fewer questions about the authenticity of this summary. King advisor Clarence Jones points out that, while this rape should be condemned if it did occur, the question remains: why did Hoover not turn over the evidence to other officials? It would have provided Hoover with the opportunity he had been seeking to undermine one of America’s most recognized preachers.

 

Jones, of course, is asking a question that all civil rights historians should ask, but we should also ask other questions. How do these documents reflect a callous disregard for women? If this incident did happen, why did the FBI not seek justice for this unnamed woman? And, if it did not, how little did Hoover’s men value women that they thought an incident like this could be easily invented and the duplicity go unnoticed, and how did that attitude affect their investigation of King? We should also ask whether the Billy Graham rule set American expectations for the private behavior of very public clergy.

 

Women’s bodies are often sexualized, and black women’s bodies even more so. In these documents, it is clear that the FBI placed an emphasis on what it deemed these women’s immoral or abnormal sexual choices, ranging from oral sex to adultery to prostitution to lesbian partners. Even when they perhaps should have, the agents express little to no concern for the women; the concern is for the state. These women’s bodies mattered to the FBI only when they may have been in a position to play a part in compromising America’s foremost black preacher and make him susceptible to communist influence, or when those same bodies offered the FBI an opportunity to expose that preacher’s failings.

 

For some of the women named in the documents used in the Garrow article, the evidence of sexual activity is scant, merely referring to them as girlfriends or women who wanted to know why King hadn’t come by when he was in their area. In another instance, an orgy between King, a prostitute, and a well-known female gospel singer is described. For historians to focus on these instances now, with so much of the evidence from biased sources, and some of it still under seal, feels a bit like participating in historical slut shaming. For these women, whatever their sexual choices were over fifty years ago, there is no escape. Salacious details, real or fictional, lie forever in the National Archives.

 

In this case, much of what we’d like to know in regard to these controversies will not be revealed until the court order sealing the records expires in 2027, and may not be resolved even then. T. E. Lawrence once wrote that “the documents are liars.” It is the task of every historian to determine to what extent that is true, but it is also the task of every historian to examine the ways in which documents may tell unplanned truths about our past, even if that makes us uncomfortable. 

Roundup Top 10!  

A Black Feminist’s Response to Attacks on Martin Luther King Jr.’s Legacy

by Barbara Ransby

We should not become historical peeping Toms by trafficking in what amounts to rumor and innuendo.

 

About the FBI’s Spying

by William McGurn

What’s the difference between surveillance of Carter Page and Martin Luther King?

 

 

What D-Day teaches us about the difficulty — and importance — of resistance

by Sonia Purnell

For four years, a few French citizens fought a losing battle. Then they won.

 

 

After Tiananmen, China Conquers History Itself

by Louisa Lim

Young people question the value of knowledge, a victory for Beijing 30 years after the crackdown on student protests.

 

 

How True-Crime Stories Reveal the Overlooked History of Pre-Stonewall Violence Against Queer People

by James Polchin

The history of such crimes tends to be lost.

 

 

Hitler told the world the Third Reich was invincible. My German grandfather knew better

by Robert Scott Kellner

As a political organizer for the Social Democrats, Kellner had opposed the Nazis from the beginning, campaigning against them throughout the duration of the ill-fated Weimar Republic.

 

 

How racism almost killed women’s right to vote

by Kimberly A. Hamlin

Women’s suffrage required two constitutional amendments, not one.

 

 

Who Will Survive the Trade War?

by Margaret O’Mara

History shows that big businesses profit most when tariffs reign.

 

 

Of Crimes and Pardons

by Rebecca Gordon

The United States was not always so reluctant to put national leaders on trial for their war crimes.

 

 

Trump Is Making The Same Trade Mistake That Started The Great Depression

by John Mauldin

Similar to today, the Roaring 1920s saw rapid technological change, namely automobiles and electricity.

 

 

 

The Making of the Military-Intellectual Complex

by Daniel Bessner and Michael Brenes

Why is U.S. foreign policy dominated by an unelected, often reckless cohort of “the best and the brightest”?

Why 2019 Marks the Beginning of the Next Cycle of American History

 

A century ago, historian Arthur Schlesinger, Sr. argued that history occurs in cycles. His son, Arthur Schlesinger, Jr., furthered this theory in his own scholarship. As I reflect on Schlesinger’s work and the history of the United States, it seems clear to me that American history has three 74-year-long cycles. America has had four major crisis turning points, each 74 years apart, from the time of the Constitutional Convention of 1787 to today.

 

The first such crisis occurred when the Founding Fathers met in Philadelphia in 1787 to face the reality that the government created by the Articles of Confederation was failing. There was a dire need for a new Constitution and a guarantee of a Bill of Rights to save the American Republic. The founding fathers, under the leadership of George Washington, were equal to the task and the American experiment successfully survived the crisis. 

 

For the next 74 years, the Union survived despite repeated disputes over American slavery. Then, in 1861, the South seceded after the election of Abraham Lincoln and the Union’s refusal to allow this secession led to the outbreak of the Civil War. In this second crisis, exactly 74 years after the Constitutional crisis of 1787, two-thirds of a million people lost their lives and, in the end, the Union survived.

 

The war was followed by the tumultuous period of Reconstruction, and the regional sectionalism that had led to the Civil War continued. As time passed, with the growth of the industrial economy, the commitment to overseas expansion, and widespread immigration, the United States prospered over the next three-quarters of a century until the Great Crash on Wall Street and the onset of the Great Depression under President Herbert Hoover in 1929. The economy was at its lowest point as Franklin D. Roosevelt took the oath of office in 1933.

  

World War II broke out in 1939—exactly 74 years after the end of the Civil War (1865). While America did not officially enter the war for two years, it is clear that the danger of the Axis Powers (Nazi Germany, Fascist Italy, Imperial Japan), on top of the struggles of the Great Depression, marked a clear crisis in American history.  Fortunately, America had the leadership of Franklin D. Roosevelt to lead us through the throes of the Great Depression and World War II. 

 

Once the Second World War ended in 1945, America entered a new period that included the Cold War with the Soviet Union and tumult in America due to the Civil Rights Movement and opposition to American intervention in wars in  Korea, Vietnam, and the Middle East.  The saga of Richard Nixon and Watergate seemed to many to be the most crisis-ridden moment of the post World War II era. But the constitutional system worked, and the President’s party displayed courage and principle and accepted that Nixon’s corruption and obstruction of justice meant he had to go.  Certainly, Watergate was a moment of reckoning, but the nation moved on through more internal and external challenges.

 

2019 is exactly 74 years after 1945, and it is clear that America is once again in a moment of crisis. As I have written before, I believe that today’s constitutional crisis is far more serious and dangerous than Watergate. Donald Trump promotes disarray and turmoil on a daily basis, undermines our foreign policy and domestic policy, and is in the process of working to reverse the great progress and accomplishments of many of his predecessors going back to the early 20th century. The past 74 years have produced a framework of international engagement – the World Trade Organization and free trade agreements, the United Nations and conflict resolution, and a series of treaties like the Non-Proliferation Treaty and the Paris Climate Agreement. Nearly all of these accomplishments of the past 74-year cycle are now under threat. 

 

The rise of Donald Trump is not an isolated phenomenon as similar leaders have come to power in much of the world in the past couple of years. This has occurred due to the technological revolution and the climate change crisis.  Both trends have convinced many that the post-1945 liberal world order is no longer the solution to global issues and that authoritarian leadership is required to deal with the economic and security challenges that the world faces. Charismatic figures claim to have the solutions to constant crisis by stirring racism, nativism, anti-Semitism, Islamophobia, misogyny, and xenophobia.  

 

In some ways, this is a repeat of what the world faced in the late 1930s, but as this is the present instead of the past, we have no certainty that the major western democracies can withstand the crises and preserve democratic forms of government.  As America was fortunate to have George Washington, Abraham Lincoln, and Franklin D. Roosevelt in earlier moments of turmoil and crisis, the question now is who can rise to the occasion and save American prosperity and the Constitution from the authoritarian challenge presented by Donald Trump.

Beijing’s Tiananmen Square Massacre

Image: A rally of more than a million people who swarmed the city’s Happy Valley Race Course, to protest the killings.  To the right, young Hong Kongers in white (the Chinese color of mourning); to the left, at a private club, indifferent expats enjoy tonics by the pool. By then, many of them had already packed their bags.

 

This week marks the 30th anniversary of the infamous June 4th massacre in Beijing — when People’s Liberation Army troops, under the command of the Chinese Communist Party, murdered an unknown number of students in the Chinese capital. Estimates of the death toll range from 200 to 2,500, according to various independent accounts. Most were killed by automatic gunfire, but many were crushed beneath the steel treads of army tanks.

 

The sudden, merciless crackdown strangled a blossoming democracy movement led by university students and workers and sent shock waves around the world.  But nowhere felt the sheer terror of the mass murders more than the then-British colony of Hong Kong,  where I was working as a young reporter.

 

The knowledge that the slaughter in Beijing could happen again in Hong Kong shattered the city’s confidence in its own future. For 147 years, Hong Kong had been a British colony that encouraged a free-wheeling capitalist system. It became a showcase of social and economic freedom juxtaposed against the historically brutal communist China. Under British rule, Hong Kong had enjoyed a laissez faire economic system, creating a capitalist economy that was the envy of the world. The people had enjoyed freedoms of the press, speech, assembly and movement in and out of the territory. While local Hong Kong Chinese tycoons had largely run the city’s booming local business community, real political power rested with the governor, who had always been appointed by Britain’s Prime Minister, a situation which suited the Hong Kong Chinese just fine.

 

Under the terms of the historic 1984 Joint Declaration, signed by Britain and China, the British Crown Colony and its 5.6 million residents would revert to Chinese rule on July 1, 1997. But the prospect of now being under the direct control of the People’s Liberation Army was one which deeply frightened the people of Hong Kong — especially since most of their parents or grandparents had fled China after the communist takeover in 1949. Barring an unforeseen political turnaround by the Beijing regime, experts predicted a massive outflow of people and investments from Hong Kong. 

 

In the weeks leading up to the Beijing bloodbath, and in the dark days that followed, a normally non-political Hong Kong underwent an immense groundswell of cultural pride, and an almost overnight political awakening. As millions marched and swarmed to the city’s early optimistic rallies in proud support of the students’ democratic movement in late May, the long-held belief that Hong Kong people only cared about money was put to rest.

 

When a crowd of 300,000 packed the city’s Happy Valley Race Course on May 27th for a day-long concert to raise funds for the Beijing students, which I attended, organizers hoped to raise $250,000. By evening, the total take was a generous $3 million.

 

As the many thousands waved yellow ribbons—borrowed from the 1986 Philippines’ People’s Power Revolution which had toppled  Filipino dictator Ferdinand Marcos— Hong Kong’s leading singers switched from their usual sappy love songs to passionately patriotic tunes such as For Freedom, Heir of the Dragon, Be a Brave Chinese! and Standing as One.

 

Normally hard-hearted taxi-drivers and mini-van owners refused to accept fares from people heading to the rallies, while street-side fishmongers and vegetable hawkers donated part of their day’s earnings to the Beijing students’ democratic movement. All the leading Chinese and English-language newspapers got involved, publishing emotional editorials supporting the aims of the Beijing students. The city’s many glossy magazines replaced the usual pouting pop stars and beauty queens with the handsome face of Beijing student leader Wu’er Kaixi. 

 

But as the days of proud defiance turned into a single night of horror on June 4th, Hong Kong reacted to the Beijing bloodbath with shock, sadness, anger, and finally, outrage. The marches and rallies continued, swelling in size to two million. But the mood was now somber and grim. White—the traditional Chinese color of mourning—replaced yellow. And hundreds of marchers—now old as well as young—openly cried in the tropical heat of the teeming streets, something I had never seen before or since. The public was shaken by the senseless slaughter they’d all seen on television.

 

When the Hong Kong Red Cross pleaded for blood donations for the many injured in Beijing, the call was answered by over 1,000 people per day. Catholic residents of the British territory, including many resident Filipinos, attended a special mass in memory of the slain students, celebrated by Cardinal John Wu and 300 priests. Buddhist services were held in temples across the territory.

 

In a unique waterborne protest, 200 fishing boats assembled in busy Victoria Harbour, forming the largest flotilla ever seen in Hong Kong. For five hours fishing captains and their crews circled the harbor to pay respects to the young victims of the massacre in China’s ancient capital.  Huge black banners, reading “For democracy, for freedom—the fishermen have come!” lay draped across the boats’ wheelhouses. 

 

 

Newspaper headlines frighteningly alluded to a possible second Chinese civil war. The Hong Kong stock market plunged 300 points—perhaps the sharpest single-day fall since 1949. And a massive brain drain began that eventually grew into a human flood, as close to a million Hong Kong families steadily fled their homes. The best and the brightest of the middle class fled to Australia, Canada, the UK and the United States. The less affluent acquired passports that suddenly became available, for a fee, from remote and obscure poverty-stricken nations in Africa and the South Pacific.

 

Today, as the world marks the massacre’s 30th anniversary, post-Handover Hong Kong is ruled from Beijing and the city’s 7.6 million people still sit in the historical shadow of that slaughter.

 

China has never officially released a realistic death toll and Hong Kong stopped asking long ago.  Most of the local media is now controlled by Beijing, including the leading English-language newspaper, the South China Morning Post—owned by pro-Beijing billionaire Jack Ma, founder of Alibaba. Even public references to the Tiananmen Square massacre have been watered down – gradually moving to the more politically acceptable “Tiananmen incident.” Or merely mumbled as “Tiananmen…”

 

Self-censorship is now rampant in Hong Kong’s once vibrant press, but sometimes more direct pressure is applied. In 2014, the editor of Ming Pao—a popular leading liberal paper seen as not supportive enough of Beijing—was attacked on his way to work by several people wielding meat cleavers. He barely survived and will walk with a limp for the rest of his life. Some 13,000 people marched in the streets in protest, and young reporters carried signs reading “You can’t kill us all.”

 

 

Beijing is increasingly working to strip away Hong Kong’s freedoms, and Hong Kongers have become like the proverbial frog sitting in a pot of water on a stove, where the heat is slowly increased. Beijing has called for Hong Kong’s high court judges to act not as independent magistrates, but as “civil servants” for the government — acting on its behalf rather than that of justice. Two-thirds of the members of the respected 112-year-old Law Society of Hong Kong angrily protested. But one-third of the lawyers remained silent.

 

One poignant and pertinent link still binds Hong Kong to the tragic events in Beijing all these years ago: the Goddess of Democracy statue. The original, modeled on America’s Statue of Liberty, was built by the Beijing students in just a few days out of fragile foam and papier-mâché on a metal frame. It stood 35 feet tall, designed to be seen from any point in the vast 109-acre square. The People’s Liberation Army destroyed it on the night of June 4th.  

 

Every year, on June 4th,  hundreds of thousands of Hong Kong people remember the Beijing students by gathering for a candle-lit memorial at the city's beloved Victoria Park.

 

But in 2010, a full-scale bronze replica of the Goddess was created in Hong Kong to become a part of the annual memorial services in Victoria Park, in the very heart of the city, where as many as 200,000 people gathered each year. However, after the 2010 gathering, the Hong Kong government decided to move the politically embarrassing statue to the Chinese University of Hong Kong’s campus, far outside the city, for permanent display. 

 

The hope being: Out of sight, out of mind. Just the way that Beijing wants it.

Remembering Murray Polner (1928-2019)

 

I knew Murray Polner, HNN’s book editor until May 2017, long before I began submitting reviews.  This was his second stint as my editor, the first being in the 1990s, when I became a contributing writer for PS: The Intelligent Guide to Jewish Affairs.  

 

PS was a newsletter he created with co-editor Adam Simms, as a postscript to his time as editor-in-chief of Present Tense, published by the American Jewish Committee from 1973 to 1990 as a liberal counterpoint to Commentary.  (That other AJC publication had evolved under Norman Podhoretz from a forum for liberals and moderate leftists to become the earliest exponent of a new intellectual and political current known as neoconservatism.)    

 

Working with Murray at PS, I got to know him and his wife Louise personally.  I recall visiting their home in Great Neck a couple of times, including for a vegan Passover Seder.  And I was privileged, over 20 years ago, to be included in a big bash at a Manhattan venue in honor of his 70th birthday.   

 

His career trajectory was astonishingly distinctive.  He served in the US Naval Reserve from 1947 to 1952 and then in the US Army from 1953 until ‘55, but he became disillusioned with the military and evolved into a pacifist. In this connection, he worked with a local antiwar group to oppose the possible renewal of military conscription after the Carter administration reinstituted compulsory registration for young men.  And he served as the editor of Fellowship, the organ of the pacifist Fellowship of Reconciliation from 1991 to ’93.      

 

In the late 1950s and early ‘60s, Murray taught at Thomas Jefferson High School in Brooklyn, and then in adjunct capacities at Brooklyn College, Queens College and Suffolk Community College.  He also served as executive assistant to Harvey Scribner, the first chancellor of New York’s public school system.     

After graduating from CCNY in 1950, he pursued graduate studies in the late 1960s, earning an MA in history from the University of Pennsylvania and a Ph.D. in Russian history at Union Institute and University in 1972. In the meantime, he began publishing the first of his eight books, No Victory Parades: The Return of the Vietnam Veteran, in 1970. Then came a work on amnesty for draft resisters, When Can I Come Home? A Debate on Amnesty for Exiles, Antiwar Prisoners, and Others, in 1972.   

The first of his books with a Jewish theme was Rabbi: The American Experience, published in 1977. This was followed by two anthologies: The Challenge of Shalom: The Jewish Tradition of Peace & Justice, co-edited with Naomi Goodman in 1994, and Peace, Justice, and Jews: Reclaiming Our Tradition, co-edited with Stefan Merken in 2007.

 

He linked his passion for baseball with his devotion to social justice in Branch Rickey: A Biography. Published in 1982, this was about the sports executive who broke the color line in Major League Baseball by bringing Jackie Robinson to the Brooklyn Dodgers in 1947.

 

Returning to the theme of pacifism, Murray felt the need to enlist a knowledgeable Catholic as co-author of Disarmed and Dangerous: The Radical Lives and Times of Daniel and Philip Berrigan, the militant anti-war priests, published in 2007.  His choice was Jim O’Grady, a biographer of the Catholic political radical, Dorothy Day, and a reporter for WNYC public radio.    

 

Murray’s pacifism drew admiration from the antiwar right as well as the left. In 2008, he co-edited We Who Dared to Say No to War: American Antiwar Writing from 1812 to Now with Thomas E. Woods, Jr., a libertarian. And from 2001 until 2015, Murray wrote numerous scathing opinion pieces on US foreign policy for the rightwing antiwar website LewRockwell.com, which marked his passing immediately with a piece by Mr. Woods. 

 

This is what Rick Shenkman, HNN’s founder, emailed to Murray’s son Rob upon learning of his passing: 

 

“Murray went back almost to the beginning of HNN nearly 20 years ago. I marveled at his productivity into his nineties and his subtle grasp of the key issues facing the country.  

 

“I knew he was slowing down when he asked to retire as HNN’s book editor, but then he surprised me by indicating he wanted to keep up the blog.  And so he did! 

 

“Murray played a big role at HNN and was instrumental in our success.  

 

“I admit I was always jealous because he made writing seem easy.  But when you looked deeply into his complex sentences you realized he was an old-fashioned wordsmith who worked over his paragraphs until they sang.”

 

And this is from Rob’s message to Rick:

 

“Just a few days before we lost him, he dictated to me a letter to the editor of the NY Times, asking why the editorial page had not seen fit to warn about a possible US war with Iran. That was my dad, a lion to the end, yet also as sweet as a pussycat.”

 

Murray is survived by Louise, his wife of over 68 years, their daughter Beth Polner Abrahams, their two sons, Rob and Alex, and six grandchildren.  May his memory long endure.  

D-Day 75 Years Later and the Quest for Peace

D-Day planning map, used at Southwick House

 

My father Vincent was wounded clearing the mines on Omaha Beach following the D-Day invasion 75 years ago. At the hospital next to him was one of the D-Day paratroopers who had been dropped into France to fight the Nazis. When my father told the paratrooper about removing mines, his reply was "gee, that's dangerous." My father could not believe that a daring paratrooper would consider that a risky job! Both my father and the paratrooper were lucky in the sense that they did not suffer the worst of injuries. But so many others lost their lives in the invasion to retake Europe from the hold of Nazi Germany.

The D-Day landings of June 6, 1944 and the invasion that followed brought about the end of the German war machine. D-Day led to freedom for millions who had been suffering under Nazi occupation. It's important to remember D-Day because of what so many brave soldiers were sacrificing for: to build peace. As General Dwight Eisenhower told Walter Cronkite, Americans and the Allies came together “to storm these beaches for one purpose only. Not to gain anything for ourselves, not to fulfill any ambitions that America had for conquest, but just to preserve freedom. I think and hope, and pray, that humanity will have learned ... we must find some way ... to gain an eternal peace for this world.”

It's special to remember these ideals because so often today we hear leaders talking about war recklessly. When discussing a potential military campaign, some even say things like it would only take a matter of days, as if it were all so easy. Some leaders flaunt military might and spending to the extreme. It’s scary when our leaders seem to have no concept of what war is or of its human cost. Not only are soldiers at risk in war, but civilians too. Part of the Allied invasion of Europe was civil affairs units bringing relief supplies to feed the hungry. War always leads to food shortages and hunger. This relief had to continue for years across the continent.

We need our leaders to be thoughtful, like Eisenhower, about the critical issues of war and peace. Eisenhower, as president, avoided war. He was deeply concerned about too much military spending and sought arms control. Today, we need to pursue disarmament among the militaries of the world. It is a tragedy in itself when nations have to commit so many of their precious citizens and resources to war. My father remarked how his fellow soldier, Lou Siciliano, was a really educated man who could be doing so many other things to help society, "but this is what happens in war."

And for the families back home there is pain during war and in its aftermath, especially for those wounded. My father, after being hurt by pellets from a mine that exploded 50 feet away, was lucky that a letter he sent reached his mother before the War Department’s telegram. Her brother had been killed during World War One, and the trauma of that War Department telegram arriving first would have been extreme. My father had to live with pellets in his legs the rest of his life, but it did not cause him too much trouble except perhaps toward the end, when his mobility was extremely limited. Many other families were not so fortunate after the D-Day invasion. Cemeteries in France mark the fallen soldiers. They are not alone in their grief. To this day many military families suffer that terrible news about loved ones lost. For some families of injured soldiers, care is needed for a lifetime. When you think of these families, it can reinforce the mission for peace.  
American and Allied soldiers lost their lives on D-Day so that others might live free. The best way to honor D-Day veterans’ sacrifice is to work for that elusive but achievable goal of eternal world peace.   

Remembering Rome's Liberation

Celebrations as Rome is liberated

 

 

Amid the justifiable hoopla this week surrounding the launch of the D-Day invasion, it is important to note, too, the anniversary of an event that unfolded just two days earlier, on June 4, 1944: the Allied liberation of Rome. 

 

Despite Churchill's promises of quick victory in the “soft underbelly” of Hitler's Fortress Europe, the prize was hard-won indeed. Italy proved a “tough, old gut,” in the words of the GIs' master chronicler Ernie Pyle. The Germans were intent on fighting for every inch of their southern flank. 

 

It began in summer '43 with Sicily, a campaign famous for the race that developed between General George Patton and Field Marshal Bernard Montgomery, his British rival, to be the first into the stepping-stone to the Italian mainland, Messina. Things got no easier after Italian dictator Benito Mussolini was arrested and Italy dropped out of the war. “All roads lead to Rome,” lamented theatre commander Sir Harold Alexander, “but all the roads are mined.” There were landings at Salerno, then chaotic Naples, where, as if on cue, looming old Vesuvius blew its top, giving the boys from Allen Park and Kalamazoo something to write about in their V-mail to the family back home. 

 

Twin, bloody stalemates followed over the fall and winter: at the swollen Rapido River in the Apennine Mountains south of Rome, near Cassino, and at Anzio, the seaside resort where Nero once fiddled as his eternal city burned. Stars and Stripes cartoonist Bill Mauldin captured the dilemma facing Yanks, Tommies, Poles and other infantry units as they struggled to overcome fierce German resistance: “[We] seemed to find ourselves generally looking up, not down, at the enemy.” 

 

The breakthrough finally came in May of 1944, as advance troops pushed across the Alban Hills and then to the southern outskirts of Rome. Fearing another Stalingrad, Hitler agreed to allow his forces to pull back to positions 150 miles north, where the war would rage on for another full year. Fifth Army chief Mark Clark's jeep convoy got lost on the Via della Conciliazione near St. Peter's, until a priest from Detroit offered directions to the city center. “They didn't even let us have the headlines for one day,” Clark was heard to complain when the gaze of the press shifted, suddenly and overwhelmingly, to Normandy. 

 

It was in any case a glorious moment of triumph, paid for by the sacrifice of the 7,860 mostly young men who lie today buried in the American Cemetery at Nettuno, Anzio's sister city. “My God, they bombed that, too!” a member of the 1st Special Service Force exclaimed as columns in olive drab marched past the Coliseum. Ex-pat Jane Scrivener recorded the sight as they moved down the posh Via Veneto, marking for Rome's citizens the end of months of brutal Nazi occupation: 

 

                          They were dusty, battle-worn and unshaven, but they smiled and waved in response to the greetings of the crowd. They had roses in the muzzles of their rifles and miniature Italian flags, which had been thrown to them. They had roses stuck in the camouflage nets of their helmets and in their shirts. 

 

“One has read these things in books, and accepted them as fiction,” Scrivener added, “never dreaming of witnessing them as we did today.” 

 

Depicting the Devil: How Propaganda Posters Portrayed Nazi Ideology

 

In 1925, a bellicose Adolf Hitler understood that he needed the power of mass persuasion to push his political ideology on the German people. Citing propaganda as an essential component of statecraft in Mein Kampf, he wrote that propaganda must “awaken the imagination of the public through an appeal to their feelings, in finding the appropriate psychological form that will arrest the attention and appeal to the hearts of the national masses.” In its early phases, the Nazi party largely depended on Hitler’s own oratory gifts and stage presence to gather more interest and support. This changed dramatically with the party’s rise to political prominence and Hitler’s partnership with chief propagandist Joseph Goebbels. 

 

Goebbels immediately went to work weaponizing German history and mythology. In 1933, the Nazi government set up the Ministry of Propaganda with Goebbels at its helm. The Ministry identified two primary threats that needed to be brought to public attention and eliminated: so-called internal and external enemies of Germany and the German people. Jews, communists, Roma, homosexuals, religious groups such as Jehovah's Witnesses, and other minority groups were labeled as subversives: domestic enemies who were working actively against the country and its success. The perceived unbearable conditions of the Treaty of Versailles and the myth of the “Stab in the Back” were popular rallying cries and served nationalists as supposed historical proof. Propaganda was wielded like a weapon, painted in broad strokes and utilized as a means for both domestic suppression and international aggression.

 

Initially, radio, newspapers, and even movies such as Leni Riefenstahl's Triumph of the Will (1935) were popular vectors of transmission, but over time, less expensive visual means were also employed. As in other parts of the world, the propaganda poster became a popular and easily mass-produced misinformation tool. The striking nature of posters, coupled with psychological messaging, created an emotional response in viewers. The fact that they were unavoidable and plastered at various public locations was also a major convenience for Goebbels and the Nazi propaganda machine. 

 

One of the most important themes found in Nazi propaganda posters, and in Nazi propaganda in general, was the adoration of Adolf Hitler. Der Fuhrer (leader) was likened to a god-king and a messianic human representation of the will of Germany. Nazi propagandists created numerous artifacts which were not only on public display but also pushed into the private realm of Germans, who were encouraged to keep such artifacts of Hitler in their homes. The phrase “Ein Volk, ein Reich, ein Fuhrer” (One people, one empire, one leader) was embedded in the minds of the people and pushed across various mediums to cement the idea that the party was in complete charge of the German state and its people.

 

 

Obsessed with imagery based around a national community, the Nazis utilized artists to create emotionally stirring and provocative pieces of art. To strengthen the idea of domestic enemies, the vilification of Jews was one of the most common themes in Nazi propaganda posters. Jews were often depicted as bloodthirsty demons feeding on pure-blooded Germans, as Bolshevik infiltrators, as a plague upon the land, or as puppet masters secretly controlling the levers of power across national borders, especially in the economic sectors. This vile dehumanization used various motifs to encapsulate Nazi ideology.

 

Many posters focused less on vilification and more on the glorification of the Aryan body and the idea of a pure-blooded society. Aryan superiority was showcased through triumphant imagery of the unrealistically strong and perfect male and female body. Often, so-called subhuman classes of people were also drawn to highlight their supposed natural inferiority to the Aryan. Jews were further racialized in this way with caricature-style depictions such as overly large noses and claws instead of hands. The Nazi obsession with family and the roles of men and women was also intertwined with encapsulations of what constituted the perfect German individual and, in turn, the German population as a whole. The Nazis strongly believed in the traditional roles of men and women. This was closely linked to the Nazi ideology of a Volksgemeinschaft (national community), which transcended class and religious differences to create a sense of racial camaraderie and national pride. 

 

 

Nazi propagandists naturally needed to sell their military aggression to the civilians at home. Weary and exhausted from World War I, the German public was in no mood for the triumphalism of war on the eve of the invasion of Poland in 1939. Propagandists therefore framed wars of aggression as “self-defense” and portrayed territorial acquisition as completely necessary and justified for the preservation of the Aryan race. Upon reaching the age of 18, boys were required to join military service or the Reich Labor Service. Recruitment posters claimed that military service was for “freedom and life.”

 

Propaganda extended into other realms of everyday life as well, including education. Students were routinely encouraged to become the Fuhrer's “little propagandists.” Educators were expected to join the National Socialist Teachers League, and by 1936 close to 97% had done so - one of the highest percentages of any profession in the country. Boys and girls between the ages of 10 and 17 were expected to join the Hitler Youth. Originally established as a youth training program to prepare young men to become part of the Sturmabteilung (SA), it morphed into a mandatory after-school program to mold children to be faithful to the Nazi party and its leadership. A popular poster from the time reads: “Leader - all 10-year-olds into the Hitler Youth.”

 

 

A recurring problem that plagued the Nazi leadership and its attempts at total control of the media in Germany was foreign broadcasts. Artists tackled this issue by presenting listeners to broadcasts from London, New York, Moscow and other ‘hostile’ locations as traitors. Utilizing guilt and the idea of racial and national disloyalty, the Nazis tapped into psychological manipulation to set their agenda and attempt to eliminate any threats to their control of the media landscape. 

 

Nazi propagandists targeted virtually every segment of society in Germany, from the individual to the overarching state, and from the inner family sanctum to the international policies of aggression and Aryanism. Nazi propaganda attempted to normalize the dehumanization of entire groups of people deemed unworthy according to the strict racist policies implemented on a national level. The poster became a cheap transmitter of these various messages, combining the visual arts with psychological methods to incessantly repeat Nazi ideology to the German public. 

Witch Hunt! How Europe’s Witch Mania Came to the New World

 

When I decided to write Poison in the Colony, a historical novel about the Jamestown colony, I was up against a problem. It was 2014 and young readers had made their desires clear: if a book didn’t contain a fantasy world, vampires, or at least a few wizards, they did not want to read it. I pondered how I might incorporate the other-worldly aspects that young readers wanted in a novel about real people and real historical events. I decided to blend in a believable and historically accurate bit of magical realism—my main character would have the gift of the “second sight,” i.e. strong intuitive powers.

 

I figured that a young person with a strong sense of intuition—she would know things she wasn't supposed to know, she would have small peeks into the future—might well be accused of witchcraft. Even better, I thought. The plot thickens!

 

The fear of witches was at fever pitch in Britain in the early 1600s, and that fear had traveled across the ocean with the colonists. Witchcraft was condemned by the church, and at the time church doctrine taught that witches had willingly entered into a pact with the devil. In Europe, between 1500 and 1660, an estimated 80,000 people were put to death for allegedly practicing witchcraft. Eight out of ten of these were women. Death was normally meted out through strangling and burning at the stake, beheading, or hanging. 

 

I started to research the years the book would cover, 1613 to 1622, and the family of Anne Burras and John Laydon and their daughter Virginia. Virginia was to be my main character, blessed (or cursed) with the second sight. In my research, I stumbled upon two stories that both shocked me and let me know that I was on the right track.

 

Anne Burras and Jane Wright, a left-handed midwife, were assigned to make shirts for the colony. The thread they were provided turned out to be rotten, and so the shirts were not made properly. They were accused of stealing from the colony. This was during the period of Martial Law, known for brutal punishments. For this transgression the two women were whipped severely. Anne was pregnant at the time and miscarried. Were Anne and Jane purposely given bad thread? Were they framed for the crime of stealing from the colony? Midwives and left-handed people were vulnerable to accusations of being in consort with the devil. Did someone accuse these women of witchcraft and was this their punishment? 

 

My research led me to another story: the first witch trial ever recorded in the New World. It happened in 1626 and took place in the Jamestown settlement in Virginia. The accused witch? This same Jane Wright. I felt as though my hunches, and my characterization of Virginia, might well be founded in historical reality.

 

Witch hunts in the New World began in Virginia, though Virginia's alleged witches were imprisoned rather than executed. The first American witch executions occurred in Windsor, Connecticut, in 1647 (46 accused, 11 executed). The mania reached a peak during the famous Salem, Massachusetts witch trials in 1692: over 200 people were accused, 150 were arrested, 20 were executed, and another 4 died in jail awaiting trial.

 

These hapless women and men were accused of causing a variety of problems: storms that wrecked ships, the death of humans and livestock, illness, and crop failure. Some “witches” were even accused of entering people’s rooms at night in apparition form and biting, scratching, and/or trying to strangle them. Any unexplained difficulty or calamity, or, it seems, strange dreams, could be blamed on a witch.

 

Social class figured prominently in these matters, with those of high status more often having the authority and credibility to do the accusing, and the poor most often bearing the brunt of the accusations. Acting outside the norm in any way, even by simply being left-handed, was suspect, as was knowing how to use herbs to heal, being a midwife, or, as a woman, being unmarried. Sometimes the alleged witch had quarreled with neighbors over land boundaries or given birth to a child out of wedlock. Often, men who were accused of sorcery were those who had tried to defend their wives or other female family members when they were accused.

 

There does seem to be some truth to the accusations that these “witches” correctly predicted events and deaths, making it appear as though there was something going on beyond the imaginations of the accusers. When Jane Wright was brought to trial, the court records state that she had correctly predicted the deaths of several colonists and used witchcraft to kill a newborn as well as destroy crops and livestock. Knowing when a death will happen and predicting it, and actually causing the death, are of course two very different things, but this conflation certainly contributed to the belief in sorcery.

 

In Poison in the Colony, I wrote about intuitive abilities that were passed down through women. Virginia’s grandmother, a character I invented, also had the gift of the second sight. She was convicted of sorcery, strangled and burned at the stake in England—all the more chilling for Virginia to know the specifics of what could befall her. 

 

This belief that being in consort with the devil was passed down from mother to daughter has its roots in history. As part of the finger pointing in Salem, Massachusetts, the four-year-old daughter of Sarah Good, one of the accused witches, was also arrested and put into prison. The child lingered in prison for between six and eight months, and by the time she was released she was so traumatized that she was never able to take care of herself. 

 

When her mother, Sarah Good, stood on the platform ready to be hanged, the Reverend Nicholas Noyes urged her to confess to being a witch. Good replied, “You are a liar. I am no more a witch than you are a wizard, and if you take away my life God will give you blood to drink!” 

 

In 1717, the Reverend Nicholas Noyes died of an internal hemorrhage, choking on his own blood.

Donald Trump, the Humanities, and the Decline of American Values

The main reading room of the Library of Congress

 

During April 2019 several pieces appeared on the HNN website dealing with the decreasing interest in the humanities, including history. One of them was entitled  “US declining interest in history presents risk to democracy.” Commenting on President Trump’s poor knowledge of history, it observed that he “is a fitting leader for such times.” Another article, abridged from The New York Times, was “Is the U.S. a Democracy? A Social Studies Battle Turns on the Nation’s Values.” These essays stirred me to ask, “What is the connection, if any, between President Trump, the decline of the humanities, and U.S. values?”

 

Let’s begin with American values. While any generalizations present difficulties, they can at least help us get closer to important truths. A valuable indicator of American values, first published in 1950, is historian Henry Steele Commager’s The American Mind.  Regarding “the nineteenth-century American,” he wrote, “Often romantic about business, the American was practical about politics, religion, culture, and science.” In the next several pages, Commager also generalizes that the average American’s culture “was material”; there “was a quantitative cast to his thinking”; “theories and speculations” disturbed him, and “he avoided abstruse philosophies of government or conduct”; his “attitude toward culture was at once suspicious and indulgent,” and he expected it (and religion) to “serve some useful purpose”; and “he expected education to prepare for life — by which he meant, increasingly, jobs and professions.” “Nowhere else,” the historian noted, “were intellectuals held in such contempt or relegated to so inferior a position.” 

 

A dozen years after the publication of Commager’s book, Richard Hofstadter’s Anti-Intellectualism in American Life (1962) appeared. Over a year ago, I discussed that historian’s insights as they applied to present-day U. S. culture and President Trump.  Hofstadter noted that “the first truly powerful and widespread impulse to anti-intellectualism” arose during the Jackson era. This anti-intellectualism was common among evangelicals and it was reflected in the popularity of the Horatio Alger rags-to-riches myth, the increasing emphasis on vocational training, the popularity of self-help gurus like Norman Vincent Peale, and the strong impact in the early 1950s of McCarthyism. 

 

I then indicated how all these points were connected to Trump, that he “epitomizes the anti-intellectual strain in American culture,” and that he has never “evidenced any interest in the humanities or liberal arts. Literature, history, philosophy, the arts, and any interest in foreign cultures have remained alien to him.”

 

 

At the end of Commager’s book he asked a number of questions about the future. What would U. S. education educate people about? How would Americans use their increasing leisure?  Increasingly abandoning “traditional moral codes,” would “they formulate new ones as effective as those they were preparing to abandon?” “Would they preserve themselves from corruption and decadence?” “Could they preserve their pragmatism from vulgarization?”

 

In the seven decades that have passed since the publication of The American Mind, the answers we have provided to these questions regarding education, leisure, morality, corruption, decadence, and vulgarization have been more negative than positive. 

 

In 1985, Neil Postman wrote in Amusing Ourselves to Death, “Our politics, religion, news, athletics, education and commerce have been transformed into congenial adjuncts of show business, largely without protest or even much popular notice. The result is that we are a people on the verge of amusing ourselves to death.”  In 1993, Zbigniew Brzezinski, former National Security Adviser to U. S. President Jimmy Carter, stated that television had “become prominent in shaping the [U. S.] national culture and its basic beliefs,” that it “had a particularly important effect in disrupting generational continuity in the transfer of traditions and values,” and that it helped produce “a mass culture, driven by profiteers who exploit the hunger for vulgarity, pornography, and even barbarism.” By 2005, Postman's son Andrew noted that entertainment had broadened considerably to include the Internet, cell phones, and iPods.  By 2018, there was further broadening, and historian Jill Lepore wrote that “blogging, posting, and tweeting, artifacts of a new culture of narcissism,” became commonplace. Social media “exacerbated the political isolation of ordinary Americans while strengthening polarization on both the left and the right. . . . The ties to timeless truths that held the nation together, faded to ethereal invisibility.”

 

How appropriate then that in 2016 we elected a TV celebrity (on The Apprentice) who, in the words of historian Niall Ferguson, “is the incarnation of the spirit of our age. His tweets–hasty, crude and error-strewn–are just one symptom of a more general decline in civility that social media have encouraged.”

 

If Trump tells us something ugly about ourselves, what does that have to do with the present state of the humanities? In a 2018 essay, “Why Trump's Crassness Matters,” I indicated that “Trump’s crassness and lack of aesthetic appreciation reinforces an unfortunate tendency in our national character—to undervalue beauty.” The Frenchman Alexis de Tocqueville observed this already in the early nineteenth century, noting that we tended to “cultivate the arts which serve to render life easy, in preference to those whose object is to adorn it. . . . [We] will habitually prefer the useful to the beautiful, and . . . . will require that the beautiful should be useful.” 

 

At times, some of our leaders have demonstrated an appreciation of beauty. Historian Douglas Brinkley has written long books on both of our Roosevelt presidents’ appreciation of nature’s beauties, and John Kennedy once said, “I look forward to an America which will not be afraid of grace and beauty, which will protect the beauty of our natural environment, which will preserve the great old American houses and squares and parks of our national past, and which will build handsome and balanced cities for our future.”

 

Unfortunately, however, Donald Trump's philistinism and disrespect for our environment are all too common, as is his lack of aesthetic appreciation—note his constant budget proposals to kill the National Endowments for the Arts and Humanities. Does not the fact that fewer university students are selecting courses in history and the other humanities and arts reflect some of the same reasons we elected our liar-in-chief Donald Trump? We overvalue such things as making money, “getting ahead,” glitz, and celebrity status and undervalue what the humanities and arts emphasize—beauty, truth, and goodness.  

 

A month after Trump's election I wrote that he reflected the “ugly side of American life.” A comparison that didn't occur to me then, but does now, is that our culture is like the popular understanding of a Robert Louis Stevenson character—Dr. Jekyll and his alternative personality, Mr. Hyde. Trump is the diabolical Mr. Hyde of our national personality. We also have a good Dr. Jekyll side represented by such individuals as Carl Sandburg, Dorothy Day, and Martin Luther King, Jr. Like the Jekyll/Hyde multiple personality, the two sides are battling for our soul.  

The Bund was far from Perfect. It still matters to Jewish History

A Bundist demonstration, 1917

 

 

From 25 to 27 September 1897, thirteen activists from various Jewish radical organizations in the Russian Empire met in Vilna (now Vilnius, Lithuania). The date was no accident: it coincided with the Jewish holiday of Yom Kippur, a day when many Jews were traveling to be with family, a day when these activists could travel without arousing the authorities' suspicion. The need for secrecy overrode all other concerns. 

 

At no point were all of the activists in the room at the same time, and no official minutes of the meeting were kept. But the party they founded, the General Jewish Labor Bund, would play a leading role in Jewish politics for the next 50 years. Staunchly secular and socialist, hostile towards Zionism but fiercely committed to the Jewish community, the Bund insisted that if the revolution liberated Jewish workers as workers, but allowed them to suffer continued persecution as Jews, it would not have liberated them. Rather, the party dreamed of a socialist federation of nations, including Jews; autonomous in cultural matters but politically and economically united.

 

The Bund occupies something of a paradoxical place in Jewish history. Judged by its own criteria, the Bund failed. Largely destroyed in the Holocaust (small chapters survive today), the Bund came closest to its goal of Jewish autonomy within a socialist federation of nations in the Soviet Union, an experiment few would consider a successful resolution of the Jewish Question. And yet, despite this inability to realize its goals, the Bund played an outsized role in the lives of Eastern European Jews. When the Jews of Russia and Poland suffered from popular- and state-sponsored antisemitism and economic displacement, the Bund gained the admiration of many—including many of its staunchest opponents—for its role in organizing workers and defending Jewish communities during pogroms. The Bund played a central role in cultural matters too, embracing Yiddish, a language often derided as a folksy jargon, as one worthy of serious discourse. We cannot easily dismiss this party from Jewish history.

 

After slowly fading from Jewish communal consciousness following the Bund's near-destruction in the Holocaust, the Yiddish socialism the Bund espoused is today undergoing a revival of interest, especially among young, left-leaning Jews. In part, this is spurred on by a declining interest in Zionism, itself the result of Zionism's conflation, rightly or wrongly, with the politics of Benjamin Netanyahu, politics opposed by many American Jews. Combined with the rising antisemitism on both the right and the left in the US that has robbed American Jews of a domestic political home, the Bund has emerged as an important symbol. Articles about the Bund specifically and Yiddish socialism in general have appeared in the New York Times, the New York Review of Books, the Jewish Daily Forward, Jacobin, and elsewhere, with many of these going viral. Jewish Currents, a left-wing Jewish magazine founded in 1946, has successfully relaunched in pursuit of younger Jews. Organizations such as The Jewish Worker, Jewdas, and Jewish Solidarity have joined with Jewish Currents in claiming the Bund's legacy while growing the conversation about the Bund on Facebook and Twitter. Coinciding with a moment when campaigns by Elizabeth Warren, Bernie Sanders, Alexandria Ocasio-Cortez, and others have brought socialist ideas back into the mainstream, young Jews have found the idea of a proudly Jewish form of progressive politics that embraces cultural specificity while rejecting particularism attractive indeed.

 

Ironically, the Bund's history of failure only adds to its mystique. Virtue is easy when one lacks the power to act, and the Bund was powerless for most of its existence. Its program existed only in the world of “what could be.” While the Zionists, like so many national liberation movements before and after, disappointed in power, the Bund's dreams, perpetually deferred, lived on as potent symbols. The party is easily reimagined as a kinder, purer alternative for Jewish politics, representing everything Zionism was not; where the latter was masculine and national, the Bund is imagined as egalitarian and cosmopolitan in spirit. 

 

However, the lionization of the Bund depends in large part on the Bund's historical powerlessness. This is problematic. Hannah Arendt once noted that beauty and humanity are luxuries afforded only the oppressed, luxuries that “have never survived the hour of liberation by even five minutes.” The Bund did not prove Arendt wrong. The Bund did experience one moment of power, during the Russian Revolution. Despite claims to the contrary by many antisemites, Bundists did not initiate the Red Terror (1917-1922). They did, however, participate. Spurned at the ballot box by the Jewish masses in favor of its Zionist archrivals, the Bund swiftly learned the value of being able to arrest its rivals on political charges. Dissenting party members suffered as well. Sara Foks, a seamstress from Kiev and one-time rising star of the Bund in Ukraine who opposed Soviet rule, was arrested and interrogated repeatedly by one-time comrades now wearing the uniform of the Cheka until, on July 24, 1919, she jumped off a bridge into the Dnieper River. Others were simply executed.

 

None of this is to say that Bundists were evil, but that they were human. Like all movements, the Bund reflected the environment from which it emerged, and late Imperial Russia was as harsh an environment as one can imagine. Its actions were driven by a desperate conviction that the future of the Jewish people depended on the successful realization of its program. This was at a time when the Jewish future was very much in doubt. The debate as to whether the Bundists were angels or demons misses the point: they were human, a status they had to fight time and time again to defend. 

 

The importance of the Bund is not in whether it succeeded or failed, or in whether it provided a kinder path for Jewish politics than Zionism. What does matter is what the Bund represented during the half-century it contended as a major force in Jewish politics: leading strikes, organizing defense against pogroms, and advancing new ideas in Jewish politics. The Bund embodied the aspirations and identity of millions of Jews for five decades and provided serious answers to the questions Jews faced then and now—even its mistakes are valuable lessons, warnings to good people from across the political spectrum who are convinced with absolute certainty that they are right. 

 

Moreover, the Bund represents a model for diaspora existence that should prove inspiring to Jewish communities around the world, pioneering the idea of meaningful Jewish existence beyond Zion. It offered a political language deeply committed to the Jewish community and equally uncompromising in its commitment to the values of freedom, justice, and societal fairness. It is in this legacy that the Bund remains essential for Jews today, a legacy at once more difficult and more helpful than the callous erasure of our past or a rose-tinted nostalgia for lost causes. 

A Brief History of the Theory Trump and Barr Use to Resist Congressional Oversight

 

The unitary theory of the presidency may be reaching its logical conclusion under President Donald J. Trump. That theory, which is referred to as the unitary executive, holds that presidents have broad, close to unlimited, powers over the executive branch. At its extreme, the theory holds that the president cannot be checked “by Congress or the Courts, especially in critical realms of authority,” as John P. MacKenzie wrote in his book Absolute Power

 

The Unitary Executive, as put forward by Attorney General Barr, holds that presidential power over executive branch functions can only be limited by the voters at the next election, or by Congress through its impeachment power. This was essentially the position Barr took in his June 8, 2018 memo to the Justice Department. “Thus, under the Framer’s plan, the determination whether the President is making decisions based on ‘improper’ motives or whether he is ‘faithfully’ discharging his responsibilities is left to the People, through the election process, and the Congress, through the Impeachment process,” Barr wrote. Although Barr does not say it, a president who acted in an improper or faithless way, but who is reelected or who escapes impeachment, could indeed be above the law. Is this really what the Framers intended?

 

It is first important to recognize that the words “unitary executive” do not appear anywhere in the Constitution, although supporters of the theory claim to be originalists. The first known use of the term occurred during the Reagan Administration, when Attorney General Meese first put the theory forward. It was later used to justify much of President George W. Bush’s War on Terror, including extreme measures like torture in the post 9/11 world. Yet even Assistant Attorney General John Yoo, who advanced the theory during the Bush years by writing the infamous memo enabling the torture of terrorists, recently said in an interview with NPR that “the Constitution grants him [the president] a reservoir of executive power that’s not specifically set out in the Constitution.”

 

What Article II of the Constitution does provide is a broad statement that “the executive Power shall be vested in a President of the United States of America.” Alexander Hamilton, perhaps the foremost defender of presidential power, wrote in Federalist No. 70 that “energy in the executive is a leading character in the definition of good government.” Hamilton in part equated energy with unity and believed the presidency should be occupied by one person who could act decisively. The Constitutional Convention, which met in Philadelphia over the summer of 1787 and in which Hamilton had participated, debated and then rejected an executive council. But it was not a decision that was reached lightly, and there were numerous members of the Convention who feared a single executive could begin to resemble the British monarch. 

 

Those who feared a strong executive were influenced by the experiences of the colonists in the 1760s and 1770s during the buildup to the eventual break with Great Britain. During that time, royal governors, appointed by the King, had often dissolved local colonial assemblies when they disagreed with their decisions and regularly vetoed bills. The opponents of a strong executive now feared the return to monarchy, which they had fought to overturn during the Revolutionary War. The concerns they held, which focused largely on the concentration of power in the hands of one individual, had led to the weakening of executive power at the state level in the constitutions approved immediately following the Declaration of Independence. 

 

Yet the lack of a strong executive had led to numerous problems, both during the Revolutionary War and during the years the new republic was governed by the Articles of Confederation. The Convention finally settled on a single executive, but that decision was affected by the presence of Washington at the Convention.  Franklin, who opposed a single executive and preferred some form of an executive council, seemed to allude to this when he said, “The first man put at the helm will be a good one. Nobody knows what sort may come afterwards.” Pierce Butler of South Carolina wrote in a 1788 letter that “many of the members cast their eyes towards General Washington as President and shaped their ideas of the powers to be given to a President by their opinions of his virtue.” It was clear that most members of the Convention, although concerned about placing too much power in the hands of any one man, were willing to place much more power in the new office of president because of their great respect for Washington. One historian has argued: “had Washington been absent, it is entirely possible that the framers of the Constitution would have created a multiple executive,” or at least have created an office that the legislature would select.

 

We need a balanced approach to our governmental institutions, just as the Framers intended. An energetic head of state is certainly part of this formula. As the political scientist Judith Best has observed, “the ship of state cannot do without the pilot who sets the course, who knows where the shoals and reefs lie, and who can direct all hands.” What we do not need is an Imperial President, in Arthur Schlesinger Jr.'s words. Presidential overreach is especially dangerous when the ship of state is being guided by a man who lacks Washington's sense of virtue. Even Hamilton feared an overly powerful executive and thought “the executive power is more easily confined when it is one” since it is easier to find misconduct when one person bears responsibility for the office of the presidency. 

 

It is not unusual that the Congress and the President sometimes butt heads. All presidents chafe at oversight by the legislative branch, which can sometimes be overbearing. Madison fully expected this, writing in Federalist No. 51 that “ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place.” Out of these conflicts each branch would, it was hoped, remain within its orbit. 

 

Yet we have also learned that the branches of government must find ways to work together with a certain degree of mutual forbearance. A good example occurred early in our history during the Jefferson Administration. During the treason trial of Aaron Burr, Chief Justice John Marshall had a subpoena served on President Jefferson to produce documents. “The English principle that the King could do no wrong, Marshall said, did not apply to the United States where the President…was subject to the same law as every other American,” as Schlesinger has written. But Marshall did not fully press his authority, and Jefferson was not required to appear in court. Jefferson's view was that Marshall wanted him to “abandon superior duties” to inferior ones. “Both men were surely correct,” according to Schlesinger, and in the future courts would try to find that balance between enforcing the law equally upon everyone while recognizing the official duties a president must fulfill. 

 

The concept of Congressional oversight over the executive branch is a long-established precedent in the United States, a practice that traces back to our British roots. As early as 1792, the House established a special committee to investigate certain executive branch actions, and Madison and four members of the Constitutional Convention voted for the inquiry, indicating they thought this was a core function of the Congress. In a 1927 Supreme Court decision, the Court found that “the power of the Congress to conduct investigations is inherent in the legislative process [and] that power is broad.” It has often been the Supreme Court that has required presidents who overstep their bounds to comply with Congressional mandates. When Richard Nixon refused to turn over his tapes during the Watergate crisis, the Supreme Court ordered him to do so, leading to his eventual resignation from office. 

 

The Supreme Court has in fact ruled twice on the unitary executive theory, and both times rejected the concept. In Morrison v. Olson, decided in 1988, the Court majority held that the independent counsel statute did not violate the separation of powers. Justice Scalia, alone among the justices, issued a scathing dissent largely along the lines of the theory of the unitary executive. “Morrison shattered the claim that the vesting of ‘the executive power’ in a president under Article II of the Constitution created a hermetic unit free from the checks and balances apart from the community,” MacKenzie wrote in Absolute Power. In 2006, the Supreme Court again issued a stinging rebuke to executive overreach in Hamdan v. Rumsfeld, a case that dealt with the use of military commissions to try terrorists at Guantanamo Bay. As Justice Breyer wrote in concurrence, “The Court’s conclusion ultimately rests upon a single ground: Congress has not issued the Executive a ‘blank check’ to create military commissions,” and the Court told the Bush Administration that it should seek Congressional approval, which it ultimately received.

 

Not every adherent to the unitary executive theory accepts that the president has absolute power. Steven Calabresi, a major supporter of the theory, has written that “there are some people who believe that the President has the prerogative powers of King George III in foreign and domestic policy,” but that he does not “fall into that category.” Still, others have invoked the theory, or at least the concept, of unfettered presidential power. Richard Nixon, whose presidency predated the use of the term, once told David Frost in the aftermath of Watergate that “when the President does it that means it’s not illegal.” Dick Cheney, while a member of the House in 1987, was even more blunt when he dissented from the majority report on the Iran-Contra affair: “The Chief Executive will on occasion feel duty bound to assert monarchical notions of prerogative that will permit him to exceed the laws.” 

 

What is so shocking today is President Trump’s absolute refusal to comply with Congressional requests for information and testimony from some of his top aides regarding the recently released Mueller Report. One must wonder what advice he is receiving from his Attorney General, and whether Barr’s support of the unitary executive affects such advice. Lawyers for Donald Trump seem to adhere to the more extreme version of the unitary executive theory. In a letter dated May 15, 2019 to Chairman Jerrold Nadler of the House Judiciary Committee, Trump’s legal counsel questioned whether the Committee’s inquiry was designed to “further a legitimate legislative purpose” or was designed to harass and embarrass “political opponents.” Nadler’s response went to the heart of the matter: first the Justice Department said it “cannot indict” a sitting president, and “now it adds the extreme claim that Congress cannot act either…this flies in the face of the American idea that no one is above the law, and I reject it.” It also is inconsistent with the Mueller Report, which found that “Congress may apply the obstruction laws to the President’s corrupt exercise of powers” since it “accords with our constitutional system of checks and balances and the principle that no person is above the law.” In the meantime, lower courts have begun to act, requiring that the President comply with certain demands for information.    

 

Part of the tragedy of recent events is that William Barr came into the job of Attorney General with a solid reputation. It now appears to many of us that he has decided to protect Donald Trump at all costs, and not the office of the presidency, as he claims. The unitary executive theory which Barr supports is a dangerous doctrine when applied in the most extreme manner. Now it has been put at the service of a man with clear autocratic tendencies who knows no limits and respects no norms, a man who wants to use the power of the presidency to punish his enemies. If Barr really wants to save the presidency, he might start by rethinking his support for unlimited presidential power under the guise of the unitary executive. Otherwise he may leave the House of Representatives with little choice but to open an impeachment inquiry in order to do its job. But then perhaps that is what his boss really wants.  

The Occupation of The Atlantic Mind

 

 

In the latest issue of The Atlantic (May 14, 2019), it should come as no shock that Benny Morris, an Israeli historian turned propagandist, attacks Rep. Rashida Tlaib (D-MI), the first Palestinian-American woman in Congress and a Muslim. Morris’s implacable hostility to Arabs—as Atlantic editor-in-chief Jeffrey Goldberg once put it--“sometimes leads him to inflammatory conclusions,” including Morris’s lament that Israel’s founder David Ben-Gurion failed to carry out a “full expulsion—rather than a partial one,” which could have “stabilized the State of Israel for generations.” The professor’s article is entitled “Rashida Tlaib Has Her History Wrong.” Morris proceeds to show that he is indeed an expert at getting the history wrong.

 

Morris condemns Tlaib for pointing out that European Jews escaped anti-Semitism by occupying her family’s homeland. He continues with a hoary account ascribing to Palestinians “direct” responsibility for the Nazi genocide, citing the anti-Semitism of the Islamic leader of the era and the efforts to impede Jewish migration into Palestine. Despite local opposition, thousands of European Jews flooded into Palestine in the 1930s, precipitating the 1936-39 Arab revolt, which Morris blames on the Palestinians rather than the European migrants.

 

In Morris’s unapologetic and at least borderline racist reading of the past, “most Palestinians still hope for Israel’s disappearance”—this offered with no supporting evidence—whereas “the Zionist side over the decades has repeatedly agreed to a compromise based on partitioning Palestine into two states” only to have “the Arab side” reject all proposals. So, there you have it—peace-loving Israel always making generous offers and the Allah-worshipping fanatics always turning them down in deference to their murderous plots to destroy the Jewish state. History made simple--and loaded with Zionist apologetics.

 

For the record, it is a historical fact that since the June 1967 war Israel has repeatedly rejected opportunities to trade land for peace and has instead pursued a colonial domination of the West Bank and the Golan Heights, as well as of Egyptian territory and the Gaza Strip, the territories that it did belatedly relinquish. Until the 1990s Israel vigorously opposed any discussion of, and refused to negotiate toward, the creation of any sort of “Palestinian entity.”

 

Morris understands that an effective propagandist must offer a counter-argument or two, if only deftly to dismiss them. Thus, he allows, it is true that since 1967 “the Israeli side has oppressed the Palestinian inhabitants and denied them various civil rights,” but sad to say, “such is the nature of military occupation.” (Imagine Morris, or anyone else, writing, “Jews were beaten in the streets and had their shops closed, but such is the nature of anti-Semitism.”)

 

What has happened in Palestine is not a “military occupation,” it is rather a settler colonization, one that as pertains to the West Bank and East Jerusalem is illegal and thus illegitimate, as well as being an ongoing and highly destabilizing human rights atrocity. Jewish settlements are scarcely mentioned by Morris, for the obvious reason that they have been illegal and had already rendered a two-state solution unworkable by 2000, the time of the mythical “generous offer” at Camp David, which Morris the propagandist resurrects as the best example of unregenerate Palestinian hostility to peace. For the record, a scholarly consensus holds that the offer of a bisected, non-contiguous state replete with Jewish-only roads, checkpoints, and ultimate Israeli control of state security represented no real opportunity for a viable independent Palestinian state, let alone constituting a “generous offer.”

 

Sadly, at one time, as Norman Finkelstein has pointed out on several occasions, Benny Morris was an accomplished historian and in fact played a key role in a much-needed post-Zionist scholarship in Israel. His research revealed that the ethnic cleansing of 1948 had been deliberate. Only later did he become a cheerleader for it. Unlike Ilan Pappe, Avi Shlaim, Nurit Peled, and many other Israeli historians who have their integrity intact, Morris, as Finkelstein accurately charges, became a court historian--a propagandist for the State of Israel.

 

The problem isn’t just Morris, however—it’s the Atlantic too. Founded as the Atlantic Monthly in Boston in 1857, The Atlantic is a venerable American publication with a distinguished record of literary and cultural criticism and political reportage. Today the magazine is distinguished by the intensity of its Zionist distortions of the past and present of the Palestine conflict.

 

The Atlantic’s palpable pro-Israeli bias should come as no surprise as its editor-in-chief since 2016 is Jeffrey Goldberg, a citizen of both Israel, which he served as an IDF prison guard, and the United States. Goldberg is a staunch Zionist and is quick to equate criticism of the Israel lobby with anti-Semitism, as he did in a notorious review in The New Republic (October 8, 2007) in which he directly linked the book by the distinguished political scientists John Mearsheimer and Stephen Walt, The Israel Lobby and U.S. Foreign Policy (2006), with Osama bin Laden’s brand of virulent anti-Semitism.

 

The brilliant 2016 documentary film “The Occupation of the American Mind” identifies just such an occupation at the root of misperceptions, imbalanced reporting, and outright disinformation on the Israel-Palestine issue. The cases of The Atlantic and of Benny Morris remind us of the pernicious and monolithic nature of Zionist discourse and of this ongoing “occupation.” It places the noble professions of journalism and history in the service of a crude propaganda regime that seeks to perpetuate the occupation of the American mind.

What I’m Reading: An Interview With Historian Mark Weisenmiller

 

 

Born in Pittsburgh in 1963 (four days after the assassination of President John F. Kennedy), Mark Weisenmiller graduated from the Pennsylvania State University in 1985. He is an author-historian-reporter. Previous employers include United Press International (UPI); Deutsche Presse Agentur (DPA); Agence France Presse (AFP); Inter Press Service (IPS); the international news wire service, based in Beijing, China, known as the Xinhua News Agency (XNA), and The Economist.

 

Regarding his history-themed writing, he has had such articles published in the Canada-based “History Magazine”; the London, England-based “History Today”; “America In WWII”--and many articles for History News Network. He has written articles about the following, which will be published in future issues of “History Magazine”: a profile of Ivan The Terrible; a report on the Gang of Four Trial in China in 1980; a story about the famous 1967 U.S. Supreme Court case of “Loving v. Virginia,” and a story about the Grand Ole Opry.

 

When D-Day, 75th Anniversary: A Millennial’s Guide (to which he contributed two chapters) is published in 2019, it will be the fifth non-fiction book that he has either contributed to or written solely. These books have been about, respectively, ice hockey; capital punishment; a famous American newscaster, and the cultures, current events, leaders, news, and politics of 15 North African, Middle Eastern, and Southern European countries.

 

Divorced, he is the father of one son and one daughter. 

 

 

What books are you reading now?

 

Long has it been a policy of mine, in my work, to read books that do not directly relate to a book or story or project that I currently am working on. I find that, once I read said stuff, and then go back to the book or story or project that I am working on, this gives me a fresh perspective. In other words, my six senses (the five senses, plus kinesiology--i.e., the study of movement) become more alert and super-sensitive. Also, I am one of those people that is usually reading three or four books at a time. To directly answer your question: John Gunther’s Inside Africa; Robert K. Massie’s great Nicholas and Alexandra; Bob Gibson’s From Ghetto To Glory, and James Mustich’s 1,000 Books To Read Before You Die: A Life-Changing List.

 

What is your favorite history book?

Impossible to answer; rather, let me answer this question by way of the following: technically, I am a reading prodigy. When I was six, I was reading at a level common for fourth graders. So, I was reading non-fiction and history books at an unusually early age. The first history book of merit that really made an impression on me was Barbara W. Tuchman’s The Guns of August. Her sometimes slog-like prose is that of an academic historian but then again she was an academic historian. For some readers, the pace of her prose is as slow as molasses in Massachusetts in March. I have had friends who told me that they have tried to read a book of Tuchman’s but simply could not finish it. I am not a bibliophile or historical snob; I understand their viewpoints. However, regarding The Guns of August, when she gets her narrative motors running, in my estimation no history writer can keep up with her pace. 

Why did you choose history as your career?

From my mid-teens to age 50--that is, for 35 years--I worked for international news wire services. It was great fun but as I grew older I noticed at the end of the work day I didn’t have as much pleasure doing it as I once did. Maybe it was simply the fact that I did the same job--albeit, for different news wire services--for more than one third of a century. I loved working for international news wire services and, under certain conditions, would go back to it. Even when I was doing this news-wire service work, I also wrote innumerable stories for history magazines and websites. This fact, coupled with the fact that I have had a lifelong interest in world history, led me to what I am doing now. I don’t feel that I ran away from international news wire service work as much as I ran towards writing history articles, books, book reviews, etc.

 

What qualities do you need to be a historian?

Perseverance and a strong set of eyes; be prepared to do more reading than can be imagined. Actually, to my way of thinking, reporters becoming historians is a natural progression, and people who work in either profession, besides sharing those two traits, also share the following: sometimes being intractable; possessing a mental and physical toughness; having large vocabularies; being friendly and gregarious, and finally, being open-minded and non-judgmental. All of this helped me write the two chapters that I contributed to D-Day, 75th Anniversary: A Millennial’s Guide.

 

 

Who was your favorite history teacher?

Professor Sidney Elkin, who was my instructor for the Political Science class when I was attending the Pennsylvania State University. Again, to my way of thinking, journalism, history, and political science--not always, but often--are interchangeable. 

 

What is your most memorable or rewarding teaching experience?

 

The most rewarding experience is, in retrospect, also the most memorable and it came via Professor Elkin. In his class, students had to pair off and visit poor people in their homes to conduct surveys asking them all sorts of--what I thought then and still very much think now--personal questions about their lowly economic status. I forget what we did with all the information we collected from the surveys; probably collated it. These people, many of whom were African-American, lived in squalid destitution in Aliquippa, Pa. This was in the days when Western Pennsylvania was the home base for U.S. steel manufacturing; now it is not. Many of these people were embarrassed by their economic status; I was quick to note that not many of them made eye contact with me when I asked them the survey questions. What all of this taught me was, and is, that, in regard to history and journalism, statistics dealing with human beings are not just cold, sterile numbers. Each digit is a person with dreams, desires, emotions, and feelings.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I have owned, or do own, literally thousands of history books; many of them I’ve given away to charities, friends, or libraries. I have a first edition of Lowell Thomas’ With Lawrence in Arabia (“Lawrence of Arabia” is one of my all-time favorite films) and a first edition of Gunther’s Inside Europe. Regarding artifacts, a main hobby is collecting American presidential campaign buttons. Some I bought, but most I got from when I covered Presidential election campaigns. Interestingly, both Thomas and Gunther were reporters before focusing more on history writing rather than hard-news reporting. Maybe, subconsciously, this is where I got the idea of becoming an author-historian-reporter.

 

What have you found most rewarding and most frustrating about your career?

 

Rewarding: simply the fact that I was, and am, given an opportunity to work in what I was trained to do. Frustrating: I am not sure that the following is a point of frustration or simply a phenomenon that goes with being a writer of history articles and books (it does not apply to my journalism career). To wit: this can be a very, very lonely life. I read three to four times as much, in toto, as the average person, simply because that is required to do this job well. Consequently, I am by myself for very, very long stretches of time. To anyone considering going into this field, I cannot stress the following enough: make sure to make plenty of “down time” to share with family and friends. Also, be prepared to work holiday seasons as well. Deadlines for author-historian-reporters simply do not recognize such things.

 

How has the study of history changed in the course of your career?

 

Yet another difficult question. My immediate response is that due to the average attention span of people decreasing (if we are to believe sociologists’ studies), I am noticing that the study of history--and particularly the writing about it--may be becoming more driven by interest in personalities rather than events. I consider this a positive; in general, it puts things into human perspective. Most people would rather read about people than events and I tend to agree with them.

 

What is your favorite history-related saying? Have you come up with your own?

 

Being a decades-long international and domestic political reporter, I have, and know of, dozens of stories so I can easily provide one. This is listed on my Facebook profile and I will also repeat it here. One time during the 1964 American Presidential election, Senator Barry Goldwater, the Republican Party’s Presidential candidate, said something moronic in criticizing a social program by the incumbent, President Lyndon B. Johnson. It was in itself unusual that Goldwater said a moronic thing because I once interviewed him and, although bull-headed, he was not at all moronic. Anyway, the following day, a reporter repeated what Goldwater said--never mind, for our purposes here, what it was--to LBJ while the President was sitting with his feet propped up before him behind his work desk in the Oval Office. LBJ rolled his eyes, shook his head from side to side, and said in that unique Texan drawl of his, “Any jackass can kick down a barn but it takes a carpenter to build one.”

 

What are you doing next?

Besides continuing to write articles for history magazines and websites, I currently am at work in the multiple stages of self-editing my e-book of reportage about China. Afterwards, the e-book will be professionally edited. This will be the second in a series of books that I plan to write chronicling the cultures, histories, leaders, news, and politics of the world’s countries and regions. I am not yet sure what country will be the focus of Book Three in the series; maybe Brazil, maybe Canada, maybe some other country or region. These books will be somewhat similar to Gunther’s so-called “Inside” books. One thing that I have drifted away from, and did much of in the past, and which I very much miss doing, is writing profiles of the leaders and rulers of the world’s countries and regions. I have not yet been able to find the Internet, media, or press outlet for which these profiles would be a good fit. Attention, editors of said outlets, please be aware that, for commission, I am available for such work. In short, I have already accomplished much, but I have so much more to do. 

What Should Historians Do About the Mueller Report?

 

 

The Mueller Report, officially entitled Report on the Investigation into Russian Interference in the 2016 Presidential Election, is an extraordinary document: systematic, detailed (472 pages), and thoroughly researched and documented (2,381 footnotes). News media pundits have summarized its high points, congressional Democrats say they need more information, and President Trump has asserted that it proves there was no collusion with the Russians and no obstruction of justice.

But the report itself deserves wider reading and more scrutiny and analysis than it has received. So far, for instance, no member of Congress seems to have announced that he or she has read the entire report. The media have moved on to the confrontation between the president and the House of Representatives over who can testify in House hearings.  The report itself has moved into the background too soon. Historians can help encourage people to read, reflect on, and perhaps take action on, one of the most important documents in American history. We need to be more emphatic in asserting history's role in bringing clarity to complex public issues. 

A few strategies:

Demonstrate how to draw concrete conclusions from the evidence in the report.  The Mueller report is in essence a detailed assemblage and presentation of evidence, somewhat comparable to a grand jury report. Historians can guide the public by analyzing and weighing the information in the report, analogous to what historians routinely do with evidence from the past, and reach conclusions. Historians help people understand cause-and-effect relationships, analyze and weigh evidence, and discern underlying patterns in complexity. They are adept at presentations that are fair, objective, and fact-based. Those skills can be put to work on the Mueller report both to enlighten the public and to demonstrate how people can use the report themselves. Presentations at public meetings, op-ed pieces for newspapers, and broadcasts via social media are all potential forums.  For instance, there is a great deal of information about Trump's Russian ties and why and how the Russians supported his campaign. There are dozens of pages with evidence about the president's determined attempts to sidetrack, slow down, discredit, or stop the investigation. Appendix C, 21 pages long, presents the president's written response to the Special Counsel's questions (Trump refused to be interviewed), with more than 30 uses of the phrase "does not recall" or other phrases that the report calls "incomplete or imprecise."  This evidence, appropriately analyzed, can lead to informed conclusions about Russian meddling, the Trump campaign's actions, and the role and responsibility of the president himself.

Add a historical dimension. Special Counsel Mueller's charge was to examine the 2016 election. He did not look back into history, i.e., previous attempts by foreign nations to meddle in U.S. presidential elections. Historians know that 2016 was not the first time.  In 1960, the Russians secretly approached Adlai Stevenson with an offer to support his campaign. (Stevenson rebuffed them and decided not to seek the Democratic nomination that year.) In 1888 the British ambassador indicated a preference for incumbent Grover Cleveland, running for re-election. (The move backfired, alienating voters and probably costing Cleveland the election.) If the Russians tried in 1960, and got caught in 2016, it seems like a good bet that they may have tried in some of the 13 elections in between.

Compare the report to previous high-stakes reports. Historians can provide an important perspective by comparing the Mueller report to previous reports on critical national issues. In particular, they can remind the public why the investigations were undertaken, what the reports said, their strengths and weaknesses, public reaction, and what (if any) action they engendered. Examples might include the 9-11 Commission Report (2004), which analyzed intelligence failures before the attacks; the reports (6 volumes, 1976) of the Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, which led to reform of federal intelligence agencies; the reports of the National Commission on the Causes and Prevention of Violence, particularly Violence in America: Historical and Comparative Perspectives (1969), which emphasized the lack of employment and educational opportunities in inner-city neighborhoods but did not lead to concrete action, and those issues continue today; the President’s Commission on National Goals’ Goals for Americans (1960), which emphasized individual initiative, called for the elimination of barriers to equal justice and opportunity, and engendered considerable public discussion but no solid policy initiatives; the Congressional Joint Committee on the Investigation of the Pearl Harbor Attack’s report (1946), which exonerated President Franklin Roosevelt from blame for the surprise 1941 attack and led to policy changes, including the National Security Act and the creation of the Department of Defense in 1947; and the House of Representatives’ investigation and report (1792) on the defeat of General Arthur St. Clair by Indians in the Northwest Territory, which occasioned the first assertion of what we now call executive privilege, by President George Washington, in response to the House’s request for documents on the military campaign.

Use the report to call attention to the relevance of the past to the present. The Mueller report and the issues it probes are good examples of where historical insight would help. This points toward the need for more of what might be called "historical mindedness" in public debates of critical issues these days. “So long as the past and the present are outside one another, knowledge of the past is not of much use in the problems of the present,” wrote British philosopher and historian Robin G. Collingwood in his 1946 book The Idea of History. “But suppose the past lives on in the present; suppose, though encapsulated in it, and at first sight hidden beneath the present’s contradictory and more prominent features, it is still alive and active; then the historian may very well be related to the non-historian as the trained woodsman is to the ignorant traveler. ‘Nothing here but trees and grass’, thinks the traveler, and marches on. ‘Look’, says the woodsman, ‘there is a tiger in that grass.’” There is more than one "tiger" in the Mueller report.

Monitor preservation of the investigation's records.  Mueller's two-year investigation amassed a great volume of interviews, court filings, testimony, FBI investigative material, staff reports, and other records. These are official records of the Justice Department.  Historians (and others) need to call attention to the need for them to remain intact and at some point be transferred to the National Archives and Records Administration. The files should be opened to researchers in a timely fashion and with minimal restrictions. They can be expected to reveal a fuller picture of Russian interference, presidential obstruction, and the impact of both on the 2016 election. 

UPDATED Mueller Investigation: What Historians Are Saying  

 

 

 

Roundup Top 10!  

 

Adam Cohen: Clarence Thomas Knows Nothing of My Work

by Adam Cohen

The justice used my book to tie abortion to eugenics. But his rendition of the history is incorrect.

 

In our tumultuous times, history offers hope

by Katrina vanden Heuvel

In his book, my father ceaselessly reminds us that hard work and idealism can create change.

 

 

Billionaires can't fix college: Here's the real crisis in higher education

by Jim Sleeper

In this conversation between historian Matthew Frye Jacobson and Professor Jim Sleeper, they discuss how to reclaim college from market ideology.

 

 

Tony Horwitz’s Greatest Book, Confederates in the Attic, Seems Even More Crucial Today

by Rebecca Onion

Confederates in the Attic is a gift to teachers of American history. It’s wryly funny but sneakily profound.

 

 

Socialists Don’t Know History

by Joseph Epstein

Young people don’t remember the Soviet nightmare. But what’s Sanders’s excuse?

 

 

The Indian Law That Helps Build Walls

by Maggie Blackhawk

The Supreme Court’s legal abuse of Native Americans set the stage for America’s poor treatment of many of its vulnerable populations.

 

 

Open Forum: Are public schools ‘inclusive’? Not for those who oppose abortion

by Jonathan Zimmerman

Children need to learn how to discuss abortion — and other controversial political questions — in a fair and mutually respectful way. And that won’t happen if the adults in the room tell them the right answers, right off the bat.

 

 

Why we’re letting Americans vote, marry and drink far too young

by Holly N.S. White

Age is not a perfect qualifier of ability or maturity.

 

 

The “Forever Wars” Enshrined

by Andrew J. Bacevich

The memorial to American soldiers who were sent into the wars in the Middle East and died is essentially hidden away in a small Midwestern town, which tells you what you need to know about the value Americans actually place on those wars.


 

My Zionism is Personal and Complicated

by Ralph Seliger

Today’s an increasingly exasperating time for progressives who care about Israel’s future.

Good Intentions, But Still A Long Way To Go Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

 

 

One of the most momentous changes in my lifetime has been the broad social recognition that discrimination against women and people of color in American society is wrong. The idea that women and African Americans were deservedly inferior was a fundamental belief in Western society for so long that the seemingly sudden rejection of discrimination made the 1960s movements for equality seem revolutionary.

 

The revolution didn’t happen. Instead, gradual shifts in gender and racial relations have moved our society toward more equality in fits and starts over the past 50 years. Powerful resistance to change has slowed down the movement to equality at every point.

 

But lately a basic change in the arguments of the resistance demonstrates at least some ideological success. While the initial opposition to equality claimed that inequality was natural and God-given, those who oppose further change now often say that equality has been achieved, or even that the balance has shifted so far that white men are now at a disadvantage.

 

Daily life proves otherwise. The city of Boston is currently in an uproar over one of the innumerable daily incidents that show how persistent prejudice resists good intentions. The premier Boston art museum, the Museum of Fine Arts, long ago recognized that urban high culture tended to serve mainly the interests of white people. To counteract the legacy of racism, the MFA produces extensive programs to highlight the cultural contributions of black artists and to attract a diverse community. For 7 years, the MFA has celebrated Juneteenth, the oldest national commemoration of the end of slavery. The largest film festival in New England “celebrating film by, for, and about people of color”, the Roxbury International Film Festival, will also be held in June for the 21st year.

 

These laudable initiatives came from the Museum’s leadership. But below the top level, racial resentments have not been eradicated. When a group of black 7th-graders from the Helen Y. Davis Leadership Academy, a local middle school whose students are not white, visited the MFA last week as a reward for good behavior and good grades, they were greeted almost immediately with open expressions of racism. A museum staff member told the children how to behave: “no food, no drink, and no watermelon.” Security guards ostentatiously followed them around. Other museum patrons felt it was necessary to make racist remarks, including “Never mind, there’s fucking black kids in the way.” The MFA apologized, launched a wide investigation into this particular incident, and pledged to keep trying to improve its services to communities of color.

 

Only those who insulate themselves from the daily news would find this incident surprising. The broad social acceptance of the idea that discrimination is wrong has meant that the blatant daily transgressions against equal treatment have been splashed across the national media over and over again. That’s both useful and discouraging.

 

While continued instances of racism often make the news, the persistence of gender inequality is less visible, because it mostly occurs in private spaces. A remarkable recent book shows the stubborn tenacity of male resistance to equality, despite the profession of good intentions by men to relinquish a key privilege: letting women do most of the work of child care. The psychologist Darcy Lockman wrote “All the Rage: Mothers, Fathers, and the Myth of Equal Partnership” after she realized that her husband kept finding ways to avoid participating equally in child care, such as saying he needed to go to the gym after work. She found that equal parenting is mainly a myth.

 

While many men believe they carry equal weight at home, in fact women who work outside of the home still take on two-thirds of child care, a proportion that has not budged over the past 20 years. The time-use studies by the Bureau of Labor Statistics detail what men and women actually do every day. In families with a child where both parents work full-time, women spend 50% more time on “household activities”, 65% more time on “purchasing goods and services”, and 50% more time taking care of children. Men spend their extra time watching TV and in recreation.

 

I don’t get it. Watching TV or going to the gym is more interesting than caring for your child? Changing diapers is too difficult for men to master?

 

Lockman offered a set of interlocking explanations: men have generally been raised to think less about the needs of others; some people believe that women are by nature better suited to caring for children; men are more reluctant to let family responsibilities interfere with work; and women are reluctant to demand that their partners take an equal role. But she ends the book with a more forceful insight: men resist giving up their privilege. Lockman cites the New York Times opinion column by philosophy professor George Yancy entitled “I am sexist” as an example of what most of the men she interviewed would not do: admit their privilege.

 

Elizabeth Cady Stanton wrote to a male cousin in 1855, “Did it ever enter into the mind of man that woman too had an inalienable right to life, liberty, and the pursuit of her individual happiness?”

 

We might broaden this plea to apply to both racism and sexism. Only once those who have enjoyed the privilege of belonging to dominant groups ask themselves whether other people also deserve the same rights will our society get beyond good intentions to equal results.

What the Fugitive Slave Law Can Teach Us About Anti-Abortion Legislation

 

Neo-Confederate apologists have long claimed, with spurious reasoning, that the American Civil War was simply about “states’ rights,” and in one ironic sense they were arguably right. The war was precipitated by a violation of regional sovereignty – of northern states’ rights. Case in point: on the afternoon of June 2, 1854, Bostonians gathered along the harbor’s embankments to watch a ship bound southward. Onboard was a man named Anthony Burns, but for the captains of that ship and for those who owned it he was simply cargo to be returned to his “owners” in Virginia. A formerly enslaved man, now a Baptist minister, Burns had escaped from a Virginia plantation to the freedom which Massachusetts offered, only to find that with the 1850 passage of the Fugitive Slave Act the seemingly limitless and probing powers of the slave states had followed him to Beacon Hill’s environs, and that whether willingly or unwillingly the new law would implicate all of his white neighbors in the enforcement of that slavocracy’s laws. 

 

According to the language of the Fugitive Slave Act, a New Yorker might dislike slavery, but if southern slave catchers did “summon and call to their aid the bystanders” witnessing a kidnapping, then it was required that the New Yorker must aid in that kidnapping. If a Vermonter or Rhode Islander found slavery distasteful, that was of no account if slave catchers were collecting a bounty in Montpelier or Providence, as all “citizens are hereby commanded to aid and assist in the prompt and efficient execution of this law, whenever their services may be required, as aforesaid.” As historian Jill Lepore writes in These Truths: A History of the United States, “Nothing so brutally exposed the fragility of freedom or the rapaciousness of slavery.” 

 

History might not repeat exactly, but there are certain themes that can be detected throughout, and there’s something clarifying in identifying those correspondences. Slavery was, among other things, an issue of violating bodily autonomy; of the state enabling one section of the population to declare to another “You do not have sovereign right over your own body – I do.” If one of the consequences of the Fugitive Slave Act was that it enlisted the unwilling to aid the violator in that violation of another person’s bodily autonomy, then Georgia’s HB 481 bill passed by the state legislature and signed into law by Governor Brian Kemp last week does something similar. Under that draconian law, and allowing few exceptions, a woman who elects to have an abortion can be charged with “murder,” not to mention the possibility of criminal charges against women who’ve suffered a miscarriage. The possibility for such punishment does not end at Georgia’s borders. 

 

Historian Mark Peterson explains in The City-State of Boston: The Rise and Fall of an Atlantic Power, 1630-1865, that the Fugitive Slave Act “demanded the active cooperation of every US citizen, on pain of severe punishment.” Harriet Jacobs, a formerly enslaved woman living in New York, noted that the passage of the law was the “beginning of a reign of terror” to African-Americans living free in the north. While certainly anyone who understands slavery knows that that is the nature of that evil institution, for the vast majority of northerners such an arrangement south of the Ohio River was of little concern to them. With the passage of the 1850 legislation, such willful ignorance could no longer be countenanced, for those who wished no part in slavery now found themselves implicated in its practice. Northerners could no longer pretend that slavery was simply a “regional issue”; the plantations which supposedly stopped at the Mason-Dixon Line had a federally invested power that allowed them to arrest formerly enslaved people not just in Georgia or Alabama, but in Pennsylvania and Massachusetts as well.

 

In a similar manner, both patient and doctor, whether the abortion was performed in New York, Massachusetts, or California, could theoretically be charged with murder under Georgia law if the procedure involved a state resident. Mark Joseph Stern writes in Slate that if a “Georgia resident plans to travel elsewhere to obtain an abortion, she may be charged with conspiracy to commit murder… An individual who helps… or transports her to the clinic, may also be charged with conspiracy.” Antebellum northerners were content to pretend that bondage was simply a southern problem; in a north that may have found slavery unsavory, arrests such as that of Burns had a galvanizing effect. As the activist Amos Adams Lawrence would write, “We went to bed one night old-fashioned, conservative, compromise Union Whigs & waked up stark mad Abolitionists.” 

 

Reactionary rhetoric excels at euphemism, obfuscation, and a cagey manipulation of language which is best described as its own form of “political correctness.” A nineteenth-century advocate for slavery could define a human being like Burns as “property”; today those who advocate forced birthing are the intellectual descendants of the pro-slavery faction, and they perform the inverse rhetorical maneuver: they define a zygote or fetus dependent on the life of the mother as being a “person.” Both rhetorical maneuvers depend on defining things as that which they are not, but the result is the same – to deny the bodily integrity, autonomy, and rights of a segment of the population. Certain obvious declarations must be made – a fetus is not a human being, an abortion is a medical procedure, and no woman should be compelled by the state to carry a pregnancy to term for any reason. 

 

The so-called “Pro-Life” movement, it should also be said, is relatively modern, formed in reaction to Roe v. Wade and cleverly used as a political organizing tactic for the right. Theological arguments against abortion are thin and ahistorical – figures as varied as Thomas Aquinas and Maimonides acknowledged the occasional necessity of the procedure centuries ago – and scientific justifications against the procedure are non-existent. So strained and tortured has the conversation around abortion become that these truths are continually defended by those of us who are pro-choice, but as Rebecca Traister has written in The New Republic, “the conversation around abortion has become… terribly warped.” A consequence of this warped conversation is the promulgation of outright lies, such as Donald Trump’s insistence that infanticide is practiced – a complete and utter fabrication. Centrist Democrats have long ceded the rhetorical ground to the forced birthing movement, and the result has been the authoritarian legislation enacted this past week in not just Georgia, but Ohio, Alabama, and Missouri as well. In 1851, three years before Burns’ arrest, Frederick Douglass said, “I am sick and tired of arguing the slaveholders’ side”; in a similar vein, Traister writes, “Let’s just say it: Women matter more than fetuses do.” 

 

Because what the new legislation in Georgia, Missouri, Ohio, and especially Alabama threatens isn’t just a return to a pre-Roe world, it’s to establish horrifically authoritarian new laws which would serve to redefine a woman’s medical decisions as the exclusive purview of the state, and which furthermore establish women and their bodies as the effective property of the state. While some rank-and-file Republican voters are perhaps motivated by conviction that could be called “pro-life” (even while such sentiments rarely extend to empathy beyond the life of the fetus), let there be no doubt about what legislators hope to accomplish with this and related legislation. In Alabama, exception is made for embryos that are destroyed as part of IVF procedures, with a state senator explaining “The egg in the lab doesn’t apply. It’s not in a woman. She’s not pregnant.” Anti-abortion legislation is not “pro-life,” it’s about one thing – policing women’s decisions, imprisoning women (and their doctors), and disenfranchising women. Conservative pundit Andrew Sullivan, who is personally opposed to abortion but is politically pro-choice, astutely observes that this new legislation “is not about human life. It’s about controlling women’s bodies.”

 

As the Fugitive Slave Act was motivated by an inhuman racism, so are these new laws mandating forced pregnancy defined by hideous misogyny. They situate human beings and their bodily rights in relationship to the state rather than the individual, and they unequally define certain segments of society – whether African-Americans or women – as not fully deserving those rights which are actually the common treasury of humanity. Furthermore, the legislation of the slavocracy and the forced birthers alike implicates us in their noxious and anti-human designs. There are signs of hope, however: this weekend Senator Elizabeth Warren unveiled concrete legislative and policy proposals as part of her presidential campaign which would nationally protect abortion rights. 

 

In the nineteenth century a majority of southerners, including both enslaved African-Americans and poor whites, had no say in the political process. Today, the voters of Georgia are similarly disenfranchised, as Kemp questionably became governor after he supposedly defeated the progressive Democratic candidate Stacey Abrams. Those of us in blue states have no cause to simply demonize some mythic “South”; that’s to play on the terms of the Neo-Confederates who govern in many of those states. During the Civil War the Confederates slurred a multitude of pro-Union southerners as “Scalawags.” Such women and men who supported abolition and reconstruction in the midst of the slavocracy were courageous, and when it comes to reproductive freedom their descendants are still operating in places like Georgia and Alabama. In addition to national organizations like the ACLU and Planned Parenthood, there are groups like Access Reproductive Care – Southeast, Alabama’s Yellowhammer Fund, and Ohio’s Women Have Options, among dozens of other groups. Like Amos Adams Lawrence, if liberals in blue states haven’t woken up already, it’s time we became “stark mad Abolitionists” in the case of reproductive rights, which are women’s rights, which are human rights. 

Clarence Thomas is Wrong: It’s Restrictions on Abortion that Echo America’s Eugenics Past

U.S. eugenics poster advocating for the removal of genetic "defectives" c. 1926

 

In the last few months, five states have enacted harsh new laws that place severe restrictions on abortion and are intended to provoke a legal challenge to overturn Roe v. Wade.  Georgia, Mississippi and Ohio would ban abortion as early as the sixth week of pregnancy, when physicians can detect a “heartbeat” but before many women know they are pregnant.  Alabama would make performing or attempting to perform an abortion a felony unless the mother’s life is at risk—without any exceptions for rape and incest.  This week, in a Supreme Court decision upholding part of Indiana’s abortion law, Justice Clarence Thomas wrote a 20-page concurring opinion that asserts “a State’s compelling interest in preventing abortion from becoming a tool of modern-day eugenics” by drawing on history. Thomas is wrong. It is not women’s right to choose abortion but the new state laws restricting abortion that are an echo of America’s eugenics past. 

 

Between 1907 and 1937, thirty-two states passed  laws permitting state officials to remove the reproductive abilities of individuals designated “feebleminded” or mentally ill in order to improve the human population. By the 1970s, more than 63,000 Americans—60 percent of them women—were sterilized as a result of these laws. In the first decades of legal sterilization, eugenic sterilization was controversial and a number of state laws were struck down in court.  Then in 1927, the US Supreme Court upheld Virginia’s eugenic sterilization law in an 8-1 decision.  Using language astonishing for its cruelty, Justice Oliver Wendell Holmes, Jr. affirmed the state’s power to prevent those who were “manifestly unfit from continuing their kind,” adding that “three generations of imbeciles are enough.”  Buck v. Bell established the constitutionality of state-authorized sterilization for eugenic purposes, and it has never been overturned.  

 

Opponents of abortion rights like to emphasize the eugenic roots of birth control and abortion, especially the role of Margaret Sanger and Planned Parenthood. The “heartbeat” laws do not explicitly mention eugenics, but the Alabama law actually likens abortion to the murder of Jews in the Holocaust and other 20th century genocides. It also compares the defense of the “unborn child” to the anti-slavery and woman suffrage movements, the Nuremberg war crimes trial, and the American civil rights movement.  

 

These comparisons present a serious misunderstanding of American history and eugenics. State sterilization laws generally had bipartisan support and were more varied—and more rooted in local politics and welfare systems-- than abortion opponents suggest.  Despite some eugenicists’ mean-spirited rants about the menace of the unfit, many also described involuntary sterilization as, in the words of California Republican Paul Popenoe, a “protection, not a penalty.”  Sterilization would “protect” the unfit from the burden of childrearing and protect the unborn from being raised by incompetent parents. 

 

Eugenic ideas echo in the anti-abortion movement today.  Like sterilization crusaders in the past, abortion foes see private decisions about reproduction as an urgent matter of state concern. They justify their interventions using dubious science—crude theories of human heredity in the eugenics case, and, to use the Georgia law as an example, inaccurate claims that “modern medical science, not available decades ago” proves that fetuses are living persons who experience pain and should be recognized in law. Moreover, the logic behind abortion restrictions, like eugenic sterilization, is deeply evaluative: some lives have more value than others. Low-income women and women of color will be hardest hit by an abortion ban, and the fetus is accorded more value than a pregnant woman.  

 

State sterilization laws, too, targeted the most disadvantaged members of society. Young women who became pregnant outside of marriage; welfare recipients with several children; people with disabilities or mental illness—as well as immigrants, people of color, and poor rural whites—were at risk of being designated “feebleminded” or “insane” and sterilized under eugenics laws. Survivors of rape, incest, or other types of trauma (like institutionalization) were especially vulnerable. Carrie Buck, whose 1927 sterilization was authorized by the Supreme Court, became pregnant at age seventeen after being raped by a relative of her foster parents. They responded to her ordeal by petitioning a local court to commit her to the Virginia Colony for Epileptics and Feeble-minded. As an added indignity, they also took Carrie’s daughter and raised her as their own. Forty years later, North Carolina sterilized Elaine Riddick Jessie, a fourteen-year-old African American girl whom social workers considered promiscuous and feebleminded because she became pregnant as the result of rape by an older man. Buck’s and Jessie’s stories are just the tip of the iceberg, rare public accounts of shame and suffering that have mostly remained private. Indeed, about one-third of all those for whom the North Carolina Eugenics Board authorized eugenic sterilization were minors – some as young as 11. The vast majority were victims of rape or incest.

 

Memories of eugenic sterilization have focused almost exclusively on race, class, and the American eugenics movement’s connection to Nazi Germany.  Yet these are selective memories that sever forced sterilization from the issue of reproductive rights. In fact, coerced sterilization proliferated when sex education, birth control, and abortion were illegal or inaccessible to women and girls who were poor. The women most likely to become victims of state sterilization programs were those who lacked access to reproductive health care. Tragically, our research shows that some poor women with large families actually sought “eugenic” sterilization because they had no other way of ending unwanted pregnancies. 

 

The animating belief of eugenics—the state should control the reproduction of poor people, immigrants, and women of color—is central to current abortion politics. It is hardly a coincidence that Alabama’s new abortion law permits no exception for rape and incest and imposes harsher penalties on the doctors who perform abortions than on rapists, but the state still refuses to expand access to Medicaid or take steps to bring down the state’s infant mortality rate, the second highest in the nation. The double violation at the core of Alabama’s abortion restrictions—discounting the pain of sexual violence and elevating the fetus over the pregnant woman—perpetuates the dehumanizing logic of eugenics. 

 

Still, it is crucial to remember the significant differences between the current fight over abortion and America’s eugenics past.  State sterilization laws once had bipartisan support from Republicans and Democrats, and no social movement ever arose to contest them.  In contrast, Republican-led efforts to recriminalize abortion are rigidly partisan, and a movement to defend the reproductive and human rights of pregnant women is rising.

Edwardian England’s My Fair Lady is Fairly Wonderful

 

I don’t know where it was in the new Broadway production of the musical My Fair Lady that I decided it had to be the greatest musical of all time. Was it the early song Wouldn’t It Be Loverly, that introduced you to the lower-class world of spritely and lovable cockney flower girl Eliza Doolittle? Was it the rowdy With A Little Bit of Luck, in which you meet Eliza’s boisterous dad? The forever rhyming The Rain in Spain? On the Street Where You Live? I’ve Grown Accustomed to Her Face? Get Me to the Church on Time? The rousing, fabulous oh-my-God spectacular I Could Have Danced All Night?

Was it all of them?

I don’t know, but somewhere amid those unforgettable songs, sensational acting and magnificent sets at the revival of My Fair Lady at New York’s Lincoln Center, I made up my mind. I fell in love with the musical, now celebrating its first anniversary at Lincoln Center, all over again.

The musical, with book and lyrics by Alan Jay Lerner and music by Frederick Loewe, is set in London during Edwardian England (1900 – 1910) and based on George Bernard Shaw’s 1913 play Pygmalion. It is the story of street-wise Eliza, she of the irritating shrill voice and pronounced accent, and her efforts, under the tutelage of the esteemed, sophisticated phonetics professor Henry Higgins, to “become a lady.” Higgins is so sure he can turn raffish Eliza into a proper lady and debut his new creation at the elegant Embassy Ball that he makes wagers on it.

He succeeds. He brings Eliza into his house to live and study, keeps her up until all hours of the night repeating word and phrase pronunciations hundreds of times, buys her dresses and hires her carriages. He brings her to the upper crust races at Ascot, the Embassy Ball and other public places. In the end, he has made the frisky, unkempt flower girl a member of high society, with a gorgeous wide-brimmed Merry Widow hat, expensive gowns and exquisite taste.

At what cost, though? Can the “new” Eliza, the belle of the ball and consort of Queens, ever go back to the streets, flowers bunched up in her hands? Can she ever hang out with her Dad and his raucous drinking buddies again?

Will she marry Higgins, who by the middle of the play is obviously in love with her? Will she eventually run his household full of expensive furniture and hard-working maids? Will her new-found friendship with Colonel Pickering continue?

This revival of My Fair Lady is a stunner. The play has it all. First, the music. Was there ever a more memorable song than I Could Have Danced All Night? A better love song than I’ve Grown Accustomed to Her Face? A better scene stealer, play stealer, history of the theater stealer than the voluptuous, bouncy Get Me to the Church on Time, which sprawls all over the stage at Lincoln Center as the roars of the audience get louder and louder?

The roles of Eliza and Henry Higgins are two of the best written roles in theater history. Eliza is the old street girl who, introduced to the high life, embraces it, but with a desperate yearning for the old gang from the neighborhood. She works hard at becoming a lady and achieves her goal. Henry Higgins is an intellectual, a devoted bachelor and a man who has been successful – all alone—for his entire life. Can he survive the commotion caused in his life by the whirling dervish Eliza? Could anyone?

Laura Benanti is nothing short of dazzling as Eliza and is at her best as she belts out I Could Have Danced All Night. She grows in the role and by the end of the play you never, never think she had been a blustery flower girl. She has a gorgeous voice and is a superb actress. She also plays well in scenes with others, particularly those with Higgins and with his mother. Harry Hadden-Paton is a complicated Higgins. He acts well and sings well, but his strength is the way he looks at Eliza and dotes on her, even if a bit gruffly. He shows her off and is proud of her, but at the same time just does not know what to do with her.  Hadden-Paton has a moment early in the play when he looks at Eliza and keeps moving, with small steps, to see more of her. His uncertainty about the flower girl is the perfect sign of his love, his surprised love, for her. Hadden-Paton and Benanti are a delightful pair.

Other fine performances are from Christian Dante White as Freddie Eynsford Hill, who falls for Eliza; Allan Corduner as Colonel Pickering, Higgins’ assistant in the make-Eliza-a-lady college; and Alexander Gemignani as Eliza’s father, Alfred P. Doolittle, who prances, dances and, in high spirits, boldly struts across the stage, hat in hand. Rosemary Harris is a delight as Higgins’ mom, who chastises Eliza when the girl throws slippers at her son: “I would not have thrown slippers at him; I’d have thrown fire irons.” 

Some critics have written that it is now Eliza’s play, that director Bartlett Sher has made Eliza the centerpiece at last. No. It is still the story of both Higgins and Eliza, an historical pair if there ever was one. What director Sher has done, brilliantly, is highlight all of the characters in the play, large and small, to create a richer and deeper portrait of London just before the first World War. He is aided by the remarkable choreography of Christopher Gattelli.

The play opened on Broadway in 1956 and was an immediate hit. It was turned into an Oscar-winning movie in 1964, starring Rex Harrison as Higgins and Audrey Hepburn as Eliza. It was then, and is now, a rich history lesson for patrons about Edwardian England and its high society and low society, customs and traditions.

The play, and the 1964 movie, produced a perfect re-creation of Ascot race course traditions and dress, right down to the wide-brimmed Merry Widow hats the ladies wore (yes, that’s the scene when prim and proper Eliza, unable to take the slow pace of the horse she bet on any longer, shouts out at the horse in her best high society language, “Move your bloomin’ arse!”).

You learn of the customs and style of the Embassy Balls, with their personal introductions, stilted etiquette, chats with the Queens who turn up, men’s and women’s fashions and music. You also get a terrific lesson in the look of the street, with the grimy flower girls and men, many out of work, who hung out around the theaters, bars and other public buildings. Edwardian clothing at its best is showcased in the play, right down to the fancy off-white suits and hats of the men, who cavort through the night, too, in their black tuxedoes with white silk scarves draped around their necks.

Everyone sees the enormous difference between the first-class folks at the Embassy Ball in 1910 and the scruffy street people in third-class London. The gap between them is three million miles and no song could bridge it - then or now.

These were the heady days just before the start of World War I in England and America, an event that changed everything.

There has to be much applause for the magnificent revolving stage set by set designer Michael Yeargen. He built a scrumptious, dark wood finish home for Higgins, complete with a spiral staircase and rooms everywhere. As the massive stage turns slowly, you see the tiny bedrooms of the maids float past you, along with gardens and trees with maids hanging on to them. It’s one of the best sets I have ever seen. 

If you can get yourself to Lincoln Center, and to Edwardian England, and need some flowers, go see Eliza, Henry Higgins, Colonel Pickering and even the Queen of Transylvania. They will have you dancing all night, and adoring it.

PRODUCTION: The play is produced by Lincoln Center. Sets: Michael Yeargen. Costumes: Catherine Zuber. Lighting: Donald Holder. Sound: Marc Salzberg. Music Direction: Ted Sperling. Choreography: Christopher Gattelli. The play is directed by Bartlett Sher. It has an open-ended run.

A Day to Remember: Memorial Day 2019

 

On May 30, 1963, I urged citizens to remember 42 years earlier when locals dedicated a granite monument in Ashland, Oregon as “a permanent memorial, reminding those that come after us of the glory of the republic, the bravery of its defenders and the holiness of its ideals.” This monument, dedicated shortly after World War I in 1921, remembered those who had not been brought back alive on our troop ships.  They had died in trenches, of poison gas, or in tank warfare, maybe side by side with the British in the fields of bloody France.

 

When preparing to speak to more than a hundred locals, I read up on war and peace, suffering and victory, and the joy found in winning.  Often I reflected on that emotional World War I against the Kaiser and the sacrifices in trenches and sunken ships.

 

It had been a War to Make the World Safe for Democracy. Woodrow Wilson, with his Ph.D. from Johns Hopkins and an academic life lived at Princeton, had chosen Herbert Hoover to be “Food Czar” with the mandate to unite the farmers of America behind the mission of making sure Europe (the part in Allied hands at least) did not starve.

 

At home, homeland-happy Germans and agitating Socialists had a minimal audience for their protests.  In the Navy Department, the Assistant Secretary, hale and hearty Franklin D. Roosevelt, was charged with creating a mine field to keep Germany out of the North Sea.  He dealt in the capital with my engineer father, Vaughn Taylor Bornet of the Budd Company, to make a success of it.

 

A few decades earlier, in 1898, I pointed out, we had fought Spain to free Cuba “in the cause of humanity and to put an end to the barbarities, bloodshed, starvation, and horrible miseries” that colony was felt to be suffering.  

 

In half a century it would be time to invoke the memory of Midway and Okinawa, of D-Day. We ensured the survival of Britain and France and occupied Japan!  Plenty there to memorialize!  It was indeed true that World Wars I and II had been victorious after the Yankees had come to the rescue of democratic regimes fighting the Kaiser, Nazis and Fascists…. 

 

On February 2, 1966,  I raised the question—as Vietnam was still being actively fought over—whether there was “an ethical question” in that war we were waging so seriously, yet so unsuccessfully.  I didn’t do very well, I thought in retrospect, so in 1976 I revised my remarks.  Looking back, I wrote this emotionally  trying paragraph:  

“We can now look back upon Vietnam, that tortured spot on the planet, and we look hopefully for signs that Good and Evil were clearly defined and readily identifiable to those who undertook the long crusade by force of arms.”  

A world of jungles surrounded us back then.

 

Today we look back full decades.  We visit and walk pleasantly about in today’s Vietnam. We regret we didn’t “win.” We still deplore Communism—that is, after departing by plane or ship.  And, especially, we regret all those deaths—on BOTH sides.  As we take pleasure in the happiness now visible on the sidewalks, we know that while the economy thrives, freedom is short. We also know full well that the war waged from Kennedy to Nixon, yes!,  should have been curtailed long before it was!

 

We do have a right to ask bluntly, “Did we have to wage it with such extraordinary vigor (just because we weren’t winning)?” Did we find Honor in not stopping?  We sought, it must be said, a victory of the kind we had won earlier, in the 19th and 20th centuries. It was unaccountably being denied us in jungles way off somewhere. It was humiliating!

 

In my book on celebrating our patriotic holidays I pointed out that “The literature that attempts to evaluate the Vietnam War is thoughtful and immense.”  Competing with it here is out of the question—although I must admit to having been, as a working historian,  very much a part of it as I defended “patriotism” back when.  I  devoted maybe 200 pages to President Johnson’s turmoil when deciding what in Hell to do in Southeast Asia. 

 

He could see that the Communists were not going to prevail in the rest of Southeast Asia!  In Indonesia, Thailand, Malaysia, Singapore, the Philippines, the Republic of China, and Sri Lanka.  Whatever North Vietnam and China might want, South Vietnam was to be their limited prize.  We had been content with what had “worked” in South Korea, but South Vietnam, it turned out, was a different ballgame.

 

The Vietnam disaster had an effect on the kind of patriotism that prevailed earlier; no doubt about it.  This time, we had Lost!  For a while, we just wouldn’t think about it too much or too often.  Find something else to consider when reflecting on our body politic.

 

I will dare, as I conclude this troubled essay, to quote from my book’s page 149:  “The anti-patriotic among us sometimes descend to portraying the United States in the role of an ‘empire’ engaged routinely in ‘imperialist’ invasions and dedicated to ‘conquest’ for only ‘economic gain.’”

 

For some among us, Patriotism sometimes seems just “old hat.”  Not for everybody.  One thinks back on what can easily be termed “Great Causes” supported by us in the Past. Some are still part of our active heritage. There is a free Europe.

 

Partly from what we did in our past emerged a new Commonwealth, an independent British Empire.  Bad as it is sometimes, Africa could be worse.  We have helped, overall—not wisely, always, but aided by philanthropy centered in the U.S., by Gates, Rotary, and others, by sometimes doing the right thing.  Maybe we’re a little better than we sometimes think!

 

Yet our Nation’s prestige has suffered severely in the past two years.  Leadership has lost us the affection of far too many countries who were once so close as to show pride routinely.  Beginning with that inexcusable withdrawal from the Paris accords on climate, we have from our top office displayed misunderstanding, even contempt, for other Lands.

 

This must stop; the end of “going it alone” cannot come too soon.  Surely this mostly verbal misbehavior is a temporary and transitory thing.  All in office in the Executive Branch need to bear in mind at all times that they are trustees for our evolving reputation.  We must, and we will, strive to do better, very soon.  Downhill is not the right direction for the United States of America!

 

This Memorial Day is a good time to think back, bring our minds up to date, and fly that beautiful flag while humming or singing one of our moving, patriotic songs.  For this quite aged American, it remains “God Bless America” all the way.

How About a Peace Race Instead of an Arms Race?

 

In late April, the highly respected Stockholm International Peace Research Institute reported that, in 2018, world military expenditures rose to a record $1.82 trillion.  The biggest military spender by far was the United States, which increased its military budget by nearly 5 percent to $649 billion (36 percent of the global total). But most other nations also joined the race for bigger and better ways to destroy one another through war.

 

This situation represents a double tragedy.  First, in a world bristling with weapons of vast destructive power, it threatens the annihilation of the human race.  Second, as vast resources are poured into war and preparations for it, a host of other problems―poverty, environmental catastrophe, access to education and healthcare, and more―fail to be adequately addressed.

 

But these circumstances can be changed, as shown by past efforts to challenge runaway militarism.

 

During the late 1950s, the spiraling nuclear arms race, poverty in economically underdeveloped nations, and underfunded public services in the United States inspired considerable thought among socially-conscious Americans.  Seymour Melman, a professor of industrial engineering at Columbia University and a peace activist, responded by writing The Peace Race, a mass market paperback published in 1961.  The book argued that military spending was undermining the U.S. economy and other key aspects of American life, and that it should be replaced by a combination of economic aid abroad and increased public spending at home.

 

Melman’s popular book, and particularly its rhetoric about a “peace race,” quickly came to the attention of the new U.S. President, John F. Kennedy.  On September 25, 1961, dismayed by the Soviet Union’s recent revival of nuclear weapons testing, Kennedy used the occasion of his address to the United Nations to challenge the Russians “not to an arms race, but to a peace race.”  Warning that “mankind must put an end to war―or war will put an end to mankind,” he invited nations to “join in dismantling the national capacity to wage war.”

 

Kennedy’s “peace race” speech praised obliquely, but powerfully, what was the most ambitious plan for disarmament of the Cold War era:  the McCloy-Zorin Accords.  This historic US-USSR agreement, presented to the UN only five days before, outlined a detailed plan for “general and complete disarmament.” It provided for the abolition of national armed forces, the elimination of weapons stockpiles, and the discontinuance of military expenditures in a sequence of stages, each verified by an international disarmament organization before the next stage began.  During this process, disarmament progress would “be accompanied by measures to strengthen institutions for maintaining peace and the settlement of international disputes by peaceful means.”  In December 1961, the McCloy-Zorin Accords were adopted unanimously by the UN General Assembly.

 

Although the accelerating nuclear arms race―symbolized by Soviet and American nuclear testing―slowed the momentum toward disarmament provided by the McCloy-Zorin Accords and Kennedy’s “peace race” address, disarmament continued as a very live issue.  The National Committee for a Sane Nuclear Policy (SANE), America’s largest peace organization, publicly lauded Kennedy’s “peace race” speech and called for “the launching of a Peace Race” in which the two Cold War blocs joined “to end the arms race, contain their power within constructive bounds, and encourage peaceful social change.”

 

For its part, the U.S. Arms Control and Disarmament Agency, created by the Kennedy administration to address disarmament issues, drafted an official U.S. government proposal, Blueprint for the Peace Race, which Kennedy submitted to the United Nations on April 18, 1962.  Leading off with Kennedy’s challenge “not to an arms race, but to a peace race,” the proposal called for general and complete disarmament and proposed moving in verifiable steps toward that goal.

 

Nothing as sweeping as this followed, at least in part because much of the subsequent public attention and government energy went into curbing the nuclear arms race.  A central concern along these lines was nuclear weapons testing, an issue dealt with in 1963 by the Partial Test Ban Treaty, signed that August by the U.S., Soviet, and British governments.  In setting the stage for this treaty, Kennedy drew upon Norman Cousins, the co-chair of SANE, to serve as his intermediary with Soviet Premier Nikita Khrushchev.  Progress in containing the nuclear arms race continued with subsequent great power agreements, particularly the signing of the nuclear Nonproliferation Treaty of 1968.

 

As is often the case, modest reform measures undermine the drive for more thoroughgoing alternatives.  Certainly, this was true with respect to general and complete disarmament.  Peace activists, of course, continued to champion stronger measures.  Thus, Martin Luther King, Jr. used the occasion of his Nobel Peace Prize lecture in Oslo, on December 11, 1964, to declare:  “We must shift the arms race into a ‘peace race.’”  But, with important curbs on the nuclear arms race in place, much of the public and most government leaders turned to other issues.

 

Today, of course, we face not only an increasingly militarized world, but even a resumption of the nuclear arms race, as nuclear powers brazenly scrap nuclear arms control and disarmament treaties and threaten one another, as well as non-nuclear nations, with nuclear war.

 

Perhaps it’s time to revive the demand for more thoroughgoing global disarmament.  Why not wage a peace race instead of an arms race―one bringing an end to the immense dangers and vast waste of resources caused by massive preparations for war?  In the initial stage of this race, how about an immediate cut of 10 percent in every nation’s military budget, thus retaining the current military balance while freeing up $182 billion for the things that make life worth living?  As the past agreements of the U.S. and Soviet governments show us, it’s not at all hard to draw up a reasonable, acceptable plan providing for verification and enforcement.

 

All that’s lacking, it seems, is the will to act.

Leadership and Mimicry: What Plutarch knew about Elizabeth Holmes

 

The founder of the biotech company Theranos, Elizabeth Holmes is currently awaiting trial for cheating investors and deceiving her clients. She claimed that her company was building a device that would revolutionize healthcare by running dozens of lab tests on a single drop of blood. This device, called the Edison, was to become widely available in a nation-wide chain of drug stores, providing nearly every American with quick, affordable access to important information about their health. Holmes appeared to be doing the impossible, and nearly everyone believed in her, from seasoned Silicon Valley entrepreneurs to wealthy investors to former Secretaries of State. By the time she was thirty she had accomplished one of her childhood dreams: she had become a billionaire. But quick and easy blood testing, it turns out, really is impossible. While a legal decision about her behavior as CEO lies in the future, the verdict on her character appears to be in. Elizabeth Holmes is a fraud.

 

In the last year alone, Holmes has been the subject of a book (soon to be a movie), countless newspaper and magazine articles, an HBO documentary, and an ABC News podcast (soon to be a television series). This entrepreneur, once celebrated as a genius, is now more often called names like “disgraced fraudster,” and her career has repeatedly been cast in highly moral terms, with a rise-and-fall trajectory that seems already to have completed its arc. The way to explain the collapse of Theranos, it seems, is to study the deficiencies in Holmes’ character.

 

This approach to telling Holmes’ story calls to mind the Greek philosopher Heraclitus, who claimed that “character is destiny.” This ancient saying remains popular in our modern world. The New York Times editorial board used it just last year, for instance, to describe the downfall of Eliot Spitzer and to speculate about the future of Donald Trump. John McCain selected it as the title for his 2005 book, which contains stories of successful historical figures who demonstrated admirable character. Character alone, McCain argues in the introduction, determines the courses of one’s life and career. And so, according to both the ancient philosopher and the modern statesman, there is no pre-ordained path that we are obliged to follow, nor should we look for external guidance as we navigate our careers. We deserve full credit for our successes, but we must also take full responsibility for our failures.

 

Long before the rise and fall of Elizabeth Holmes, however, philosophers and ethicists were contemplating the implications of Heraclitus’ dictum. Plutarch of Chaeronea, for one, knew this principle well and wrote at length about the fundamental importance of character, especially for people in positions of power. He composed treatises on leadership, but his most ambitious project was the Parallel Lives, a lengthy series of biographies of Greek and Roman leaders that demonstrate how their virtues and vices affected their political and military careers.

 

For Plutarch, good character was fundamental to becoming an authentic leader. In his essay To an Uneducated Leader, he laments that most people who aspire to positions of power fail to realize that they must prepare themselves ethically. “And so,” he writes, “they imitate the unskilled sculptors who believe that their colossal statues appear great and strong when they fashion their figures with a mighty stride, a straining body, and a gaping mouth.” By emphasizing appearance over character, such leaders fool everyone, including themselves, into thinking they are the real thing because they “speak with a low-pitched voice, cast a harsh gaze, affect a cantankerous manner, and hold themselves aloof in their daily lives.” In fact, such leaders are just like the statues, “which on the exterior possess a heroic and divine facade but inside are filled with earth and stone and lead.” Plutarch is imagining statues made of bronze, which were cast over a clay core that remained inside. The statues, at least, could rely on this internal weight to keep them upright, while uneducated leaders “are frequently tripped up and toppled over by their innate foolishness, because they establish their lofty power upon a pedestal that has not been leveled, and so it cannot stand upright.” That pedestal, in Plutarch’s view, is character, and so a leader who forgoes ethical development is destined to fail.

 

Plutarch believed he could show that character was destiny by examining historical examples. In his biography of the Athenian orator and politician Demosthenes, for example, he presents an inauthentic leader who is publicly exposed as hollow. Demosthenes modeled himself on Pericles, an Athenian leader of an earlier generation who in both ancient and modern times has been portrayed in ideal terms. Demosthenes was selective in following his model, however, imitating only his style of speaking, his public demeanor, and his habit of getting involved in only the most important matters, “as though Pericles had become great from these practices alone” (Dem. 9.2). Now Demosthenes did indeed become a great speaker, and he used his oratorical prowess to organize resistance to Philip of Macedon, whose military might posed an existential threat to the independent Greek cities. He talked his way into a leadership position, but when the united Greek armies met Philip’s forces in battle, Demosthenes could not live up to the image he had created. “To this point he had been a brave man,” Plutarch explains. “In the battle, however, he did nothing that was honorable or that corresponded with his words, but he abandoned the formation, running away most shamefully after casting off his arms” (Dem. 20.2). Throwing away one’s arms, especially the heavy shield, was the proverbial sign of cowardice in Greek warfare. Thus, in this single act, Plutarch found all the proof he needed of Demosthenes’ deficiency in character.

 

The modern story of Elizabeth Holmes is one that Plutarch would surely have recognized. ABC News in particular has focused on Holmes’ efforts to shape her public persona and so to conceal the clay inside. When the company was new, the young entrepreneur had no shortage of admirers. “Don’t worry about the future. We’re in good hands,” declares Bill Clinton in the podcast’s first episode. He is followed by an exuberant newscaster who compares Theranos to Amazon, Intel, Microsoft, and Apple, before gushing, “It could be that huge.” But Holmes was not who she pretended to be. In order to make her company more like Apple, she hired away Apple’s employees. And then she went a step further, donning a black turtleneck in deliberate imitation of Steve Jobs, “as though Jobs had become great by wearing the turtleneck alone,” Plutarch would have added. The black shirt, it turns out, was a metaphor for the black box that was supposed to be testing blood but never really had the right stuff inside. In ABC’s version of the story, neither Holmes nor the Edison was ever more than a shell.

 

In business and in politics, then, philosophers and reporters tell us that no one can hide deficiencies in character forever. “It is, of course, impossible for vices to go unnoticed when people hold positions of power,” Plutarch writes in To an Uneducated Leader (7). Then he adds this example: “When jars are empty you cannot distinguish between those that are intact and those that are damaged, but once you fill them, then the leaks appear.” So how do we avoid giving our money to an Elizabeth Holmes, or putting a Demosthenes in charge of our government, only to find out too late that they are not up to the challenge? The answer for jars is to fill them with water and check for leaks before we use them to store expensive wine or oil. Just so, Plutarch, and before him, Heraclitus, would surely have suggested that we ought not give millions of dollars to a first-time entrepreneur, or place an untested politician in high office. In those situations, their character may be their own, but their destiny is ours.

On Toad-Eating, Tyranny, and Trump

 

 

Exactly two hundred years ago, following two decades of war between Napoleonic France and Britain, and the restoration of the crowned kings of Europe, William Hazlitt wrote an insightful and plain-spoken essay, “The Times: On the Connexion between Toad-Eaters and Tyrants.” There he argued that English conservatives’ fierce defense of the absolute right of kings was founded on a base submissiveness toward power and hope for rewards. He gave sycophantic defenders of autocratic rule the traditional name for the charlatan’s assistant who swallowed live, supposedly poisonous toads in order to demonstrate the effectiveness of the con-man’s cure-all.

 

Hazlitt’s argument about the worship of absolutism can help us understand why current commentators may be waiting in vain for Republican national office-holders to exercise oversight or check an unfit and almost certainly felonious President. 

 

The 80% of Republican voters and 10% of Democrats who approve of the President will not change their minds if House Democrats or a prosecutor from the Southern District of New York presents further evidence that Trump has committed bank fraud, evaded income taxes, laundered money, tampered with witnesses, suborned perjury, and obstructed justice—not to mention defrauded the United States by coordinating with an unfriendly foreign power on the release of stolen documents and the sharing of sensitive polling data during an election. These charges and more are now all but certain. Most people know, even if they decline to admit it, that Trump is a player of confidence games whose word is not his bond. 

 

More to the point, his supporters love him for having gotten away with being such a dishonest character and operating in the shadows of illegality his whole career. In the American tradition that equates money with success, Trump is a success, and many Americans worship success, however achieved. These people cheer him because he is uninformed but unashamed, having no ethics but billions of dollars, at least according to his varying and less than reliable claims. 

 

It does not matter to them, as it does not matter to him, that hundreds of millions of that money were funneled his way in the last twenty years by Russian oligarchs and other shady foreign nationals who own the majority of the condos in buildings such as Trump Soho. It does not matter that every business venture he initiated has failed—as the transcripts of his taxes from 1985-1994 show—nor that he has declared bankruptcy to evade creditors so many times in his career, nor that he has underpaid thousands of contractors who resemble his supporters. They do not support Trump despite his venality, immaturity, and obvious intellectual incapacity, but because of his failings of character.    

 

They have been encouraged in their adulation by electronic and print media. The parallel function of the media in Hazlitt’s time explains why the title of his essay begins with the name of the most virulently conservative newspaper of the day. The Times served as the propagandistic arm of the governing Tories, slanting its reporting against any who attempted to reform the corrupt English election system. The paper used its power to mock and launch personal attacks portraying reformers and government critics as lunatics and terrorists. 

 

The parallels between the Times and Fox News could not be stronger. The latter is waging, as the former once waged, a culture war by ridiculing and demonizing new ways of thought and policy proposals based on a desire for social justice. In the 2010s, as in the 1810s, cultural conflict serves as a vehicle and a surrogate for political conflict.

 

During the election, many Republican office-holders kept their distance from the candidate of their party because of his habitual mendacity, his vulgarity, and his general untrustworthiness. When he became President, however, Republican office-holders and the majority of conservative commentators revealed their remarkable capacity for toad-eating.

 

Consider the Senate, designed to be one of the principal checks on the power of a lawless Chief Executive. The majority now exhibits an automatic subservience to Mitch McConnell’s increasingly anti-democratic schemes for seizing and maintaining power for his party for the next generation. Every day its members countenance the Executive’s untold violations of constitutional constraints and criminal law, any two or three of which would have led them to remove a Democratic President from office. 

 

Senator McConnell will grant with a smile and a twinkle in his eye that Trump is indeed an “unusual” politician. So is McConnell an unusual Majority Leader—a former institutionalist who tears away any shred of independence and legitimacy the upper chamber ever possessed. It now resembles the Roman Senate under the Empire, whose sole business was to inquire what were the wishes of the Emperor—how high would he like the Senators to jump, and which of them would he like to see commit suicide today?   

 

As a Representative, Lindsey Graham prosecuted Bill Clinton in order to “purify and cleanse” the Presidency because of one inconsequential lie. As a Senator, he now adamantly defends the unending stream of untruths that flows from Donald Trump’s mouth and through his Twitter account every day. There is little point in multiplying examples. Is there any need to mention Bill Barr, Rudy Giuliani, Paul Ryan, or Kellyanne Conway? 99% of Republican legislators? The number of compromised followers is depressingly incalculable. 

 

But how, specifically, does our current pathological political condition illustrate the connection Hazlitt draws between toad-eaters and tyrants? As Hazlitt observed, legislators and media defenders are motivated by both fear and self-interest—fear of retaliation by the ruler on the one hand, and, on the other, the chance of retaining or improving their position and increasing their wealth and influence during his reign.    

 

Hazlitt’s analysis also throws light on why despots are able to attract followers. Their appeal derives from the consolations of unity and adoration of the One. In Hazlitt’s day, that respect was still based on the idea of a “divinity that hedged a king,” what he viewed as the false and ridiculous belief that a king was different from all the rest of humankind. Hazlitt anticipates the characterization of the authoritarian personality—a weak character that needs to identify itself with a strong leader in order to feel secure—as elaborated by Theodor Adorno and Max Horkheimer after the defeat of the Nazis in 1945.

 

In our day, the attitude of irrational enthusiasm for the President is also based on the doctrine of the “unitary executive” advanced by Dick Cheney and others in the Reagan years. This doctrine rules out all dissension or differences of opinion within the executive branch concerning policy, judgment, or facts. The executive branch must speak with one voice and one will—those of the Chief Executive. As the determined xenophobe Stephen Miller asserts: “He will not be questioned!” 

 

That is to say, any Republican Chief Executive will not be questioned. This doctrine clearly and quickly leads toward consolidation of power in the Chief Executive, however capricious he may be. In the right circumstances—if his party is in charge of the legislature and the Supreme Court—it enables him to be a lawless ruler.

 

Representative Jerry Nadler and other Democrats have said that Trump is behaving like a king, but he is actually acting as something worse—a tyrant. A constitutional monarch in fact acts within the law, but a tyrant disregards the laws and the common good while pursuing self-interest and personal pique. Representative Jackie Speier has more accurately asserted that the President “has in many respects become a dictator.” As the self-proclaimed One who “alone can solve” the country’s problems, he has the unconditional support that a tyrant needs. A host of Fox & Friends explained on May 9 that because he once bought a $28 million yacht, “he’s different from you and me.” Perhaps so, but not in any way that qualifies him for a high office of public trust.   

 

The paradox that Hazlitt’s psychological analysis exposes is that, like their base, Republican legislators and commentators find it easier and more exciting to defend a manifestly absolutist Executive who cares for nothing except money and power than to support a more moderate and thoughtful one. Partisans of absolutism are able to retail the most absurd justifications or explanations, freed by the example of their leader from logical constraints or concern for consistency and right. The more extreme and offensive the policy, the more transparent the excuse for misbehavior, the more contradictory the reasoning, the more delusional the thinking, then the louder and more ferocious is the defense of the leader’s moral vacuity, spitefulness, and ineptitude. The supporters are defending the ruler’s power, not his character, policy proposals, or arguments. 

 

The early Church Father Tertullian said that he believed that his Lord was both God and man not despite the idea’s absurdity, but because of it. Republican office-holders, commentators, and the base support Trump not despite his being a reality television star who is playing President, but because he has no qualifications, no knowledge of history, no understanding of the Constitution, no care for anyone’s interest but his own. They thus demonstrate their devotion to the pure pursuit of party power, unmixed with principle. That is the mark of pure and true toad-eating: “A rogue’s obeyed in office.”

 

It is not a pretty sight—the accelerating dissolution of a constitutional democracy that worked fairly well for a few decades in the mid-twentieth century. But at least we who witness the tragic and farcical spectacle can call the actors in it by their proper names.    

Where Did the Indigenous Community Mothers Go?

 

I have spent the past two decades researching and writing biographies of nineteenth-century indigenous women who married and lived in a cluster of cross-cultural couples in northwest Washington State. Some of their husbands were county officials, while others were military officers and farmers. These women and their husbands composed 80 to 90 percent of Whatcom County’s married couples during its first twenty years of legal existence. And yet, when local historians wrote their county and city histories in the twentieth century, they ignored these indigenous community mothers. The contributions and legacies of these wives and mothers were never explored. The same pattern exists in other places, leaving a conspicuous hole in Western history. 

This should not be surprising, given the iconic heroines mythologized by families and fans of the American westward movement: a courageous sad-eyed wife who left her home forever to trek the Oregon Trail across the continent and help her husband tame a wilderness. Or, the mail-order bride who braved the ocean’s dangers to marry a virtual stranger and help him build a community. Or, a spinster schoolmarm who brought literacy to pioneer children in a dusty town.  Or, the saloon girl with a heart of gold whose for-sale femininity kept a settlement from exploding in violence until real ladies arrived. Annie Oakley and Calamity Jane provided fierce examples of women equal to men in the face of the West’s challenges. Sacagawea remains the only native wife of a non-native man in the West that most Americans have ever heard about.

These archetypes appear again and again in national, regional, and local histories, as well as over two centuries of fictionalized versions of westward expansion. Until Elizabeth Jameson, Susan Armitage, and other women historians established the critical need to examine women’s contributions, most nineteenth century western women made appearances as accessories to male accomplishments or tragic sacrifices to Manifest Destiny. 

Almost never memorialized, even briefly, were the young indigenous women who lived near forts and in new settlements that displaced native communities, and whose husbands were army officers, Indian agents, merchants, local officials, and legislators. Historians did not consider that elite Native women’s families might have had their own agenda when they married their daughters to men they considered to be of equal status. Indigenous community mothers seem to have been an uncomfortable truth for historians and other writers, one that did not fit with the Euro-American mythology they sought to build around “the first white woman” in town. The result was their now-conspicuous absence.

In histories and literature, these husbands have generally been portrayed as men on the fringes of settlement who contributed little or nothing to their town’s development. Historical accounts reference the wives only briefly: “He married an Indian.” The husband has often been said to have “bought” his bride, revealing the writer’s ignorance of her family’s status or of local Native American wedding customs. For example, the Coast Salish people of western Washington State saw marriage as a family decision: the family considered how the groom would fit in and contribute to their extended family economy. Families chaperoned young elite women until marriage. The groom’s wedding gifts recognized the bride’s loss to her family, and a year or more of obligations from both sides were part of the arrangements. Whether or not the husband saw his tribal custom marriage as legitimate, the bride’s family did. These marriages took on the pattern of all marriages: long or short, happy or unhappy.  

This is not to imply that all young wives were from elite Native families. Some were not, and sometimes a slave was offered to a man who sought only a housekeeper and sexual partner and who was likely oblivious to class distinctions among native people.  

In trying to ignore the presence of native wives of “pioneer town fathers,” generations of historians showed no respect for the women’s contributions to their community’s development. As current historians of western women have shown, a contribution was not always a man establishing a mill, a mine, or a business. At Bellingham Bay, four waves of bachelors poured into the settlement in the 1850s, and those who stayed married the neighborhood women in the absence of eligible ones from their own society. These young indigenous wives taught the half-dozen white women how to make use of unfamiliar food sources and cared for their children. Wives used their healing knowledge and delivered babies for isolated women. They took bachelors into their homes as boarders, providing meals, laundry, and a family atmosphere. They learned to bake the bread and apple pies that their husbands and boarders missed. 

Children of cross-cultural marriages in the large cluster from which my eight biographies were written found success or failure in whichever identity they chose, whether they lived in the white-dominant society or joined relatives on a reservation. Writers continue to portray them as misfits who found no place in either society, but this stereotype is false. Legacies to their new communities and even wider society are found in the children and generations that were to come. 

One example is the legacy left by the young Alaskan Haida wife of Captain [and soon-to-be Confederate general] George E. Pickett, commander of Fort Bellingham. Her personal name remains unknown, but soldiers and locals addressed her respectfully as “Mrs. Pickett.” She lived with him in the fort commander’s in-town home where they often hosted visiting territorial officials and army “brass.” She died in the months after their son was born. Her son, James Tilton Pickett, became a much-lauded marine and landscape artist whose works reside in museums. He was the first Washingtonian to attend a professional art school, and one of the founders of Portland, Oregon’s fine arts community. 

Jenny Wynn, an elite young Lummi woman, wed a Philadelphia Quaker blacksmith who arrived at the bay to work at the mill and new coal mine. After the couple moved to a farm, her skill at smoking superior hams turned into a profitable business. The support she and her husband gave to education for rural children resulted in at least four generations of teachers in their family, educators who see that as the family identity.

Historians across the nation might enhance the view of early western communities, as well as many cities in the Old Northwest and along the Mississippi River, by looking beyond the culturally biased accounts to include this long-overlooked group of founding mothers.  

The Chinese Threat to American Higher Education

Tsinghua University in Beijing

 

 

Amid the Trump administration’s destabilizing trade war with China, President Xi Jinping is determined to make his nation into an unrivaled economic and military superpower. China has taken the lead in the development of green energy, 5G networks, and artificial intelligence and robotics—and Xi’s “Belt and Road” initiative will soon link the economies of Asia and Europe by road, rail, and water. 

 

The rise of China also poses significant challenges to American higher education. Although the U.S. remains a destination of choice for international students and scholars, China’s share of global scientific papers increased from 6 to 18 percent between 2003 and 2013; the number of Chinese universities nearly doubled from 2005 to 2015; and eight million students graduated from Chinese institutions of higher learning in 2017 alone (a tenfold increase in a generation). As a result, twenty-two Chinese universities now rank among the top 100 in the world.

 

As President Trump’s nationalistic rhetoric and restrictive immigration policies have resulted in a 10 percent drop in international student enrollments on American campuses, China’s “Double First Class” plan aims to make forty-two of its universities into world-class institutions by 2050. To that end, the Thousand Talents Plan brings leading scientists, academics, and entrepreneurs to teach and research in China, and it provides financial incentives for Chinese scientists living abroad to return home. The government also doles out tens of thousands of full scholarships to attract international students from more than 170 countries.

 

Alarmed by ominous signs that American higher education was losing its edge, I penned Palace of Ashes (Johns Hopkins University Press) in the summer of 2014 to illustrate how the forces of globalization were helping rapidly developing Asian nations—particularly China—to transform their major universities into serious contenders for the world’s students, faculty, and resources. I stand by the central claims of that work, but I could not anticipate the chilling effect the subsequent formulation of “Xi Thought” would have on higher learning.

 

To create a culture based on “market socialism with Chinese characteristics,” Xi Jinping asserted political control over higher education in 2016 in a widely publicized speech that put ideological and political work at the heart of university education to promote socialism. Xi also expressed a desire that Chinese colleges and universities be “guided by Marxism” to become “strongholds that adhere to Party leadership.” As he began a second term as president in 2018, an enhanced system of monitoring faculty members emerged, one that used classroom informants to identify those who engaged in “improper speech.” Although censorship and classroom informants have been a persistent part of Chinese higher learning for generations, Xi Thought challenges widely held assumptions in the West that economic development would result in more social and intellectual freedom.

 

To encourage adherence to the core values of Chinese socialism, Xi advanced the “Chinese dream,” a vision of economic progress that will culminate in a “great renaissance of the Chinese nation.” To export that vision and to promote the country’s image as a global leader, the government funded Confucius Institutes to encourage the study of Chinese language and culture on campuses around the world. Concerned that these institutes erode academic freedom and institutional autonomy by functioning as a propaganda arm of the Chinese government (they often recruit and control academic staff, influence the curriculum, and restrict free debate), American universities have closed a number of them in recent years. Perhaps in retaliation, President Xi recently initiated a campaign against Western values that encourages communist party members in academia to redline innocuous and apolitical expressions of American culture.

 

Today, in place of the free flow of information, China’s "Great Firewall" blocks thousands of websites (including Facebook, Twitter, YouTube, and Google Scholar). Even science is not immune from censorship in Xi’s China—a situation that has raised alarms in Hong Kong, Britain, Australia, and the United States. 
In my view, banning topics such as constitutional democracy, civil society, income inequality, freedom of the press, human rights, and historical critiques of the communist party from university classrooms, research seminars, and publications is regressive. President Xi’s ambition to make China “a global leader in terms of comprehensive national strength and international influence” by 2050 is jeopardized by the ideological control of innovation, China’s slowing economy, the trade war with the U.S., and the strict regulation of information. It is also hard to imagine that heightened authoritarianism, emphasis on party ideology and socialist morality, and censorship and surveillance will contribute to the establishment of world-class research universities. Ultimately, an emphasis on ideological agendas produces strong incentives for researchers to value the quantity of research over its quality. By contrast, less governmental intervention in higher education might generate results more in line with Chinese aspirations. Until such a moment arrives, institutions of higher education around the world should remain wary of China’s academic model, in which free inquiry plays little part.

Yes, you should have free speech on Facebook.

 

 

Facebook purged hundreds of political pages and accounts last fall when responding to criticism that it too easily allowed the spread of “fake news.” It’s removed posts about racism and about breastfeeding. It’s blocked content at Russia’s demand. Now it’s taking on the anti-vaxxers and political extremists. Users who post questionable items or material that otherwise violates Facebook’s rules can find themselves in “Facebook Jail.” 

 

As the largest player in a multi-billion-dollar industry, Facebook increasingly faces complaints that it has illegally censored content and violated users’ constitutional rights to freedom of speech. Free speech protections, people argue back, apply only to governmental bodies, not to businesses. While that is currently the law, political realities demand that free speech protections evolve.

 

First some background.

 

Laws always evolve, even when they involve something seemingly permanent like the United States Constitution. As conceived, the Bill of Rights protected citizens only from the federal government. Beginning after the Civil War, with the ratification of the Fourteenth Amendment and a series of Supreme Court cases over the next half-century, these protections, including free speech, were extended to place limitations on the power of state and local governments as well.

 

Simultaneously, interpretations of “free speech” have varied greatly. A century ago, due to the Christian Great Awakening (beginning in the 18th century and continuing well into the 20th century) and the push for morality, censorship was an everyday part of life. Various images, topics, and words—now often deemed fully acceptable—were illegal in art, film, and literature. Using the U.S. Postal Service to mail “obscene” material—including birth control and love letters—was illegal during such efforts to reform society. There have also always been gaps between the ideals of free speech and the realities of day-to-day life, especially during the world wars and even more so when the person “speaking” is a minority because of class, gender, race, or sexuality and is challenging authority in the United States.

 

Moving closer to the present, neoliberalism has guided the policy moves of nearly all politicians in the United States since the 1980s. It favors free markets, deregulation, low taxes for the richest, and privatization—a society guided by Social Darwinism. Neoliberalism further holds that government is bad and business is good. Through these processes, power has shifted to ultra-wealthy mega-corporations and their leaders—like Facebook. 

 

Facebook (and Tumblr, Twitter, WordPress, etc.) is now how everyday people receive news and freely express themselves through sharing memes, posting pictures, commenting on posts, or interacting with their friends and relatives. And this is a beautiful thing—judging from the billions of Facebook Messenger messages sent daily, the millions of blogs and tweets posted daily, and the hours people spend on social media, people are arguably reading and writing more than ever before. Elected officials, city governments across the nation, even the White House all run multiple social media accounts and encourage communication from the public. In the previous century, the postal service and Southwestern Bell delivered messages between people; today Facebook performs these tasks and, as such, requires that freedom of speech protections expand in order to continue fostering a healthy democracy.

 

Thus, when blocking content or banning users, Facebook uses its ubiquity, power, wealth, and monopoly—made possible by neoliberal transfers of power from “real” governments to private businesses that function as unelected and unaccountable de facto governments—to silence people. Such censorship deserves concern. Because of its authority, capital, everyday role, and size, Facebook is a kind of governmental institution. Private businesses are today’s governments.

 

Put differently, the United States and neoliberalism effectively enable such restrictions on “free speech” by allowing Facebook (with full rights of personage) to stand-in and to do the dirty work of suppressing content deemed immoral, unpopular, too radical, or incompatible with the status quo. 

 

But the internet should not give people free rein. Yelling “FIRE” in a theater just to cause panic is illegal. Restrictions are necessary. Using Facebook or other social media platforms to incite harm warrants appropriate silencing measures, which should be accompanied by thoughtful questions about what exactly “harm” means and about enforcement mechanisms. The proliferation of propaganda-spreading bots remains an underappreciated threat and also needs addressing.

 

There are always trade-offs with freedom of speech, too. As much as most people despise queerphobic or xenophobic rhetoric and don’t wish to encounter it online, any restrictions on free speech open the door to restrictions on all speech. Luckily, content filters available through household routers or through “block” buttons make it relatively easy for a person to filter material they find offensive or harmful.

 

An additional note on neoliberalism and social media is necessary: globalization is another characteristic of neoliberalism. Currently, 2.38 billion individuals across the globe use Facebook, and fewer than 20 percent are in the United States. Yet all of these people are subject to Facebook’s rules, developed primarily by white men based on mores in the United States—prudish by European standards and too freewheeling for governments in China, Iran, and elsewhere, which block the platform. All users are also subject to Facebook’s whims and force as a new global power, as well as its security flaws. With a third of the globe Facebooking, what Facebook allows and doesn’t allow shapes everyone’s interactions in one way or another.

 

The internet facilitates opportunities for people to speak in 2019. In contrast to those who seek to end Net Neutrality, the United Nations has declared access to the internet and its rich libraries of information a basic human right. The American Civil Liberties Union continues to argue in and out of court that digital freedoms of speech, defined as broadly as possible, are vital. I cannot legally ban your ability to speak; neither should Mark Zuckerberg be able to. 

The Expansion of Presidential Power Since 1973

 

Nearly a half century ago, the famed historian Arthur Schlesinger, Jr. published The Imperial Presidency. This pathbreaking work described the growing centralization of power in the executive branch of the American government since the 1930s. The Imperial Presidency appeared at the height of the Senate Watergate hearings in 1973 and brought essential attention to the need to prevent further abuses in the office of the Presidency.

 

Congress reasserted its authority during and after Watergate: it passed the War Powers Act of 1973 over Nixon’s veto, tried to limit FBI and CIA activities through the Church Committee investigations of the mid-1970s, and passed the Ethics in Government Act to create Special Prosecutors to investigate accusations of illegal activities in the executive branch. Unfortunately, these actions didn’t have the impact many in Congress hoped for: future Presidents ignored the War Powers Act, intervening regularly without giving notice under the law, and the Church Committee investigations had no substantial long-range impact.  

 

Presidents continued to expand their executive power. Republican President Ronald Reagan, despite his promotion of conservatism and the goal of making the federal government smaller, expanded the power of the presidency not through law but through precedent: because his substantial unilateral actions were not challenged, he set a precedent for future presidents. This was particularly evident in foreign policy, most notably the Iran-Contra Affair. Congress had banned any involvement or intervention in the civil war raging in Nicaragua against the leftist Sandinista government. Reagan’s administration nonetheless arranged secret arms sales to Iran and used the funds from those sales to support the anti-government “Contras” in Nicaragua. Although some members of Congress called for impeachment proceedings, impeachment was avoided because Reagan was in his second and final term, and because his warm personality and great oratorical ability made him widely popular. Reagan also used his executive power to authorize a secret intervention in Afghanistan against the Soviet Union and to support Iraq and Saddam Hussein in their war against Iran, even as he sold arms to Iran. 

 

Reagan was then succeeded by his Vice President, George H.W. Bush. Bush, with his experience as Director of the Central Intelligence Agency under President Gerald Ford, also intervened internationally without Congressional authority. Bush authorized the invasion of Panama in 1989 and organized a coalition to force Iraqi dictator Saddam Hussein out of Kuwait during the brief Persian Gulf War of 1991. Bush did not seek a Congressional declaration of war, and instead simply gained authorization to use force. At the end of his first and only term in the Oval Office, Bush, with his Attorney General William Barr, pardoned the major figures who had been convicted or were still facing trial as part of the Iran-Contra scandal. This prevented any further investigation of the possibility that Bush himself was involved in that scandal. Bush followed in Reagan’s footsteps: he continued to take unilateral action in foreign policy and acted to ensure that Reagan never was held responsible for his presidential actions, confirming that presidential powers had expanded. 

 

When Democrat Bill Clinton came to office, and once the Republican opposition gained control of both houses of Congress in the midterm elections of 1994, the Republicans were less supportive of unchecked presidential power. While they had been unconcerned about Presidential power under Reagan and Bush, they complained that Clinton abused executive orders on domestic issues, including the environment. Clinton was also heavily investigated and even impeached over his personal behavior with women, including Paula Jones and Monica Lewinsky.

 

When Republican George W. Bush (2001-2009) came to power after the contested Presidential Election of 2000, he brought into government two figures who were particularly keen to expand executive power: Vice President Dick Cheney and Secretary of Defense Donald Rumsfeld. After the attacks of September 11, 2001, Cheney and Rumsfeld used national security concerns to justify the use of surveillance and “enhanced interrogation techniques” (torture). The Patriot Act was passed with very few dissenting votes, and the Department of Homeland Security was created. 

 

The decision to go to war in Afghanistan and Iraq based on faulty information caused some to call for the impeachment of George W. Bush, and a bill was introduced in 2008 by Congressmen Dennis Kucinich of Cleveland, Ohio, and Robert Wexler of Boca Raton, Florida. The charges lodged against Bush in the impeachment resolution included misleading the nation on the need for the invasion of Iraq; his conduct of the Iraq War; the treatment of detainees; the National Security Agency’s warrantless surveillance program; and failure to comply with Congressional subpoenas. Democratic Speaker Nancy Pelosi resisted any move toward impeachment, with Bush’s time in office nearing its end.

 

Once the Democrats lost control of the House of Representatives in the 2010 midterm elections, and of the Senate in the 2014 midterms, the Republicans worked mightily to block the agenda of President Barack Obama. Republicans argued Obama was abusing his power with his “excessive” use of Executive Orders on issues such as the creation of numerous commissions, boards, committees, and task forces, along with his actions on environmental protections, his health care initiatives, his opening of relations with Cuba, and his authorization of the Iran Deal to prevent nuclear development. Republicans further curtailed his agenda when they refused even to consider the nomination of Merrick Garland to replace Antonin Scalia on the Supreme Court after the latter’s death in early 2016, and prevented other judicial confirmations to the lower courts. But Obama’s administration was scandal-free, and no cabinet officers or other high-ranking figures were indicted or convicted for corruption, which had been endemic under Reagan and the second Bush in particular.

 

Now Republican President Donald Trump has made the controversies under earlier Republican Presidents Reagan and the two Bushes look minor by comparison. Some even consider his abuse of power more scandalous than that of Richard Nixon’s presidency. Many are concerned over Russia’s involvement in the 2016 election; Trump’s violation of the Emoluments Clause; his abuse of power; his obstruction of justice; and the massive corruption and incompetence of so many cabinet officers and other high officials under Trump, all of which make him seem unfit for office. 

 

The crisis is greater than Watergate in many respects because Trump has now made it clear he will not cooperate with any Congressional committee demands for evidence or witnesses. He, perhaps jokingly, perhaps seriously, asserted the right to an extra two years as president because he believes he has been mistreated in his first two years due to the Robert Mueller investigation.  And his Attorney General William Barr, the same man who assisted George H. W. Bush in his move toward blanket pardons at the end of his term in 1992, is also refusing to give Congressional committees the entire Mueller report without any redactions.  And now Trump has declared he will not cooperate on any legislative actions by Congress until the “witch hunt” he sees against him comes to an end, which is not about to happen.

 

With Trump using his executive powers to attempt to reverse all of Obama’s accomplishments in office, and those of many other Presidents in the past century, unchecked presidential power has never seemed more of a threat. Arthur Schlesinger Jr.’s 1973 book now reads as just the prelude to a far greater constitutional crisis, one that may be permanently transforming the Presidency and destroying the separation of powers and checks and balances created by the Founding Fathers in 1787.

 

For Anti-Racist Educator, Teaching History Was a Calling

 

 

Tobin Miller Shearer is the Director of the African-American Studies Program at the University of Montana and an Associate Professor of History. He conducts research into the history of race and religion in the United States with an emphasis on prayer, the civil rights movement, and white identity. 

 

What books are you reading now?

 

I just finished a week-long reading marathon while my partner was out of town. Of the eight books I plowed through in a week’s time, my three favorites were Laila Haidarali’s Brown Beauty: Color, Sex, and Race from the Harlem Renaissance to World War II (NYU Press, 2018); Tera Hunter’s Bound in Wedlock: Slave and Free Black Marriage in the Nineteenth Century (Belknap, 2019); and Mark Whitaker’s Smoketown: The Untold Story of the Other Great Black Renaissance (Simon & Schuster, 2018).

 

What is your favorite history book?

 

Albert J. Raboteau’s A Fire in the Bones: Reflections on African-American Religious History (Beacon Press, 1995). His lyrical prose, historical insight, and personal passion are stunning.

 

Why did you choose history as your career?

 

This is a second career for me. After working for fifteen years in the non-profit sector as an anti-racism educator and organizer, I realized that the thing I loved most was the occasional guest lecture I got to give when consulting with colleges and universities. Every time that I explored historical questions with groups of students, I left wanting more. That continues to be the case today.

 

What qualities do you need to be a historian?

 

A love of minutiae, a passion for reading, the mind of a detective, the imagination of a storyteller, and a commitment to making the past relevant to the present.

 

Who was your favorite history teacher?

 

One of my graduate school instructors, Nancy MacLean, demanded more of me as a writer than any other history teacher I’ve ever had. I often think of how she taught us to write when I am working with students today.

 

What is your most memorable or rewarding teaching experience?

 

A special topics course on the history of the White Supremacy movement. Even though (or perhaps because?) we held the class in an undisclosed location with police protection due to the death threats I received for teaching the class, the students were amazing. I learned as much as they did.

 

What are your hopes for history as a discipline?

 

That we as a guild would continue to find ways to be relevant and engaging to students who seek out a liberal arts education and to a broader public interested in connecting the present with the past.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I have an 1857 copy of The History of Slavery and the Slave Trade, Ancient and Modern, The Forms of Slavery That Prevailed in Ancient Nations, Particularly in Greece and Rome, The African Slave Trade and the Political History of Slavery in the United States, Compiled from Authentic Materials by W. O. Blake that is every bit as ponderous and heavy as its title implies. The only thing that would fall into the historical artifact category is an original typeplate of hymn number 587 “O Send Thy Light” from one of the early Mennonite hymnals.

 

What have you found most rewarding and most frustrating about your career? 

 

Most rewarding: the daily balance of being able to spend several hours on my research (on a good day) and several hours engaging with students in and outside of the classroom. Most frustrating: negotiating the bureaucratic hoops that come with operating inside an institution of higher learning.

 

How has the study of history changed in the course of your career?

 

The biggest sea change has been in the realm of technological advances. Word processing, databases, and document digitization have revolutionized the craft and discipline of research and writing. The impact of post-modernism has been more mixed, with important challenges offered to the production of grand master narratives, while those very challenges have made our ability to engage in public-facing historical work all the more difficult.

 

What is your favorite history-related saying? Have you come up with your own?

 

I like Marcus Garvey’s take for its simplicity: “A people without the knowledge of their past history, origin and culture is like a tree without roots.” In my African-American history survey class I sometimes make the observation that the most valuable history is often the most difficult to find; it is what we don’t even know that we need to know that gets us into trouble.

 

What are you doing next?

 

I am about three-quarters of the way through a new book – currently entitled Devout Demonstrators (Routledge – forthcoming) – that explores the role of religious resources in historical social change movements. I am studying four domestic and four international protest movements to better understand what has happened in the past when prayer, vestments, fasting, pilgrimage, and song became part of the arsenal of activists’ tactics.

The Fall of Communism in TV’s The Weissensee Saga

 

Among the numerous TV offerings available for streaming is The Weissensee Saga, a first-rate 24-episode German production now available on MHZ. Like Thomas Mann’s German novel Buddenbrooks (1901), it has all the qualities required of a good multi-generational saga. The TV episodes (each about 49 minutes) contain love and jealousy, good acting, suspenseful plotting, and picturesque and interesting settings. But they also offer viewers, especially those interested in history and communism, excellent insights into the final decade of East German communist rule, including its collapse in 1989-1990 and the reunification of Germany in 1990. Because Communist parties collapsed throughout Eastern Europe and the Soviet Union from 1989 to 1991, the saga set in East Germany provides valuable reflections on an even wider scale.

But first, many viewers who are not very knowledgeable about Eastern European history can benefit from knowing a few background facts. 

  • The East German and other communist governments in Eastern Europe came to power in the years following the end of World War II in Europe (mid-1945).
  • These governments owed their existence mainly to Soviet military might and support in the region, which came about as a result of USSR military victories in 1944-45.
  • The East German government was the last to assume power (in 1949) because prior to that year East Germany was ruled directly by the USSR. In 1945 the defeated Germany was divided into four zones ruled by the USA, Britain, France, and the USSR. Berlin, geographically within the Soviet zone, was also divided into four sectors. In 1948-49, the USSR blocked railway, road, and canal access to the three western Berlin sectors, but the western allies countered with the Berlin airlift to deliver supplies to their sectors.
  • In 1949, the three Western countries combined their zones, setting up the Federal Republic of Germany, and the Soviets responded by establishing the German Democratic Republic (GDR) in East Germany.
  • Soviet troops squashed uprisings in East Germany in 1953, Hungary in 1956, and Czechoslovakia in 1968.  
  • In 1955, West Germany joined NATO, and the USSR responded by establishing the Warsaw Pact, a military alliance between the USSR and its communist East European allies. 
  • In 1961 the GDR, with Soviet approval, constructed the Berlin Wall, dividing East and West Berlin, partly to prevent more East Berliners from pouring into West Berlin. Only in December 1972 did the two Germanys sign a treaty diplomatically recognizing each other.
  • Mikhail Gorbachev became the head of the Soviet communist party, and thus de facto leader of the USSR, in 1985 and soon thereafter initiated economic and cultural reform policies at home, accompanied by measures to end the Cold War with Western democracies.  
  • In 1989-1990 the USSR, now under Mikhail Gorbachev’s leadership, was no longer willing to use Soviet troops to put down opposition to East European communist governments. 
  • The GDR, under Erich Honecker from 1971 to October 1989, resisted following Gorbachev’s example of initiating widespread domestic reforms. (In his Memoirs Gorbachev wrote that trying to convince Honecker to reform was like “speaking to a brick wall.”)

 

A leading expert on German-Russian relations, Angela Stent, has written that the GDR was “a state that never enjoyed popular legitimacy and whose most successful industry was spying, not only on West Germany, but on its own people.” This is a fitting introduction to The Weissensee Saga because the main family around which the series revolves, the Kupfers, contains two members of the Stasi, the East German equivalent of the Soviet KGB: father Hans and oldest son Falk. Another prominent family, the Hausmanns, consisting of mother (famous singer-songwriter Dunja) and daughter (Julia), is victimized in different ways by the infamous Stasi. (An interesting coincidence is that from 1985 to 1990, when most of the TV saga is set, KGB agent Vladimir Putin was stationed in Dresden, East Germany, but the series does not mention him.)

 

What happens to the singer Dunja (played by the East German actress Katrin Sass) is typical. Like major cultural figures in other communist countries, she is forced to cooperate with communist authorities if she wishes to enjoy certain benefits, like traveling and performing in the West (West Germany in her case). The Stasi, including Falk Kupfer, employ various methods like bugging her apartment to make her more accommodating. These security police also employed torture and other means of “persuasion.” Actress Sass had her own personal experiences with the Stasi, no doubt aiding her in presenting her convincing Dunja portrait. Another major actor—Uwe Kockisch, who plays Hans—was once imprisoned for a year for trying to escape from East Germany.  

The experiences of Sass and other actors in the saga bring to life the words of historian Tony Judt:

The Communist regimes did not merely force their rule upon a reluctant citizenry; they encouraged people to collude in their own repression, by collaborating with the security agencies and reporting the activities and opinions of their colleagues, neighbours, acquaintances, friends and relations. . . . The consequence was that while the whole society thus fell under suspicion—who might not have worked for the police or the regime at some moment, even if only inadvertently?—by the same token it became hard to distinguish venal and even mercenary collaboration from simple cowardice or even the desire to protect one’s family. The price of a refusal to report to the Stasi might be your children’s future. The grey veil of moral ambiguity thus fell across many of the private choices of helpless individuals. 

Living in a nice house on a picturesque lake, the Kupfer family enjoys the privileges typical of the communist elite—father Hans has a high position in the Stasi and son Falk is an ambitious and rising force within it. The other son, Martin (East-German-born Florian Lukas), begins the series as a member of the regular police force, much less prestigious than the Stasi, but eventually quits, having soured on his duties, which included suppressing any activities disapproved of by the communist authorities. 

As social turbulence increases in East Germany in the late 1980s, partly due to the Gorbachev effect on Eastern European communist countries, members of the Kupfer family react differently. In one scene Hans approvingly watches Gorbachev on TV. Falk, both a careerist determined to get ahead and a defender of hard-line communist ways, is much less sympathetic to the Soviet leader’s reforming ways. Martin is the least political of the three. The disorienting effect on youth of the collapse of communist power in East Germany, coupled with German reunification, is also well illustrated through two of the youngest generation, Falk’s son Roman and Martin’s daughter Lisa. 

The three older Kupfer men all have their marital difficulties. Hans once had an affair with Dunja Hausmann, and his wife, Marlene (Ruth Reinecke), remains suspicious of her.  Falk’s wife, Vera (Anna Loos), grows increasingly unhappy with him and eventually leaves him and becomes a dissident against the dying communist regime. From the beginning, Martin is divorced from his wife and begins a romance with Julia Hausmann. All three Kupfer women play important roles in the saga, as do two other women who will later become involved with Falk and Martin.

The new woman in Falk’s life in later episodes is physiotherapist Petra Zeiler (Jördis Triebel), who earlier in her life was interrogated by the Stasi and imprisoned. Martin becomes involved with Katja Wiese (Lisa Wagner), a West German journalist. 

In the saga’s last dozen episodes, set in the crucial years 1989-90, East German relations with West Germans become increasingly important—the Berlin Wall is demolished in November 1989. For example, as the communist government collapses and with it the Stasi, Falk goes to work for a West German insurance company wishing to expand into eastern Germany, and is also blackmailed into providing information to the CIA. Martin’s furniture company, adversely affected by currency changes connected with German reunification, turns to a West German financier for advice. Martin’s former police partner, Peter Görlitz (an often humorous Stephan Grossmann), starts selling used cars.

As in other parts of the collapsing communist world of 1989-1991, financial reform and the fate of state-owned assets become crucial questions, and Falk’s ex-wife, Vera, works for an agency dealing with privatizing some of those assets. Before that, she had become active as a dissident and later as a political candidate in the 1990 elections. (She had also become involved with a Lutheran pastor, who falls into the hands of the Stasi and her ex-husband Falk.) Attempting to retain East German communist assets are Falk’s mother, Marlene, and Hans’s former Stasi boss, the evil Günther Gaucke (Hansjürgen Hürrig).

The most impressive educational aspect of The Weissensee Saga is its realistic and convincing portrait of the differing lives and reactions of East Germans in the final decade of the GDR’s existence. Because their experiences resembled those of politicians, officials, and ordinary citizens in other eastern European communist countries, including the USSR, we also gain insight into what mattered to such people: financial upheaval; greater freedoms, including fewer government restraints; and the personal transition from one economic, political, and cultural system to another (like the East Germans, Russians such as Vladimir Putin had to make that transition). Along with MHZ’s even longer series, the 72-episode World War II drama A French Village, The Weissensee Saga now springs to the top of my favorite fictional historical TV sagas.

All the President’s Humility: What We Can Learn From Young George Washington

 

The Founding Father whom Americans revere as the incarnation of steady, selfless leadership – George Washington – was, in his early twenties, a remarkably self-centered young man.  This poses an interesting question:  Can today’s leaders – beginning at the top – make a similar transformation from self-centered to steady and selfless?  Or is it just too late?

 

After four years of immersing myself in George Washington’s life between the ages of twenty-one and twenty-six, what I find most surprising is not that he eventually grew to become a great leader.  Rather, it’s that he became a great leader despite where he started. As a young man this guy was a mess. 

 

Washington is certainly not the first young man to be selfish, egotistical, vain, thin-skinned, ungracious, whiny, petulant, and brazenly ambitious. Most young men who feel underappreciated, however, don’t quit or threaten to quit their jobs at least seven times in the first few years. Nor are most obsessed with their best friend’s wife. Nor do most twenty-somethings inadvertently set off a global war. What distinguishes George Washington’s youthful follies in the 1750s is that his relentless ambition happened to coincide with the many unsettled territorial claims to North America, creating a volatile mixture. As it combusted, his youthful self-centeredness played out on a stage that quickly expanded from local, to regional, to international – with disastrous consequences.

 

From his mid-teens onward, Washington’s ambition shows.  It accelerates to a relentless upward clawing as he enters his twenties.  His father, Gus, had died when George was eleven, setting back George’s prospects for a secure future.  As a younger son in a fourth-generation family of middle-level Virginia tobacco planters, George, unlike his older half brothers, was not sent off to Britain to receive a polished boarding-school education, nor did he inherit enough land from Gus to support himself.  After his forceful and cantankerous mother, Mary Ball Washington, shot down his plan at age fourteen to go to sea, George, needing a way to make a living, polished up his father’s old surveying instruments, taught himself to operate them, and set himself up as a freelance surveyor.

 

By age eighteen, he’d earned enough money to start buying his own pieces of frontier land beyond the Blue Ridge Mountains.  By age twenty-one, still not rising fast enough in the Virginia aristocracy to sate his driving ambition, Washington took a part-time post in Virginia’s colonial military and volunteered for a dangerous winter mission.   He was to carry a message from Virginia’s British governor, Robert Dinwiddie, over the Appalachian Mountains and deep into the Ohio wilderness, delivering it to the commandant of a newly built French fort.  

 

The message said in essence, Stay out! All these lands belong to King George.        

 

This launched Washington on five years of harrowing adventures in the Ohio wilderness, its dangers further fueled by his heedless push to make a name and his almost utter lack of experience.  He came within an inch of dying on that first mission – pitched off a makeshift raft into an icy river then nearly freezing to death during the frigid night on a snowy island.  On his second mission into the wilds he rashly ambushed a French diplomatic party that was breakfasting in a wooded glen.  Not surprisingly, this triggered a massive retaliation by hundreds of French soldiers and Indian warriors, during which Washington’s outnumbered men perished in a pouring rain in blood-and-mud-filled trenches.  He had to surrender (although he refused to use that word) the claptrap fort he had thrown together, appropriately named Fort Necessity for the desperate circumstances he had created for himself and his troops.  This resulted in deep humiliation for the British Empire and its authorities in London, touching off tensions that exploded into the French and Indian War (and spread to Europe and around the globe as the Seven Years War).

 

Young Washington fervently wanted a British Royal Army officer’s commission – instead of his much less prestigious Virginia colonial commission – and he rode great distances to petition various aristocratic British generals to give him one. But he was a hayseed by their standards, an uneducated rube, and a military loser besides. He was never granted a “king’s commission,” cementing his lasting resentment toward the British, who he felt treated him as second class.

 

He is mostly remembered today, of course, as the immortal embodiment of sound leadership. So how does one evolve from a festering mass of insecurities and perceived injustices to become a great leader? Not easily, and not all at once. It took Washington many years to metamorphose from self-centered, impetuous young man burning with ambition to gain personal “honor” into a steady, selfless, seemingly unflappable leader.

 

Yet one catches glimpses from his early twenties that hint at what he might become – transformative moments that show a young man beginning to extend his emotional and intellectual reach beyond himself. There is a moment when he literally gets down off his high horse – the living embodiment of a Virginia gentleman’s status – and walks the muddy trail beside his men, freeing the animal to haul armaments over a steep mountain pass. He shows an almost desperate sense of helplessness when Virginia frontier settlers, whose safety has been entrusted to his care, plead with him to save them from roving bands of Indians who scalp their loved ones and burn their homesteads; he offers to give his own life to save theirs if only it would help. One senses in these moments his growing empathy for the plight of others. As a young aide-de-camp to British General Braddock, he barely survives the wilderness ambush of a large column of the general’s Redcoats by Indian warriors and French soldiers. His narrow escape from death was marked by the multiple bullet holes through his coat and hat.

 

“[T]he miraculous care of Providence,” he wrote his younger brother after the battle, “…protected me beyond all human expectation….” 

 

Implicit in this remark is that Providence may have chosen him for some greater role. Perhaps his destiny is not simply all about George Washington.

 

One sees steps toward a more mature style of leadership.  After he makes a series of heedless blunders in his rush to prove himself in his first engagements, Washington learns to listen carefully to intelligent and trusted advisors and weigh their words judiciously before making a decision. 

 

Young George Washington was not immune to his own era’s culture of voluble denial and dexterous shifting of blame, the same culture that besets us today. He initially denied his mistakes or obfuscated his moments of failure. The surrender of Fort Necessity comes to mind: the twenty-two-year-old colonel’s less-than-complete public recounting of the bloody debacle reads as if his forces and the French simply agreed to stop fighting and walk away, rather than the reality of a slaughter that led to the surrender of Washington’s forces and signed documents to that effect.

 

As he grew older, he learned to cultivate his image and project a sense of dignity. Many commentators have remarked that he seemed to see himself as an actor on a stage.  He rarely revealed his deepest emotions, at least in public. But as he matured into leadership he clearly learned to accept his failures, take responsibility for them, and acknowledge his own human frailty, if sometimes only to himself and his beloved Martha.  When, at age forty-three, he was asked by the Continental Congress in June of 1775 to command the newly formed Continental Army against British forces, Washington responded, “…I this day declare, with utmost sincerity, I do not think myself equal to the command I [am] honoured with.”

 

What leader would say that today?  Who has that kind of humility?  Maybe it’s simply too late for most of our current leadership.  Self-centeredness and driving ambition have always played a role among American politicians.  One wonders, however, if today those qualities are amplified in our leaders – even encouraged – by instant polling, social media accounts that precisely measure “popularity” by counting followers or hits, and a media environment that thrives on volatile, off-the-cuff political commentary.   Is it just too hard for our leaders to embrace humility in that churning vortex, to acknowledge their own weaknesses?

 

Or do the lessons of humility have to come from somewhere far deeper, a place where the penalties for arrogance land far more severely?  Amid all the noise among our leaders today – the posturing, the blaming, the denying – how powerful are the consequences of self-centeredness and ambition, how immediate, how graphic, how frightening?  

 

The young George Washington, by contrast, suffered horrendous consequences for his self-centeredness and driving ambition, such as seeing his dead comrades-in-arms sprawled in the bloody trenches of Fort Necessity while rain cascaded down and darkness fell. It was a sobering reality.

 

Much later in life, during his presidency and after, Washington worried about extreme partisanship literally tearing the young nation into pieces. Maybe we as a nation nearly two-and-a-half centuries later have felt invulnerable in our unity, fearing no consequences for our sniping fractiousness, until suddenly we find that unity shattered into unfixable, razor-edged shards. Washington never took unity for granted. As commander-in-chief of the Continental Army and then as president, he understood that the greatest task facing him was not to enhance his stature but to unify the troops, and then the nation. The battering he received in his early twenties in the Ohio wilderness helped him arrive at this realization. He learned, in the hardest way, that it was not only about him. It was about everyone. He learned to settle his anger, open his ears, and subsume his hefty ego to a greater good.

 

In Washington’s era, as today, much of the responsibility for leadership fell on the citizenry and their honesty with themselves.  They could see the man and judge him for what he was.  He learned to welcome that, instead of ducking from it.  Washington’s growing self-assurance allowed him to acknowledge his own weaknesses and imperfections and, as a result, maintain his dignity rather than assuming a reflexive position of bluster, muscle-flexing, and blame.  Those who looked to him for leadership recognized his humility as a sign of wisdom and strength.  They saw a leader who was sincerely trying to do his best for a struggling nation.  They rallied behind him.  When they did, his humility ultimately became a source of the citizenry’s wisdom and strength.

Roundup Top 10!  

 

We Don't Have to Imagine the Consequences of Abortion Bans. We Just Have to Look to the Past

by Leslie J. Reagan

Making abortion illegal never meant abortion didn’t happen. For the entire century of criminalized abortion, women of every class, marital status, religion and race still obtained them.

 

Why nuclear diplomacy needs more women

by Elena Souris

Historically, a homogeneous group of policymakers makes innovation less likely.

 

 

Rashida Tlaib’s critics have Palestinian history all wrong

by Maha Nassar

The decades-long process that led to the creation of Israel involved plenty of Palestinian suffering.

 

 

The Real Reason Iran’s Hardliners Don’t Want To Talk To America

by Shireen T. Hunter

Tehran’s reluctance to engage in direct talks with America at a normal state-to-state level within a bilateral framework long predates the Trump administration.

 

 

Calhoun statue should not stand in prominent public space

by Joseph A. Darby

The only good “compromise” is to take it down and involve those who cherish his memory in choosing a suitable venue for its more appropriate display.

 

 

We need to stop focusing on the mental health of mass shooters

by Deborah Doroshow

Mentally ill Americans are already stigmatized — and wrongly so.

 

 

Living in a Nation of Political Narcissists

by Tom Engelhardt

American election exceptionalism from 1945-2019.

 

 

How Democrats can win the abortion war: Talk about Roe's restrictions as well as rights

by Jonathan Zimmerman

Republicans are lying when they paint us as the party of death and infanticide. Fight back by championing both the right to abortion and limits on it.

 

 

On the Recent Executive Order on "Free Inquiry" in Higher Education

by James Grossman and Edward Liebow

President Donald Trump’s executive order of March 21 on “free inquiry, transparency, and accountability in colleges and universities” is a textbook example of a classic negotiating ploy—misdirection.


 

Reclaiming History From Howard Zinn

by Naomi Schaefer Riley

The left’s portrait of America’s past has triumphed thanks to the abdication of serious historians. Wilfred M. McClay offers an antidote.

Susan Sarandon Shines in Happy Talk

 

The musical South Pacific, the Richard Rodgers and Oscar Hammerstein show about life in the Pacific during World War II, debuted in 1949 and was a huge hit. Now, all these years later, Lorraine has been cast as Bloody Mary in a production of the show at her small town’s Jewish Community Center. She sees herself as a glowing celebrity in yet another starring vehicle for her local theater group. She is, as she constantly says, loved by all. The musical comes to represent her life, and the life she wishes she lived.

 

Lorraine’s mother is dying and receives 24-hour care from Ljuba, a Serbian caregiver with problems galore. Lorraine has been in a hollow marriage for years with her husband, Bill, who rarely speaks to her and is absorbed daily in a book about the Civil War. Lorraine bristles that Bill has turned into an old man, because all old men in America find that they must read a book about the Civil War before they pass on.

 

The travails of Lorraine are the material of Jesse Eisenberg’s very funny and very moving new play, Happy Talk, with a tremendous performance by Susan Sarandon as its star. The play opened Thursday at the Griffin Theater at the Pershing Square Signature Theater Complex on W. 42d Street, New York.

 

What do you get from a 1949 musical such as South Pacific? Everything, according to Lorraine. In the play, you continually hear the song Bali Ha’i, a song about hopes and dreams and a special place in a troubled world. That’s Lorraine’s world. She has built a self-centered, egomaniacal world for herself and refuses to recognize the odd and painful life in which she actually exists. She constantly goes back into the past, and to mythical Bali Ha’i, to try to rediscover herself, continually failing.

 

She never did have a good marriage and, of course, blames her husband, who can’t stand her. She and her mother never got along and for that she blames – mom. She never had friends and for that she blames all the people she says are her friends but will have nothing to do with her. As an example, after each rehearsal of the play all the actors go to a nearby bar to have a drink, but never ask Lorraine to join them.

 

Lorraine’s daughter, Jenny, whom she raised to be the same kind of fantasy-world chaser she is, arrives in mid-play and harangues her mother in long, hateful dialogues. Her chickens have not only come home to roost, but to harass her.

 

She does have a wonderful relationship with Ljuba, the Serbian caregiver, who tells the audience a bit about the history of Serbia over the last few decades. Ljuba loves Lorraine, though for no apparent reason (you find out why soon enough).

 

Ljuba has a time-honored American historical problem involving immigration. She has been in the U.S. illegally for years and must find someone to marry her so she can stay. She’s willing to pay the going black-market rate for an arranged marriage, $30,000, and asks Lorraine to find her a hubby.

 

Lorraine recruits her actor pal Ronny, perhaps her lone friend, who agrees but is in it for the money and the money alone, even as he leads Ljuba to believe he likes her and that she could find happiness with him.

 

The road Ljuba and Ronny go down, urged on by the smiling and encouraging Lorraine, is a slippery one, and it is the same road thousands of illegal immigrants have followed for a century in America. The marriage scam is an old one. The government permits many women, or men, from foreign lands to stay in America if they marry an American. Whole industries are involved in this. Men get off a plane and are married to a total stranger a few days later for a specified amount of money. Some women marry dozens of men, all at a fixed price. It is a marriage mill that has been churning out legal couples who barely know each other for generations. When the scheme got under way in the play, many in the audience nodded knowingly; the scam is that familiar.

 

The play is very, very funny, and playwright Eisenberg takes the audience along on a comedic roller coaster with ups and downs and spins around dangerous curves. Later, there is a dramatic change in the story. His script is brilliant when it is funny, and deep and provocative when it is dramatic.

 

Eisenberg’s work is smartly directed by Scott Elliott, who gets full use out of the music of South Pacific, particularly the song Bali Ha’i, using it as a backdrop to tell the story.

 

All of the actors do fine work. Ronny, the bubbly gay actor eager to collect his money, is played by the delightful Nicci Santos. Grumpy Bill, so enchanted by Lee, Grant, and Gettysburg, is played well by Daniel Oreskes. Marin Ireland gives an enchanting and memorable performance as Ljuba.

 

The centerpiece of the show is Lorraine, played by Ms. Sarandon. The well-known screen actress (Bull Durham, Thelma and Louise, etc.), the star of so many movies, is just as comedic, and as powerful, here on stage as she has been in any film. Her character takes both slow and sharp turns as the play progresses, and Ms. Sarandon masters all of them. She is lovable and embraceable when she is funny and menacing when she is angry. She is hateful, and yet very vulnerable. She turns Lorraine into a memorable character, a pathetic middle-aged woman you will never forget.

 

PRODUCTION: The play is produced by the New Group. Set Design: Derek McLane, Costumes: Clint Ramos, Lighting: Jeff Croiter, Sound: Rob Milburn and Michael Bodeen. The play is directed by Scott Elliott. The show has an open-ended run.

      
