Active Shooter

My good friend Jill recently posted a comment about the release of a new video game called “Active Shooter” in which the player is armed and enters a school to see how many “cops and ‘civs'” he or she can shoot. The “civs” are civilians — presumably including children? I don’t know because I haven’t seen it. Nor do I want to. But her summary and description of the game caused me to burst forth with a comment in which I insisted that we must finally face the fact that violent games cause violence in children. Scottie, a fellow blogger, then politely took me to task on the grounds that he was (and is) a game-player who later served in the armed forces as an adult, and he has no desire whatever to enter a school and shoot children. Point taken. I would like to respond to his comment and expand on my argument in this post.

To begin with, let’s agree that a causal relationship is notoriously difficult to establish. Just ask the cigarette companies who denied for years what everyone now knows, to wit, that smoking causes lung diseases, including cancer. The problem is that in order to show that A causes B one must establish that B never occurs without A and that whenever we have A we have B. In the case of cigarette smoking, there are smokers who never get any lung diseases and there are those who never smoke who nevertheless do end up with terrible lung diseases, including cancer. So how can we say the one causes the other? In the end it is because there is a constant conjunction or a high correlation of A and B, enough of a conjunction to conclude that there is a causal relationship between the two — not an inviolable relationship, admittedly, but a causal relationship nonetheless, in the sense that it is highly likely that A will be followed by B.

Now, we know a number of things about human beings. Freud has told us, to our chagrin, that we are all aggressive and inclined to violence in one way or another. As infants we are immersed in our own world where our demands are almost immediately met. As the months and years pass we gradually learn that there are things we cannot have and things we are not supposed to do. (Well, we should learn those things; we assume that parents and teachers are doing their jobs.) The result is what we call “civilization,” and it comes from the sublimation of violent, aggressive impulses into socially acceptable channels, such things as art, philosophy, and science. Or else we find socially acceptable channels to provide us with vicarious release of those impulses, such as humor and violent games like football and boxing. Moreover, we also know about humans that we learn by imitation, like all animals. What we see we tend to imitate.

Thus, it would seem natural to conclude that constant playing at violent games would result in children growing into adults who seek to imitate those same actions in order to release aggressive impulses. But what about those kids who play the games endlessly, not only in this country but all around the world? Violence is more prevalent in this country than in others where the games are still played. And as Scottie noted in his case, he played the games and later became a professional soldier and yet he has no desire whatever to shoot children. We seem to have come a cropper.

The answer, I think, lies in the Freudian notion of the “reality principle,” which Freud uses to explain how the infant we spoke about a moment ago gradually learns to adapt to a society that disallows the sudden release of violent impulses. With good parenting and good role models, the young children who play the games (in this case) learn to sublimate those violent impulses, as we all should. But in a permissive society where parents both work and kids are raised by the television (which is also filled with violent images) and day-care where they cannot possibly receive the love they crave, kids are more likely to have a weak reality principle and find it more difficult to separate the games they play from the real world around them where, if someone is shot, there is terrible pain and serious consequences for the shooter.

In a word, I think the case can be made that there is a conjunction between the repeated immersion in an imaginary world where violence is the norm and the trend toward greater violence in this society that is generally too busy to instill in the young what used to be called “good character” and which Freud called a sound reality principle — the ability to distinguish between games and reality. I think the conjunction is strong enough to call it a causal relationship. But just as there are smokers who do not get cancer of the lungs, there are game players, like Scottie, who have a stronger reality principle and who do not become violent adults entering the schools and shooting “civs.”

The way to test this theory would be to take the games away from the kids and see what results. But that will never happen. So the alternative is to have parents spend more time with their children, reducing their game-playing somewhat while at the same time explaining to them how things work in the real world. I suggest that if this does not happen we shall see more and more examples of violent behavior on the part of more and more people.

Strict Construction

During the Reagan administration Attorney General Edwin Meese, Judge Robert Bork, and other conservative spokesmen demanded that the Constitution be interpreted as the founders intended it, that there be a “strict construction” of the Constitution in attempting to decide contemporary court cases. This view has been around for some time, but it rests on the questionable assumption that we can know what the hell the Founders meant when they wrote that document in the eighteenth century. We can’t. We don’t even know what we ourselves mean when we say or write something today — even our own words in many cases! Can anyone reading this please tell me, for example, what the dickens our President means when he tweets his endless drivel? Does he even know?

James Madison, who authored the Constitution in large part, and who kept copious notes on the proceedings of the Convention when the document was being discussed, insisted that his notes be kept private until after his death. In his words, it was better that people make of it what they can on their own, that his notes remain unpublished

“till the Constitution should be well settled by practice, and till a knowledge of the controversial parts of the proceedings of its framers could be turned to no proper account.”

In a word, it is better, in the view of the founders themselves, that no attempt be made to try to determine what a group of men — most of whom disagreed with one another on nearly every topic — might possibly have meant, especially hundreds of years ago. As John Murrin says in his excellent book, Rethinking America,

“Even if we decide to accept the accuracy of these accounts, they only tell us what one man [Madison] thought, not why the majority voted as it did or what the majority assumed it was doing.”

Strict Construction is a fiction. It demands that we strive for an impossible goal: to know what a group of men thought years ago in the heat of debate and during a time when thirteen colonies had very different agendas and there was yet no sense of a “united” states. Murrin concludes that

“The real question is not what the drafters thought they were writing, but what the people believed they were implementing [when they ratified the document].”

And that, as we know, is an impossible quest. But, then, so is “strict construction.”

New Perspectives

In reading John Murrin’s new book, Rethinking America: From Empire to Republic, I was struck by the deep divisions that separated the original thirteen colonies and made the uniting of those disparate entities almost impossible. I have always thought it was simple: England abused the colonies; they united and threw off the weight of the Empire. As Murrin points out, however, deep divisions among the colonies existed before the revolution broke out and persisted long after the war was over — eventually leading to the Civil War. At one point the New England states threatened to separate themselves from the rest and establish their own identity. And the South was never happy about joining the North where, they thought, abiding loyalties to the English king persisted and a determination to end slavery would cripple the economy of the South. The adoption of the Constitution was not a matter of course; it was a struggle:

“[By 1787] the only alternative to the Constitution was disunion.”

This remained a real possibility during that turbulent period as the aforementioned interests of the New England states differed almost completely from those of the deep South. And the Middle States wavered back and forth between Federalism, following Alexander Hamilton, and Republicanism, following Thomas Jefferson. There were, throughout the period, many who remained loyal to England and, indeed, most Americans at the time regarded themselves as English citizens — even after the revolution. As Murrin presents his case, it is remarkable that the colonies were ever able to unite enough to carry off the war, much less adopt a Constitution that would unite such diverse entities. But the Stamp Act, together with the Boston Massacre, in addition to a series of political blunders on the part of the English parliament, persuaded enough people in this country that separation from England was the only way to go. And, after the revolution, strength lay in a united states of America, not separate colonies or states. But, almost without exception, the colonists did not want a strong central government. They wanted their independence and minimal interference with their lives. Murrin describes the struggles in detail, and they were immense.

What I found particularly interesting was the widespread distrust at the time of the people, the common clay, along with the difficulties connected with the ratification of the Constitution itself — regarded by many historians as an “elitist” document, full of compromises and exhibiting the aforementioned distrust — as in the case of the notion of representation restricted to

“one for every thirty thousand people (a figure about twice the size of contemporary Boston) . . . . [This was a document] designed to secure government by ‘the wise, the rich, and the good.’ Only socially prominent men could expect to be visible enough over that large an area to win elections, and they might well get help from one another. . .”

It is fairly well known that a great many people, loyal to the English, fled this country and headed for Canada during the revolution. In fact, my wife’s ancestors were among them — while one of my ancestors fought alongside Washington and died at the battle of Princeton. (It has not caused problems in our marriage, you’ll be happy to know!) What is not so generally known is that a great many people who remained behind during those years were loyal to the English and played a role in the revolution itself — spying for the English and making secrecy in Washington’s tactics nearly impossible. More than one-third of the population of New Jersey, for example, was fiercely loyalist during the revolution. One wonders how on earth the colonists pulled off the victory at Trenton after crossing the Delaware — given the presence of so many who would have gladly told of the movements of the militias.

Alexander Meiklejohn once said that people should read history after they know everything else. I know what he meant, but I disagree. History is fascinating and important. And in an age that is self- and present-oriented and inclined to dismiss history as “yesterday’s news,” an age in which history has been jettisoned from college curricula across this land, it becomes even more important, especially for those who know nothing. We learn how to act today by reading about the mistakes we made in the past — just as the young learn from their parents’ mistakes. But, like the young, we think we know better. We think that ours is a unique experience and nothing the old folks have to say has any bearing on what is going on in our lives.

It may have been best said by the ancient historian Diodorus of Agyrium in 85 B.C. (surely you have heard of him?) when he noted that

“History is able to instruct without inflicting pain by affording an insight into the failures and successes of others. . . History surpasses individual experience in value in proportion to its conspicuous superiority in scope and content.”

The kids are wrong: we can learn from others. We had better.

Communities

In an ideal world, that is a world as I like to imagine it, colleges and universities would be communities of learning, places where folks with different points of view, ages, and preferences meet to discuss with open minds the issues that have confounded humanity for generations. The emphasis here is on “communities,” since the idea is that there is a common purpose, a common goal: all are together to learn from one another and from other minds outside the community that are invited in to share what they know and join in the conversation.

In the real world, the world we all know and love, it is not quite like this. Increasingly, colleges and universities have become warring camps where faculty and students align themselves with one another on political or ideological grounds and dare others to intrude. Increasingly commonplace are such things as the denial of invitations to certain people to come to campus to join in the conversation; such invitations are met with howls of protest as they are regarded as anathema to what education is now all about. Faculty members select reading material that conforms to their own particular “take” on the issues of the day, insisting that others have done so for generations and it is now their turn. Whether or not this is true, and I seriously question it, there is no place for this sort of selective indoctrination, the hammering into young and impressionable heads of the last word on controversial topics that allow for a variety of opinions — indeed, demand a variety of opinions in order to help young people learn to think. Cultural diversity, with the stress on the superiority of other cultures (any other cultures) to our own, has taken the place of intellectual diversity, the open expression of a variety of points of view on complex issues. Shouting has replaced civil discourse, and open minds have been closed.

Years ago, when I taught at the University of Rhode Island there was increasing interest among faculty members in the new unions that were forming around the country. At URI we had the American Association of University Professors, the mildest form of union, but one dedicated to guaranteeing freedom of speech, intellectual freedom, and, of course, decent wages for the hard work that many are unaware goes into teaching the young. At the time I worried that this new wave of unionization might well lead to a confrontational relationship between faculty and administration, that it would destroy the collegiality that I thought central to the purpose of a community of learning. How naive! But, in a sense, I was right. Unions, for all the good they do, tend to grow like an experiment gone wrong and to become all-powerful and all-important. Instead of working to protect the ideals of communities of learning they lend themselves to the growing conviction that education is all about business and learning must take a back seat.

All of this, I suppose, is the complaint of an old, fossilized college teacher who complains that things were never as they should have been but are even worse today than they were once upon a time. There is some truth in this, of course, as old folks tend to look back with rose-colored glasses. But, at the same time, it is undeniably true that the gap has grown wider and wider between the ideal of education as a place where the young come to gain true freedom, the possession of their own minds, and the reality of college as a business. I have seen it happening and while I have done what I could to close that gap I do realize that it is too little too late. Things were never ideal, and there have always been reasons to complain — legitimately so. But of late, the larger culture has come together with the academy to create a world within a world in which business is the order of the day and intolerance has replaced tolerance while the young struggle to understand why they are there in the first place — and how on earth they are going to pay for the privilege after graduation.

There are success stories, of course, excellent students who want to learn and grow, led by dedicated teachers who realize that the student’s intellectual growth is of paramount importance and that it is not fostered by indoctrination posing as education. And these exceptions are the foundation on which to build our hopes, as they are in the world at large where good people struggle to do good while all around them folks worry only about how to do well, how to “succeed” in a world in which success is measured in dollars and cents.

US CEO Pay has reached epic differential

This is sad, but not new. It has been going on for many, many years. And the problem is even more distressing when the figures are compared and contrasted with the CEOs in other “developed” countries.

musingsofanoldfart

As reported in The Guardian today, US CEOs now make in pay 339 times the pay of the average worker according to a Bloomberg study of 225 companies. In retail companies, the ratio is 977 to 1 on average. Let that sink in a little.

A quote from the article entitled “‘CEOs don’t want this released’: US study lays bare extreme pay-ratio problem” by Edward Helmore is very revealing:

“According to a recent Bloomberg analysis of 22 major world economies, the average CEO-worker pay gap in the US far outpaces that of other industrialized nations. The average US CEO makes more than four times his or her counterpart in the other countries analyzed.”

Some people may push back and opine that US CEOs may be worth 4X that of their non-US industrialized nation counterparts. If that were true, it would mean US company performance is 4X that of non-US companies…

The Hollow Man

Bartley Hubbard is a hollow man. He is a flawed character and totally without principles. He is self-absorbed and uses others to improve his standing in his own mind. He is not a wicked man in the strict sense of that word: he hasn’t killed anyone and hasn’t raped any women — so far as we know. Though, in all honesty, he does flirt mercilessly with pretty young women while in the company of his beautiful wife. Oh, did I mention? His wife is beautiful and worships the ground Bartley walks on — which is why he married her. While she is away one summer after they have been married for some years, he ruminates on his wife and his feelings for her, recalling that when they broke apart some years before, she was the one who sought him out and wanted to be with him, accepting all the blame for his many shortcomings:

“As he recalled the facts, he was in a mood of entire acquiescence; and the reconciliation had been of her own seeking; he could not blame her for it; she was very much in love with him and he was fond of her. In fact, he was still fond of her; when he thought of the little ways of hers, it filled him with tenderness. He did justice to her fine qualities, too; her generosity, her truthfulness, her entire loyalty to his best interests; . . .[however,] in her absence he remembered that her virtues were tedious and even painful at times. He had his doubts whether there was sufficient compensation for them. He sometimes questioned whether he had not made a great mistake to get married; he expected now to stick it through; but this doubt occurred to him.”

Bartley and his wife Marcia have a child. He is only a fiction, of course, a figment of William Dean Howells’ imagination. But he is, in Howells’ words, a “modern instance” in the novel by that name. Bartley Hubbard, pragmatic and unfeeling at the core, is a modern instance of a hollow man whom Howells worried was beginning to become more and more common in the late nineteenth century, the so-called “modern” age. In our “post-modern” age his type is becoming legion. And in a country led by the grand pooh-bah of hollow men, we should be quite familiar with the type by this time.

Bartley drifts along writing for newspapers and accepting the accolades and financial rewards, when they come, as a matter of course. A turning point in the novel, when Bartley steps over a line and becomes less a hollow man and perhaps more a cad, is when he steals intellectual property from an old and trusted (and trusting) friend, Kinney, “the philosopher from the logging camp.” Kinney was, among many things, a cook at that logging camp in Maine who had befriended Bartley because he saw in him a bright and good-humored person. One evening Kinney shares with Bartley and another friend stories of his exploits during his long and fascinating life. He plans one day to write them down and get them published, but before he can do that Bartley has written them down and had them published himself to wide acclaim. In the process he allows it to be mistakenly believed that the friend who was with him that evening wrote the stories — his friend is allowed to take the blame for the theft of another’s intellectual property when it becomes known. Needless to say, in the process Bartley loses two close friends. But he cares not. Not really; after all, he has lost a number of friends along the way, people who have seen through the facade and don’t like what lies behind. After all, his story was a success and it garnered him a large financial reward. And money is very important to Bartley — along with the prestige it gives him.

The truth slowly comes out about what Bartley has done and he finds himself fired from his high-paying job on one of Boston’s most popular newspapers and set somewhat adrift. He borrows some money from a man he regards as a friend and proceeds to gamble it away. His wife finally begins to see the sort of man she has married and sends him packing, though she immediately regrets it because she can never quite shake the image she has of the man she still loves. It bothers him not, because he can rationalize that what he did is not wrong and others are wrong to persecute him. Bartley is very good at rationalizing and placing the blame on others. As a hollow man he has no center, no principles that might otherwise give his life meaning and direction. This is one reason he remained with his wife as long as he did: she had been very willing to take the blame for his many faults and brush them aside, as they did not fit in with her image of what her husband is.

William Dean Howells is a brilliant novelist and A Modern Instance may be his best work. But in any event, he was prescient: writing soon after the Civil War, he saw that the Bartley Hubbards would become increasingly numerous, men who are hollow at the core and who are lost within the labyrinth of their own diminished self, whose only goal is to seek pleasure and financial ease. And like any great work of literature, there is much food for thought and many insights into the modern, and the post-modern, temper. We can learn a great deal from those old, dead, white, European (or in this case American) men, can we not?

Religion and Mammon

I recently blogged about what I argued was the inverse relationship between morality and wealth, insisting that we have somehow lost the proper perspective — one that the Greeks shared, for example — between wealth and morality. Aristotle, for example, insisted that the accumulation of wealth was a means to an end, not an end in itself, and that too much wealth was a threat to a balanced character. The goal of humans is to be as happy as possible and a certain amount of money is necessary to that end. But when the accumulation of wealth becomes all-consuming it is problematic. In my argument I suggested that the preoccupation with wealth in this country since the Civil War has brought about the reduction of our moral sensibilities. But, it might be asked, what about religion? Folks like Gertrude Himmelfarb insist that Americans are among the most religious people on earth. Doesn’t this undercut my argument somewhat?

I would say not because, as I argued in a blog not long ago, much depends on what we mean by “religious.” If we mean, simply, that many Americans attend church regularly, this is probably true — though there are a great many empty churches in this country and traditional religions are struggling, especially in attracting the young. Furthermore, mere attendance at church hardly sets one apart as a deeply religious person. In any event, William Dean Howells noted in a most interesting novel, A Modern Instance, that religion — even in the late nineteenth century — had become something less than all-consuming and considerably less important than such things as the pursuit of pleasure. Speaking of the small New England town in which his novel was set, the narrator notes that:

“Religion had largely ceased to be a fact of spiritual experience and the visible church flourished on condition of providing for the social needs of the community. It was practically held that the salvation of one’s soul must not be made too depressing, or the young people would have nothing to do with it. Professors of the sternest creeds temporized with sinners, and did what might be done to win them to heaven by helping them to have a good time here. The church embraced and included the world.”

Howells was good friends with Mark Twain and we can see the same scepticism in his novel that I noted in some of Twain’s comments quoted in the previous post. The Civil War did something profound to the ethos of this country. The death of 620,000 young men almost certainly had something to do with it. But the sudden accumulation of great wealth by many who took advantage of the war to turn a profit was simply a sign to others that this was the way to go. And the Horatio Alger myth was born as inventors and business tycoons seemed to appear out of nowhere. The cost to the nation as a whole was the loss of the conviction that it is easier for a camel to pass through the eye of a needle than for a rich man to enter the Kingdom of Heaven. Wealth increasingly became a sign of success and even of God’s favor. As I noted, the relationship between the accumulation of wealth and the sacrifices necessary to pursue one’s sense of duty was turned upside down and instead of looking askance at those with fat pocketbooks, as was the norm for hundreds of years, the rich became instead role models for the rest of the country — and the world, as it happens. Simultaneously, as it were, morality was set on the shelf by many who now insisted that right and wrong are relative concepts and virtue is something to be read about but not to bother one’s head about. It certainly shouldn’t be allowed to interfere with pleasure and the accumulation of as much money as possible.

Dollars and Sense

I am borrowing this title from my senior thesis in college. I have been fascinated since that time (back in the Dark Ages) by the direct relationship between the accumulation of great wealth and the weakening of moral precepts. We are at present witness to the very fact to which I allude in the form of a very wealthy president who has (shall we say?) his own unique take on morality. But this is merely an isolated example and hardly makes my case.

In the pages of a novel by George Eliot in Victorian England around the time of our Civil War, the author pined for a time before the coming of the railroad when:

“reforming intellect takes a nap, while imagination does a little Toryism by the sly, reveling in regret that dear, old, brown, crumbling, picturesque inefficiency is everywhere giving place to spick-and-span new-painted, new-varnished efficiency, which will yield endless diagrams, plans, elevations, and sections, but alas! no picture.”

Perhaps reflecting this same sentiment, Lionel Trilling, in an introduction he wrote in 1950 to an edition of Mark Twain’s Huckleberry Finn, focused on Twain’s observation that the Civil War in this country marked the sudden transition from a mere desire for money to a fixation with it, the growth of greed in this country on a grand scale and the loss of something of major importance, something very much like what George Eliot regretted losing. He also drew on such prominent thinkers as Twain, Henry Adams, Walt Whitman, and William Dean Howells when he noted that

“. . .something had gone out of American life after the war, some simplicity, some innocence, some peace. None of them was under any illusion about the amount of ordinary human wickedness that existed in the old days, and Mark Twain certainly was not. The difference was in the public attitude, in the things that were now accepted and made respectable in the national ideal. It was, they all felt, connected with new emotions about money. As Mark Twain said, where formerly ‘the people had desired money,’ now they ‘fall down and worship it.’ The new Gospel was, ‘Get money. Get it quickly. Get it in abundance. Get it in prodigious abundance. Get it honestly if you can, dishonestly if you must.'”

Now, to be sure, one could go back to John Calvin for the source of the Protestant “work ethic” and the birth of the notion (which has become commonplace among the spiritually certain) that wealth is a sign of God’s love. But, in this country at least, in the early years there was a healthy suspicion about wealth and a concern that too much was not a good thing. Indeed, a preliminary draft of Pennsylvania’s Declaration of Rights included an article that stated:

“. . .an enormous Proportion of Property vested in a few individuals is dangerous to the Rights and destructive of the Common Happiness of Mankind.”

This, perhaps, was a result of the Puritanical view that the love of money is the root of all evil. In any event, nearly all of the colonies had proscriptions, even laws, against the accumulation of too much wealth — laws against such things as primogeniture, for example. After all, that way lies aristocracy and the separation of people into classes. It was frowned upon. It was undemocratic. It was regarded as leading the country in the wrong direction — even by such enlightened thinkers as Thomas Jefferson.

The Civil War marked the radical changing point because, like all wars, there were many technological advances — especially in armament but also in such things as steam engines and the sudden “need” for thousands of miles of railroad tracks and new and faster engines to haul more goods and people to places they wanted to go. And the war made many people, especially in the North, very wealthy. In a word, the Civil War marked the true dawning of industrial capitalism in this country and soon we saw the birth of the Horatio Alger myth that insisted anyone could become fabulously wealthy overnight. The notion that wealth was a sign of God’s favor was now a certainty. And with this certainty much of the simplicity that Trilling and Eliot talk about disappeared and, along with it, the notion that there was moral high ground that was sacred, certainly more important than building miles of railroad tracks and making more money than one can spend in two lifetimes.

To be sure, it is difficult to make a case for the causal relationship between two such diverse factors as great wealth and the decline of morality. But there does seem to be a conjunction between the two. How often are we struck by the generosity and charity shown by the very poor who have nothing and the obsession with money that seems to consume the very rich who never seem to have enough? I ask this as a question, but it is largely rhetorical because the relationship I speak about is evident. And it may help to explain modern man’s “search for a soul” as Jung would have it, and our uncertainty about what truly matters and what is of considerably less importance.