Shifts in American Football

Last night, ESPN radio broadcast a statistic that stunned its reporter. Comparing viewership of Monday’s College Football Playoff National Championship with that of the Wild Card playoff game between Green Bay and the New York Giants, he found that 25 million people watched the national championship while nearly 40 million watched the NFL Wild Card game. He granted that, yes, New York is the country’s largest media market, but the Sunday game was not especially close or well played, while on Monday the other networks aired reruns and filler programming, figuring that most viewers would be glued to the most important college game of the year, a game that turned out to be extraordinarily exciting. His question, then: what happened?

It is a good question, and one I can’t pretend to know how to answer, but it points to a number of possible explanations, some of them not related to football at all.

The College Championship game featured Alabama and Clemson. With no disrespect intended toward either university, we have to wonder whether Alabama’s sheer dominance in recent years has turned off potential viewers. We also have to wonder whether, with two schools from the same geographic region, viewers from other parts of the country simply decided they didn’t care much who won.

Perhaps…but the NFL had its largest market share during the years in which the Dallas Cowboys were dominant, and similar patterns have been the norm in other sports. Sociologists have hypothesized that having a “dynasty” team increases viewership because the public decides either to love or to hate the team that wins all the time. Under that theory, viewership for Alabama versus Clemson should have reached an all-time high.

As for geographical bias, while it may be a more likely explanation, it still seems to fall short. It is difficult to determine with any certainty whether geography influences college sports viewership, because the NCAA changes the parameters for championship participation in its big-money sports on a regular basis. The best comparisons I can find are the years in which two teams from the same conference competed for the national championship in men’s basketball. In each case, there was no appreciable rise or decline relative to the corresponding games in the years before and after.

One final hypothesis I have heard is that the United States has turned away from football in general because of drug allegations, other criminal accusations, and, most critically, concerns about post-concussion syndrome. All of these issues appear in headlines far more often than in the past.

However, an examination of sports pages over the past three months shows that these issues, the very concerns one might expect to drive away viewers, are perceived as a much greater problem for professional football than for college football. If this hypothesis were true, we would expect the college game to outdraw the NFL game by 60%, not the other way around.

So what is it? Less time to generate enthusiasm for a college team whose star players will likely leave after only three years? Too much attention paid to college coaches? I simply don’t know.


The Change in Media Effect (part one)

It is no secret that a lot of hostility is directed at the news media. Immediately following the disclosures of Watergate in the 1970s, few careers seemed more heroic than investigative journalism. How times have changed…but why?

There seem to be four primary elements that have led to this shift. In chronological order, they are:

  1. The elimination of the wall between news and entertainment.
  2. The suspension of the Fairness Doctrine.
  3. The increased concentration of media ownership among fewer and fewer people.
  4. Allowing the disadvantages of high technology to prevail over the advantages.

Let’s look at the first two.

According to Harvard’s Nieman Foundation, the late 1970s saw a global shift in the expectations held for news organizations. Hardly noticed by the general public, this period marked the first time that news organizations, usually connected to or co-owned by entertainment companies, were expected to be profitable ventures. Prior to that time, networks saw news coverage as a civic responsibility. In fact, stations are still required to allow the public to submit commentaries on a station’s success or failure as a prerequisite to renewing a broadcast license. Unfortunately, few citizens realize they have that opportunity, opening the door for “infomercials” and other pseudo-news shows that draw high ratings.

Oh, but there are two ways to increase profits! In addition to increasing revenues, agencies started cutting costs. Typically, a U.S.-based news agency covers the entirety of Africa, a continent with a land mass and a population several times the size of the United States, with just two reporters, usually based in Egypt and South Africa. Whether one is a Republican or a Democrat, does anyone really believe that the Benghazi incident would have unfolded as it did if there had been prior journalistic scrutiny? Or, going back in time, what about blood diamonds? The Rwandan genocide?

When those of us old enough to remember journalism at its peak recall the stories, we remember them as both truly newsworthy and unbiased. Unfortunately, the biggest safeguard against bias has been eliminated: the Fairness Doctrine. Created in 1949 by the FCC, it was eviscerated in 1987 when the FCC itself, under deregulatory pressure, voted to stop enforcing it.

The Fairness Doctrine required that a number of criteria be met before a radio or television station could broadcast a story. The most significant were that a story had to present all major ideological perspectives when called for, that editorials be clearly identified as such when broadcast, and that call-in shows not screen out callers based on their political views. While some challenged the Fairness Doctrine, and any attempt to restore it, as a First Amendment violation, the U.S. Supreme Court ruled (in Red Lion Broadcasting Co. v. FCC, 1969) that, since broadcast stations occupied a portion of the finite electromagnetic spectrum, the government was within its rights to place stipulations upon its use.

By 1987, political pressure and changes in broadcast technology gave opponents of the Fairness Doctrine the ammunition they needed to render it useless. The proliferation of cable, followed later by digitization, meant there was so much bandwidth available that the argument that one station’s broadcast would keep another off the air no longer held. With media becoming more and more corporate, there was little incentive to enforce a policy that would allow anti-corporate interests to be represented. The emergence of commentators such as Rush Limbaugh reinforced this, since their anger and other stylistic elements drew new listeners, usually disaffected workers, like flies to honey. Cracks about the “mainstream liberal media” and “femiNazis,” and comparing teenager Chelsea Clinton to a dog, may have been unfair and terribly shallow, but nothing stood in the way of such remarks. Finally, to this day, staff members are routinely taught to block any caller whose point of view contradicts the star of the show.

Since 2007, there have been efforts to restore the Fairness Doctrine but, besides the fact that one side of the debate has tremendous incentive to block it, two other obstacles remain. One is that the United States has become rabidly anti-regulation in recent years, making it hard to sell any regulatory legislation. The other is the Internet; by its nature, all Internet news falls into a gray area that is neither broadcast nor print. Could it be regulated at all?

Taking partisanship out of economics

Before continuing, let me say that in some ways I regret that this post falls under the category “Politics and Economics,” because its central thesis is that we need to remove political labels from legitimate economic tools.

The U.S. government has three major tools for managing the economy, whether stimulating it when more jobs are needed or slowing it down when inflation runs too high. One of those tools, monetary policy, is controlled by the Federal Reserve Board, which acts autonomously. While the Fed has its critics, its autonomy places it beyond the reach of partisan politics, so monetary policy is not examined in this post.

The older of the remaining tools is fiscal policy, first applied by Franklin D. Roosevelt during the Great Depression. In a nutshell, it means having the government spend money on programs that directly or indirectly put people back to work. We often hear people proclaim that “government would work better if it were run more like a business”; however, this attitude contradicts fiscal policy. In times when more jobs are needed, it is unfair to place the burden of hiring on businesses and, since the government is not profit-driven, it can create jobs when conditions discourage businesses from doing so.

The most recent of the tools is supply-side policy, first applied by Ronald Reagan. It works by cutting the taxes imposed on corporations and the wealthiest individuals. We often hear people proclaim that “it is unfair to give tax cuts to people who are already well off”; however, this ignores a basic reality of income. When wealthier people get tax cuts, they don’t need the money to survive, so they can afford to put it into savings. When lending institutions hold more savings, they have more money to lend to investors who can create new jobs in the private sector.
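The chain just described is easy to sketch with arithmetic. Every number below is invented purely for illustration; the saved share and reserve ratio are assumptions, not real economic data.

```python
# Toy sketch of the supply-side chain described above.
# All figures are invented for illustration, not real economic data.

tax_cut = 100_000_000    # hypothetical total tax cut for high earners, in dollars
saved_share = 0.80       # assumed portion saved rather than spent
reserve_ratio = 0.10     # assumed portion banks hold back as reserves

new_savings = tax_cut * saved_share                  # money flowing into savings
loanable_funds = new_savings * (1 - reserve_ratio)   # money banks can lend out

print(f"New savings:        ${new_savings:,.0f}")
print(f"New loanable funds: ${loanable_funds:,.0f}")
```

The same arithmetic also shows the weakness the post goes on to describe: if the saved share drops because the money is spent on luxury goods instead, the loanable funds shrink proportionally.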

Unfortunately, both of these tools are weakened by politics. Democrats, for the most part, are fiercely loyal to fiscal policy, remembering that Democratic President Roosevelt used it to take us out of the Great Depression. But this loyalty makes the economy ever more dependent on the government to keep it running, at the expense of private companies.

Republicans are correspondingly loyal to supply-side policy, remembering the job creation under Republican President Reagan that came without heavy government involvement. But this loyalty breeds a strong distrust of government regulation, which means many of the jobs created end up outside the United States, and there is no guarantee that the taxes saved won’t be spent on luxury goods that create few new jobs.

In summary, then, both of the major parties need to stop looking at these economic policies in partisan terms. There are politicians who argue that, if corporations are given tax cuts, they should be forced to invest the money in creating American jobs. There are also politicians who have formulated ways to gradually convert government-created jobs into private sector jobs. The time has come for all politicians to cross partisan lines, follow the aforementioned trend-setters, and do what is best for the country.

What should I drink at Thanksgiving?

Ah, yes. You have that fifteen-year-old bottle of Barolo, or a First Growth Bordeaux, or an Oregon Pinot Noir that got a perfect score from Robert Parker, and you are thinking that a festive occasion like Thanksgiving is the perfect time to open it. DON’T!

The problem is that Thanksgiving, with its emphasis on family togetherness and its mixture of foods, plus the fact that most of the people at your table will not be wine aficionados, is perhaps the worst possible time to open a bottle you want to show off.

Setting aside the guests, let’s look at the menu. Turkey, by itself, will go with just about anything you serve, although it is unlikely to create a memorable pairing. The real problem comes from the side dishes. One, if your guests are bringing dishes, you have no idea what will match up with the hodgepodge of food. Two, most Thanksgiving dinners contain dishes that are sweet, and sweetness is the enemy of any good dry wine.

Now, let’s look at your guests. Unless you have a group of people who are all serious wine aficionados, you are going to make most of the people at your table feel uncomfortable if you serve an expensive bottle and they don’t like it. Besides, the center of attention at Thanksgiving should be the love you share with one another.

Does that mean you are doomed to drink something you don’t like? Of course not. Here are some reliable standards I have served over the years that solve for the mixture of foods and avoid any appearance of wine snobbery, starting with whites and rosés:

Spanish Sparkling Wine, aka Cava (I have a soft spot in my heart for Serra, but that’s only because I used to sell it). Just be sure not to get Brut, which is too dry for this meal. An Extra Dry (I know, it sounds wrong, but Extra Dry is actually fruitier than Brut) or a rosé Cava will go well and feel festive.

Riesling. I lean toward Oregon or Washington Rieslings. Of course, if you live on the Atlantic Coast, there are some great ones as well, especially from Long Island. If you feel like splurging, a German Spätlese would be great.

Gewürztraminer. Half the fun of this wine is saying the name, but it’s my personal favorite for Thanksgiving. “Gewürz” means “spicy” in German, and the wine smells of nutmeg, vanilla, cardamom, and other wonderful things, yet it tastes similar to Riesling as you drink it. Beyond the name, it matches well with things like sage-flavored dressing and peppery vegetable dishes.

Rosé. You need to be a little careful here, because there is some really bad rosé out there. When America went through the White Zinfandel craze years back, the market was flooded with stuff that tasted like bubble gum. In fact, my wife was convinced that she hated rosé, period. Well, we survived that craze, and there are a lot of good rosés out there now. In some ways this is an ideal wine to serve, because it doesn’t make a statement, but it still tastes good. What you are looking for is fruit and acid in a nice balance. Wines from Washington, Oregon, and the cooler regions of California fit the bill, as do Tavel, Rosé d’Anjou, and Provence rosés from France. Now, here are my favorite Thanksgiving reds:

Beaujolais. The domestic equivalent of Beaujolais is Gamay; remember, good winemakers in the U.S. never borrow the names of regions to describe their wines, but use the name of the grape instead. Unfortunately, Beaujolais is sometimes difficult to find, due largely to a scandal that hit the region last decade. But the wines today are wonderful, nice and fruity, perfect for people who think they don’t like red wine. I used to be a big fan of Beaujolais Nouveau which, due to the way it is made, is meant to be consumed as quickly as possible after the harvest. I still love it but, unfortunately, it has become a fad wine that is now grossly overpriced.

Barbera. This is sometimes called “the Italian Beaujolais.” Not a terrible description, but it is a bit more full-bodied than Beaujolais, so it might not be as popular with people who don’t like red wines. Since Barbera is the name of the grape, you can find both Italian and domestic Barberas, and they tend to be good values.

Zinfandel. This can be wonderful with Thanksgiving dinner, but be careful! There are some hefty Zinfandels out there that will overpower the food. If the label says Old Vine Zinfandel and/or lists more than 14.5% alcohol, save it for a wine tasting. But if the wine is around 12.5% alcohol, you will get something that smells like blackberries and tastes like a whole variety of berries. Back in the bad old days, people would ask me, “Is there such a thing as a red Zinfandel?” Yes, there is, and that’s what you should be drinking instead of the plonk that was popular twenty years ago.


The Electoral College: should it be abolished?

Not surprisingly, millions of Americans are angry at the Electoral College right now, especially Democrats; the 2016 Presidential election marks the second time in less than 20 years that the Democratic candidate received more popular votes than the winning Republican candidate. Before reacting in anger, though, we should all look at why the Electoral College was created.

In Federalist No. 68, Alexander Hamilton wrote about his fears of having the president directly elected by the people. At the time, literacy was far from universal, and Hamilton felt that many people would respond to purely emotional appeals. What most of us do not realize is that, when the Electoral College was created, each state was left to determine how it would choose its electors; today, of course, all 50 states let their voters determine who the electors will be. This is the primary reason cited for having the Electoral College.

However, two additional arguments support the Electoral College. The first is that, when the Constitution was drafted, the 13 states regarded themselves as autonomous entities; in other words (avoiding political-science jargon), they were much closer to today’s European Union than to the country we know now. To get the framers to compromise successfully, it was vital that each state retain at least some of its autonomy, and this was accomplished by allowing the individual states to proclaim which candidate they supported.

The other supporting argument emerged as the country’s history unfolded. Statistically, the Electoral College tends to exaggerate the winning candidate’s margin of victory. While that may not seem advantageous, it meant that the losing side would be more likely to accept the results of the election and to support the incoming president.
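The exaggeration effect is easy to see with a toy calculation. The three states and all vote totals below are entirely invented; the point is only to show how winner-take-all allocation turns a narrow popular margin into a lopsided electoral one.

```python
# Toy illustration of how winner-take-all elector allocation exaggerates
# the winner's margin. All states and vote totals below are invented.

# state -> (votes for candidate A, votes for candidate B, electoral votes)
states = {
    "State X": (520_000, 480_000, 10),  # A wins narrowly
    "State Y": (510_000, 490_000, 8),   # A wins narrowly
    "State Z": (495_000, 505_000, 6),   # B wins narrowly
}

pop_a = sum(a for a, b, ev in states.values())
pop_b = sum(b for a, b, ev in states.values())
ev_a = sum(ev for a, b, ev in states.values() if a > b)
ev_b = sum(ev for a, b, ev in states.values() if b > a)

# A's popular margin is under 2 points, yet A takes 18 of 24 electors (75%).
print(f"Popular vote: A {pop_a / (pop_a + pop_b):.1%}, B {pop_b / (pop_a + pop_b):.1%}")
print(f"Electors:     A {ev_a}, B {ev_b}")
```

With different invented numbers, the same mechanism can even flip the outcome entirely, which is what happened in 2000 and 2016.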

So, given the three arguments above, should we abolish the Electoral College and replace it with the popular vote? Let’s take the arguments in turn.

In response to Alexander Hamilton’s fears: we are already more democratic than he wanted the country to be, and our literacy rate is well over 98%. But does that make voters any less likely to respond to emotional appeals, given the prevalence of social media and the near-total lack of limitations on campaigns?

In response to the second concern: we are no longer a country of autonomous states…at least not officially. I would argue that we still have a cultural divide that seems to be growing, and that the Electoral College directly exaggerates that divide, perhaps even causes it. For instance, Oregon is labelled a “blue state” and Texas a “red state,” but a closer look shows that the difference between registered Democrats and Republicans is less than 2% in both states.

The final concern, of course, blew up in our faces in both 2000 and 2016. George W. Bush’s administration was weakened by the disparity between the electoral and popular votes, and we are already seeing widespread refusal to accept president-elect Trump. So much for exaggerating the size of the victory and generating extra support for the winner!

So, given all of the above, I lean toward abolishing the Electoral College. The remaining question is how a popular vote would be conducted. Most observers believe a popular-vote system creates greater opportunities for third parties, with the possibility that, if no party received a majority, two or more parties would have to work together to form one; such systems operate in most parliamentary democracies. Another possibility would be a run-off if there were no clear majority; and there are many other options to consider.

Here is a link to an excellent article on the Electoral College, whose author takes the opposite side from my conclusion:

Please note that I am not the one who is referring to Trump as a demagogue.

Common Sense in Free Speech (continued)

Previously, I identified three problems in the American electoral system that seemed most in need of repair:

  1. Follow the example of the United Kingdom, and limit the amount of time campaigns are allowed to run.
  2. Overturn SCOTUS rulings that have broadened “free speech” far beyond the intent of those who wrote the Bill of Rights.
  3. Overturn the SCOTUS ruling that defines corporations as people.

In my previous post, I analyzed the first two. Today, here are my thoughts on the third. First, some background.

The concept of “corporations as people” came from the SCOTUS ruling Santa Clara County v. Southern Pacific Railroad Company (1886) and, frankly, it was necessary. Without the ruling, corporations would have no legal standing in the courts. This should be welcomed across the political spectrum: it guaranteed that corporate investments enjoyed appropriate legal protection, and it guaranteed that corporations could be sued for environmental damage or other harms. So, how did we get from this sensible ruling to our current dilemma, the closest thing yet to a unifying grievance in American politics?

As much as I am loath to attack anyone so recently deceased, the primary blame rests squarely at the feet of the late Justice Antonin Scalia. He led SCOTUS in a leap of illogic that would rival the world record for the long jump.

Justice Scalia was a brilliant man and, by all accounts, extraordinarily charming and charismatic. During his time on the bench, he was frequently called upon to write majority opinions and even his concurring opinions carried great weight in guiding fellow justices. Unfortunately, he behaved hypocritically in two vital areas.

The first was that he proclaimed a philosophy based on upholding the original intent of the Constitution; however, in striking down parts of the McCain-Feingold Act and then siding with Citizens United, he behaved like a judicial activist. Apparently, judicial restraint was to be invoked only when it supported Scalia’s opinions.

This brings us to the second hypocrisy: the expectation that judges, especially Supreme Court justices, set their politics aside when taking the bench. (Ironically, SCOTUS is the only level of the federal judiciary that lacks written guidelines for ethical behavior, largely because no one ever suspected that anyone on the high court would need them.) Now, this is not to say that holding political views should disqualify someone from serving on SCOTUS; former President William Howard Taft and former California governor Earl Warren were both successful Chief Justices. But Scalia was quite open about his continued political activism: he maintained his membership in the Federalist Society and other partisan think tanks, and gave numerous speeches foreshadowing how he believed SCOTUS should rule in pending cases. With his intellect and charisma, he would have made a terrific senator, even president, but he had no business behaving this way as a member of SCOTUS.

And so he took the precedent of Santa Clara County and stretched it further and further until we ended up with corporations designated as people. Here is an analogy: in 2015, national polling showed that 97% of Americans felt that animals needed legal protection, with almost two-thirds favoring stricter criminal penalties for those who abuse animals. I think of my beloved canine and feline companions and am in complete agreement; however, I don’t think this means my dog has turned into a human being.

The good news is that we may be closer to overturning Citizens United than once seemed possible. Most people believed that the election of Clinton to the presidency would guarantee the appointment of a justice who would join the original four dissenters in reversing the case; but there is a distinct possibility that this could still happen under a Trump presidency. Looking at Trump’s list of potential appointees, even the most right-leaning candidates have shown disdain for political activism and, given that Citizens United is a direct slap in the face to the populists who elected him, it is difficult to imagine Trump appointing someone who would support the ruling. The question, then, is whether the appointed justice would feel bound enough by stare decisis to be reluctant to reverse the previous ruling.

Frankly, the greatest danger to overturning Citizens United comes from the U.S. Senate, where senators from both parties have benefited from the contributions that are now legal. Will they vote their consciences on the nominee? Will they heed the anger Americans have shown toward “business as usual”?

FOOTNOTE: The Citizens United case originated from a propaganda film aimed at demonizing Hillary Clinton. The ruling also reinforced the broadening of free speech to allow virtually no controls on slander and libel against political candidates, addressed earlier as item #2 on the list of possible remedies; overturning it would help rein that in. And, lest anyone start feeling smug, there are a number of slanderous websites attacking Donald Trump as well.


Common Sense in Free Speech

If we look at the entirety of the just-completed election, one thing seems clear: a record number of Americans are fed up with “politics as usual,” and we don’t need to dig deeply to find proof. Polls show it but, even more compelling, the greatest total enthusiasm prior to November 8th was generated by Donald Trump and Bernie Sanders, two men whose political views are almost polar opposites. The one thing they share is the message that business as usual is no longer acceptable.

In exit poll interviews, many voters indicated that Trump’s personality and comments were offensive, but their desire for change outweighed that. And how many times did we hear that there was a sense of “choosing the lesser evil” in the presidential vote? Personally, I don’t find Hillary Clinton evil at all, but I sense that a lot of people associated her so strongly with the Washington status quo that they were willing to believe even the most outlandish statements made about her.

So, where do we go now?

I see three steps that can be taken. The first two are the most important; adopting even one of them would be an improvement. The steps are:

  1. Follow the example of the United Kingdom, and limit the amount of time campaigns are allowed to run.
  2. Overturn SCOTUS rulings that have broadened “free speech” far beyond the intent of those who wrote the Bill of Rights.
  3. Overturn the SCOTUS ruling that defines corporations as people.

Today, I will write about the first two. If they were both enacted, #3 would become a moot point.

OVERVIEW: in reading the Federalist Papers and other primary sources, it is clear that the framers’ intent for free speech was to protect political dissent. OK, but SCOTUS has taken that so much to heart that it has failed to keep the freedom in its intended context. We can safely conclude that free speech was to be protected when it advanced the political dialog; it was never intended to protect ad hominem attacks or to encourage mindless babble (see Federalist Nos. 10–19 and the letters of James Madison, Alexander Hamilton, and Thomas Jefferson; considering the antagonism between Hamilton and Jefferson, their concurrence on this issue is especially notable).

Regarding the first issue, we have plenty of evidence that political campaigning can be limited. Limits already in place have all withstood judicial scrutiny. We can surmise that the framers never thought to impose time limits because the spread of information was so slow in 1791, but communication speed is no longer an issue. The United Kingdom limits campaigns to roughly 30 days; given the physical and population size of the United States, our limit should probably be between 60 and 90 days.

There are many advantages to this, but two leap to mind as most important. One, a limit on campaign time would limit the amount of money poured into elections, money that could be put to far better use and that leaves the 99.8% of Americans who are not wealthy feeling disenfranchised and angry. Why? Because it would be senseless to pour money into a campaign the candidates and parties wouldn’t have time to spend; even without any other legislation, this would level the playing field. Two, it would tend to force candidates to concentrate on issues instead of insults, because they would otherwise be unable to give people adequate reasons to vote for them. It wouldn’t eliminate negative campaigning, but it would reduce it.

Regarding the second issue: starting in 1952 (Wieman v. Updegraff), SCOTUS cited the “chilling effect” that libel suits would have when brought against political candidates. Rightfully so, the justices feared that lawsuits based on alleged defamation of character would cripple the open political discourse the framers intended. If this were still the early 1950s, I would be inclined to agree; but let’s look at what has changed in the past 64 years:

  1. News agencies are no longer run without regard for profit; that wall began to crumble in the late 1970s.
  2. Journalists no longer adhere to the accepted code of ethics that demanded two independent confirmations before a story was published or broadcast.
  3. Social media’s dark side is that it allows for unsubstantiated rumors to be circulated almost instantly, and very few Americans take the time to examine the validity of the sources.
  4. “Legitimate” news sources are now owned by a select number of corporations, instead of being independently owned.

Combine all of these, and the net effect is broadcasters competing to get rumors onto the airwaves as fast as possible, knowing they need good ratings and not wanting to be scooped by the Matt Drudges of the world. We also have a general public that tends to read things on social media and assume they are true, treating those words as published when, in truth, they are merely words typed by anybody. For example, who the hell is Publius17? A learned expert in political science, or some crank on the corner barstool, chugging beer?

Given all of this, the prohibition on legal action for defamation, simply because the defendant is running for office, is an outdated concept. Comparable republics (most of the EU, for instance) allow defamation lawsuits and have seen no chilling effect on political discourse.

Taken together, these two reforms would produce campaigns that stayed on point and avoided the mudslinging. That alone would be a massive improvement to our current election system.