Monday, October 28, 2013

Employer-Provided Health Insurance: A Uniquely American Blunder


Most Americans nowadays--myself included--get health insurance through their employers. At last count, about 60% of non-elderly Americans obtained their insurance this way, although that number has been falling for years and shows further signs of dropping--in no small part due to some of the provisions in Obamacare. Some of you have no doubt read stories about how companies like Walgreens and Trader Joe's are dropping employees from their health plans and pushing them into the health insurance exchanges. And contrary to popular belief, that's a very good thing. Indeed, health policy wonks from across the political spectrum--from Milton Friedman to Austin Frakt--are very much in favor of completely eliminating employer-provided health insurance (EPI).

And why shouldn't they be? No other industrialized country does it quite like this. If we're being honest, the system of EPI wasn't devised as some carefully-crafted public policy--it was an accident brought about by wage controls during World War II. What happened was that the National War Labor Board capped all wage increases, but there was a loophole: the caps didn't apply to employee benefits. So employers started competing for labor on the basis of health insurance plans instead of higher wages. Then, in 1954, the tax code made the arrangement even more attractive by formally excluding employer-provided health benefits from taxable income. And the rest, as they say, is history, with EPI being subsidized by taxpayers to the tune of around $300 billion in 2012.

But as I said before, this system is uniquely American, and it is also uniquely problematic for a wide variety of reasons. First and foremost, it makes opaque a system that, like all other things in a market economy, would benefit greatly from a healthy dose of transparency. Specifically, it masks who actually pays for an employee's health care, which makes employees less cost-conscious when consuming care (you always spend more when it's other people's money, right?), which in turn drives up health spending. Furthermore, employers, rather than employees, choose health plans for large, diverse groups of people with varying health care needs, while the actual costs remain largely invisible to the people on those plans. As a result, the rising cost of health care driven by this over-consumption and lack of cost-consciousness has depressed wages, which get squeezed to keep paying for insurance that often includes coverage for things the employee doesn't even want or need. This graph should help illustrate my point:



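If you'd rather see the arithmetic than squint at a chart, here's a minimal sketch of the mechanism. Every number in it is made up purely for illustration--the starting compensation, the premium, and both growth rates are assumptions, not data:

```python
# Illustrative only: how rising premiums can eat into cash wages when
# total compensation grows at a fixed rate. All numbers are made up.

total_comp = 60_000      # hypothetical total compensation (wages + premium), year 0
premium = 12_000         # hypothetical employer-paid family premium, year 0
comp_growth = 0.03       # assumed annual growth in total compensation
premium_growth = 0.07    # assumed annual growth in health premiums

for year in range(11):
    wage = total_comp - premium
    print(f"Year {year:2d}: total comp ${total_comp:,.0f}, "
          f"premium ${premium:,.0f}, take-home wage ${wage:,.0f}")
    total_comp *= 1 + comp_growth
    premium *= 1 + premium_growth
```

Because the premium compounds faster than total compensation, take-home pay grows far more slowly than the headline 3 percent--which is the basic squeeze the chart is showing.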
In addition to depressing wages, the existence of EPI creates a situation where employees are reluctant to quit their jobs (job lock, in wonk parlance) to, say, start a new business or find a job that better suits their skills--because they'd lose their insurance. One study found that employee turnover was reduced by a whopping 25% because of this! On a more practical level, changing insurers every time you change jobs--and the resulting change of health care providers that often follows--also makes for much poorer quality of care, because it makes care coordination and early diagnosis that much harder. It costs more, to boot. To be sure, the Affordable Care Act does largely ameliorate the former problem; the latter one, not so much.

Given such significant downsides to America's system of health insurance, why does it persist? My guess is that it's a sort of status quo bias, where any change from the present state of affairs is seen as a loss, by both ordinary people and policymakers. As Milton Friedman once put it, we "regard it [employer-provided insurance] as part of the natural order." Indeed, President Obama famously quipped, "If you like your insurance plan, you will keep it. No one will be able to take that from you." Michele Bachmann complained that Obamacare will cause millions to lose their employer-provided insurance coverage. She's right about that, but this is one loss I won't be weeping over.

Thursday, October 24, 2013

Out of Sight, Out of Mind


Austin Frakt over at The Incidental Economist had a good post today about how Ted Cruz's spokeswoman argued that, because the Senator was on his wife's employer-provided health plan, it didn't cost taxpayers any money. In his post, Frakt points out the obvious mistake--all employer-provided insurance is taxpayer-subsidized (to the tune of about $180 billion a year). After reading his post, I realized that this mistake on the part of Cruz's spokeswoman reflects a larger problem: many people fail to realize (or, more often, deliberately obscure) that just because a program or policy has no direct cost to taxpayers doesn't mean it costs no one anything.

Take, for example, the U.S. sugar quota, which strictly limits the amount of foreign sugar we're allowed to import. There's nothing sweet (I'm sorry, I couldn't resist) about this policy--in 2011, it cost U.S. consumers an estimated $3.86 billion. Yet the American Sugar Association has the gall to insist that "U.S. sugar policy ran at zero cost to American taxpayers from 2002 to 2012." Zero cost indeed. To reiterate my point, here's a graph of U.S. vs. world sugar prices:



In any case, you get my point: just because the U.S. government doesn't directly funnel taxpayer dollars to the sugar industry doesn't mean that a quota doesn't cost Americans any money. Policies like this are everywhere in the U.S., and in case after case they amount to a clumsy attempt to achieve a policy goal without a visible tax.
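To see where an estimate like that $3.86 billion comes from, here's a rough back-of-the-envelope sketch. The mechanism is simple: the quota keeps the U.S. price above the world price, and consumers pay that wedge on every pound of sugar they buy. The prices and consumption figure below are illustrative assumptions, not the actual inputs behind the published estimate:

```python
# Back-of-the-envelope cost of a quota to consumers: price wedge x quantity.
# All inputs are illustrative assumptions, not official statistics.

us_price = 0.38         # assumed U.S. wholesale sugar price, dollars per pound
world_price = 0.26      # assumed world sugar price, dollars per pound
consumption_lbs = 22e9  # assumed annual U.S. sugar consumption, pounds

price_wedge = us_price - world_price
consumer_cost = price_wedge * consumption_lbs

print(f"Price wedge: ${price_wedge:.2f} per pound")
print(f"Implied annual cost to consumers: ${consumer_cost / 1e9:.1f} billion")
```

Even with deliberately round numbers, the result lands in the billions of dollars per year--none of which shows up as a line item in the federal budget.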

The list goes on and on with this sort of thing: instead of taxing gasoline, we insist on higher fuel economy standards for automakers. Instead of using tax dollars to provide health care to workers, we insist that employers provide health insurance to their employees--but not the smaller companies, because that wouldn't be fair. The problem with a lot of these policies is that they end up having unintended consequences and are far less transparent and far more complicated in their mechanisms than a simple taxes-and-transfers scheme would be. Milton Friedman was right when he said that there's no such thing as a free lunch, and it holds true in nearly all things, from the sugar quota to the employer mandate. The sooner we realize that, the sooner we can stop trying to get something for nothing and start crafting better policies.

Monday, October 21, 2013

Everything You Need to Know About the Eurozone Crisis


With the onset of the global financial crisis of 2008, the U.S. economy took a steep nose-dive into recession. The economy of the Eurozone, much of which was likewise riding a housing bubble, collapsed as well. However, the nature of Europe's monetary union gave the region's crisis a unique character. More specifically, the monetary union and unified currency played a large part in creating what has now become known as the Eurozone Crisis, in which peripheral nations such as Greece, Ireland, Spain, Italy, and Portugal (the GIPSI nations) have found it increasingly difficult to finance their government deficits in global bond markets. While leaders on both sides of the Atlantic have sought to explain the Eurozone Crisis as a story of fiscal profligacy and overgrown welfare states, their narratives simply run counter to the available evidence.

Before attempting to explain the current crisis, though, I've always found it helpful to contextualize matters with a bit of history. In 1999, member states of the Eurozone gave up their independent currencies and adopted a single currency--the Euro (notes and coins followed in 2002). In so doing, these nations also effectively relinquished their independent monetary policies. The single currency fueled a false sense of security in the core Eurozone nations, leading French and German banks to lend large amounts of money to banks in the European periphery--because hey, it was all the same currency, so nothing could go wrong, right? This, in turn, created large property bubbles in the periphery, similar to the one we had here in the U.S. Those bubbles, formed concurrently with their American counterpart, fueled a period of rapid growth in the GIPSI nations--growth financed primarily by rapidly rising private-sector debt, the flip side of those massive core-to-periphery capital flows. As a result, wages and prices in the GIPSI nations rose substantially faster than those in core European countries:



Note the large gap between Germany and the periphery nations. In any case, these higher wages and price levels made the periphery economies deeply uncompetitive compared to Germany, with their products becoming far more expensive on the world stage and their exports falling as a result. When the housing bubble popped, private spending sharply declined, and the inevitable cyclical (recession-induced) deficits arose as governments found themselves with far lower tax revenues and far higher expenditures, with no prospect of exporting their way to an economic recovery.

In response to this severe crisis, the European Central Bank (ECB), like most major central banks at the time, cut interest rates in an effort to increase liquidity in the markets and spur a recovery. While the downturn hit virtually every Eurozone nation, the subsequent recoveries spanned from fairly robust in Germany to nonexistent in the periphery. Germany, with its persistently large current account surpluses, traded its way to a vigorous recovery, with unemployment peaking just above 8% in 2009 and falling to 5.3% at last count. In stark contrast, the unemployment rates of the GIPSI nations remain well above any measure of full employment and show few signs of falling.


Nevertheless, the ECB, which was (in theory) supposed to conduct appropriate monetary policy for all of the Eurozone, raised interest rates in mid-2011, sending the GIPSI nations' bond yields soaring and pushing the sovereign debt crisis into a much more acute stage.

What the ECB's actions indicated to the bond markets was that it sought to conduct monetary policy that was appropriate not for the whole of the Eurozone, but for Germany alone. In order for a government to be able to borrow at low interest rates, as America can, bond markets must have confidence in the ability of that government to pay back what is owed. What the ECB demonstrated is that Greece, Ireland, Spain, Italy, and Portugal lack a central bank willing to stand by their respective economies. As a result, the future economic prospects of these nations are correspondingly diminished in the eyes of the bond market. That's why they pay high interest rates on their debt. And it's precisely for this reason, for example, that the United Kingdom is able to borrow at a lower rate of interest than Spain despite having both a larger debt-to-GDP ratio and a larger deficit.


Simply put, the bond market understands that the Bank of England will do what is necessary for Britain's economy to thrive and produce the tax revenues needed to pay off its debts. Spain—and indeed the rest of the GIPSI nations—is, by contrast, left with a central bank that has demonstrated to bond markets that it is more concerned with Germany's economy than with their own.

Ultimately, the critical lesson that ought to be drawn from this ongoing crisis is that its causes are largely not fiscal in nature; they stem from the flawed structure of the European monetary union itself. Unfortunately, leaders in both the United States and across Europe have subscribed to the theory that fiscal profligacy and overly large welfare states on the part of the GIPSI nations are the root causes of this crisis.

An easy way to debunk this thesis is to compare public social expenditures (the size of the welfare state) in 2007 and average budget deficits among Eurozone nations between 1999 and 2007. Looking at the chart below, it quickly becomes clear that the size of the GIPSI nations' welfare states is not a root cause of the crisis, as Germany's welfare state on the eve of the financial crisis in 2007 was larger than any of those found in the debtor nations:


The more widely cited theory among European policymakers is that the GIPSI nations engaged in excessive government spending during the years leading up to the current crisis. However, an examination of average budget deficits between 1999 and 2007 shows that this theory just doesn't stack up:


To be sure, Greece did run large deficits, but all other nations showed few signs of profligate spending. Italian deficits were roughly equal to those in France, yet France is not facing Italy’s current borrowing costs. Further discrediting this theory is the fact that Spain and Ireland both ran budget surpluses during the years leading up to the crisis, with their large budget deficits arising as a result of the economic crisis, rather than being its cause. 

This misdiagnosis on the part of policymakers has led them to demand fiscal austerity in the face of high unemployment as the price of numerous bailout packages. However, as indicated before, the true problem facing many of the GIPSI nations is that their labor costs and price levels grew wildly out of line with those of Germany and other core nations, creating massive trade deficits:


Ordinarily, a nation would rectify this problem simply by having its central bank print money to devalue its currency, thus making its exports competitive again. However, because of the monetary union, the GIPSI nations don't have currencies of their own to devalue, and the ECB's generally timid and ineffective policy responses have done little to close the gap.

At the end of the day, the Eurozone faces three choices. First, the ECB can opt to pursue a policy of somewhat above-target inflation, such that prices and wages in places like Germany rise to close the gap with those in the periphery. This option would be the most likely to preserve the Eurozone and restore the periphery's competitiveness. It would also serve as economic stimulus to the entire Eurozone, because a continent of growing economies is a lot more beneficial to Germany than one made up of nations in crisis. In turn, the GIPSI nations would be able to stage a more robust, export-led recovery, which in and of itself would restore confidence in the bond markets as well as improve their budgetary outlooks. With robust recoveries underway, these nations could then turn to balancing their longer-term budgets, rather than the other way around.


The second choice would be a Eurozone break-up or a peripheral nation's exit from the union. This option, though politically disastrous, would allow an exiting GIPSI nation to devalue its restored national currency enough to regain competitiveness, but it would also likely lead to bank runs in the exiting nation and cause turmoil across financial markets for some time afterward. The likelihood of an exit is pretty hard to pin down, since you never really know how long a country is willing to put up with such unacceptably grueling economic conditions before doing something extreme.

The third option, which is the one the Eurozone is currently pursuing, is "internal devaluation," as economist Paul Krugman has dubbed it, in which the periphery nations adopt structural changes and austerity measures and slowly suffer through deflation (falling prices) until their exports regain competitiveness. Because wages are "sticky" downward (that is, workers are far less willing to take pay cuts than pay raises), and prices are sticky in turn, this process would be tremendously slow and painful, with the debtor nations facing years of catastrophically high unemployment and all of the associated human costs.
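To get a feel for why option three is so much slower and more painful than option one, here's a toy calculation. Assume the periphery's prices need to fall about 20 percent relative to Germany's--the size of the gap and all the inflation rates below are assumptions for illustration, not estimates:

```python
# Toy comparison of "external" vs. "internal" rebalancing in a currency union.
# The 20% competitiveness gap and all inflation rates are illustrative assumptions.

import math

def years_to_close_gap(core_inflation, periphery_inflation, gap=0.20):
    """Years until the periphery's relative price level falls by `gap`,
    given annual inflation rates in the core and the periphery."""
    # The periphery's relative price level shrinks by this factor each year.
    annual_ratio = (1 + periphery_inflation) / (1 + core_inflation)
    return math.log(1 - gap) / math.log(annual_ratio)

# Option 1: the ECB tolerates higher core inflation (say 4% Germany, 1% periphery).
print(f"Higher core inflation: {years_to_close_gap(0.04, 0.01):.1f} years")

# Option 3: internal devaluation (say 1% Germany, -0.5% periphery deflation).
print(f"Internal devaluation:  {years_to_close_gap(0.01, -0.005):.1f} years")
```

Under these made-up numbers the inflation route closes the gap roughly twice as fast as the deflation route--and given how hard it is to make wages actually fall, the internal route would in practice take longer still, with high unemployment the whole way.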

Indeed, while Mario Draghi kept the Euro from completely falling apart when he pledged to buy unlimited quantities of GIPSI government bonds, there's a hell of a difference between staving off an acute crisis and staging a recovery--and staving off crisis seems to be as far as the ECB is willing to go. Even though Europe has faded from the limelight in recent months, it is very much still in crisis. For Spaniards and Greeks, 26% unemployment doesn't just fade away. For Italians, 12% unemployment isn't just something you forget about. Sooner or later, something's gotta give.

Wednesday, October 16, 2013

Back to the Future--Eurozone Edition


Now that the debt ceiling and government shutdown crisis has basically been averted, I thought I'd take a bit of time to talk about something other than the sorry state of affairs in our government and instead talk about the sorry state of affairs in the Eurozone. What actually spurred this post was a passage from Paul Krugman's 1996 book Pop Internationalism. In it, Krugman discusses something called the European Exchange Rate Mechanism (ERM for short). Your first question is probably "what's the ERM?"

In a nutshell, the ERM was essentially the precursor to the Euro, with a few key differences. First, instead of adopting a single currency, the participating nations effectively fixed the values of their currencies to the German mark. Without getting too technical, they'd do this by having their central banks buy and sell marks (and adjust interest rates) to maintain a steady value for their own currencies. In any case, the key insight here is that every country that chose to do this gave up its ability to conduct independent monetary policy. To use an example closer to home, imagine if the Fed couldn't cut interest rates during a recession or raise them to head off inflation.
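For the mechanically inclined, here's a heavily stylized sketch of what "buying and selling marks" to defend a peg looks like. None of these numbers reflect actual Bundesbank-era procedures or data; the point is simply that a central bank defending a peg against sustained outflows eventually runs out of reserves:

```python
# Stylized sketch of defending a currency peg by selling foreign-currency reserves.
# All numbers (reserves, outflow, peg) are invented purely for illustration.

reserves_marks = 100.0   # hypothetical stock of German marks held as reserves
peg = 2.0                # hypothetical fixed rate: 2 units of home currency per mark
weekly_outflow = 12.0    # hypothetical marks demanded by markets each week

week = 0
while reserves_marks > 0:
    week += 1
    # The central bank sells marks (and buys its own currency) to hold the peg.
    sold = min(weekly_outflow, reserves_marks)
    reserves_marks -= sold
    print(f"Week {week}: sold {sold:.0f} marks, reserves left {reserves_marks:.0f}")

print("Reserves exhausted: raise interest rates, borrow reserves, or abandon the peg.")
```

That last line is the whole story of 1992 in miniature: once the reserves and the political will run out, the peg breaks.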

So why did they do this? Well, on the face of it, it was an effort to further unify Europe into a common market. They chose the German mark because the Germans, owing to their unfortunate history with inflation (see: Republic, Weimar), were exceptionally vigilant in fighting it--and inflation, during the 1980s, was a serious problem in most of the industrialized world. So by pegging their currencies to the mark, these countries also sought to bring inflation down to more tolerable levels. And after a bumpy start in 1979, things seemed to be going well for Europe. Inflation was down across the board:
Without getting too bogged down in the details, things went fairly well in Europe until the fall of the Berlin Wall. With Germany unified, the cost of supporting and rebuilding the eastern half of the country amounted to a massive fiscal expansion on the part of the Germans. To head off the inflation that would inevitably result, Germany's central bank, the Bundesbank, raised interest rates sharply. The problem was that the rest of Europe, in keeping with ERM rules, was also forced to raise interest rates to keep its exchange rates in line with the Deutsche mark. But without the fiscal expansion that Germany was pursuing, this produced a severe recession across the continent, in which the UK was ultimately forced to leave the ERM and sharply devalue the pound.

At this point, some of you may be picking up on some eerie parallels. Just to recap: starting in 1979, several European nations forfeited their monetary independence to Germany to further unify Europe and help fight inflation. It worked well for about a decade, and then there was a crisis. In that crisis, Germany pursued a monetary policy that was appropriate for itself but not for anyone else in Europe.

Tweak a few of those details and you'd be looking at the Eurozone today. In a nutshell, the Eurozone was formed when several European nations forfeited their monetary independence and adopted the Euro. The Eurozone's monetary policy is set by the European Central Bank, which is in Frankfurt, Germany. Things went pretty well for about nine years, from 1999 to 2008, but then a severe crisis set in. The ECB cut interest rates during the early part of the recession, but then decided that, since Germany's unemployment rate was below 6% and falling, it should raise interest rates in the spring of 2011 to head off inflation, triggering one of the most acute phases of Europe's crisis to date. A crisis, I might add, that is still very much a reality, in spite of being out of the limelight for many months.

In short, Europe has learned nothing from the lessons of its past. It threw itself all too willingly into a monetary system that its policymakers knew from experience probably would end in a crisis. And that might just be the most damning thing of all.

Monday, October 14, 2013

Why Delay the Employer Mandate But Not the Individual Mandate?


Last week, Jon Stewart (whom I generally like a lot) had Kathleen Sebelius on his show and asked her a very pointed question:
"Would you say that’s a legitimate criticism that an individual doesn’t get to delay it [the mandate], but a business does? Is that not legitimate?”
He was, of course, referring to the decision by the Obama Administration to delay the employer mandate by a year while not doing so for the individual mandate. If this sounds like a familiar line of argument, that's because it is. Indeed, the GOP used it to justify delaying the individual mandate for a year in one of their ill-fated government funding offers a few weeks ago, with John Boehner saying:
"This is indefensible. Is it fair for the president of the United States to give American businesses an exemption from his health care law's mandates, without giving the same exemption to the rest of America? Hell no, it's not fair."
On the face of it, this may sound unfair. I would argue that's not a question that can really be answered one way or the other--not honestly, anyway. The question of fairness aside, there are certainly reasons why one mandate was delayed while the other was not.

So why was the employer mandate delayed? Well, first and foremost, because the employer mandate just doesn't matter all that much in the grand scheme of things. Delaying it doesn't, in any appreciable way, disrupt the proper functioning of the law.

The employer mandate, for those of you still unfamiliar with it, states that all employers with 50 or more full-time (30+ hours/week) employees must provide them with health insurance that meets certain affordability standards. Now that may seem like it would affect a lot of people, but consider the following: only about 25% of employers in the U.S. actually have over 50 full-time employees, and of those, roughly 96% already provide health insurance. All told, RAND estimates that about 1,000 firms employing about 300,000 people will be affected by the mandate. That's roughly 0.2% of the U.S. workforce. Now you see why the employer mandate could be delayed. But why was it ultimately delayed? Truth be told, we've never gotten a terribly coherent answer, other than that the regulatory complexities involved in implementing it were numerous.
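As a quick sanity check on the scale of those RAND numbers (the labor-force denominator of roughly 155 million is my own ballpark assumption, not a figure from the study):

```python
# Rough scale of the employer mandate's reach, using the RAND figures cited above.
# The labor-force denominator (~155 million) is my own approximate assumption.

affected_workers = 300_000        # RAND estimate cited in the text
affected_firms = 1_000            # RAND estimate cited in the text
labor_force = 155_000_000         # assumed size of the U.S. civilian labor force

share = affected_workers / labor_force
print(f"Workers affected: {affected_workers:,} across ~{affected_firms:,} firms")
print(f"Share of the workforce: {share:.2%}")   # roughly 0.2%
```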

On the other hand, the individual mandate is not complex. It stipulates that, barring severe economic hardship, you are required to carry health insurance. Period. The economic reasoning for this has been fleshed out at length. Simply put, though, the reason the individual mandate hasn't been delayed is that it cannot be delayed if the law is to function properly.

In economics, we'd say that delaying the mandate would create an adverse selection problem. Put in layman's terms: if insurers are to stop discriminating based on pre-existing conditions while sicker people enter the risk pools in larger numbers, then the young and the healthy--who might otherwise not purchase insurance--have to enter those pools too. Otherwise, the risk pools become saturated with sick people, driving up premiums in a self-perpetuating cycle that ends in the dreaded "insurance death spiral." Graphically, it looks something like this:


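And since a chart can only say so much, here's a toy simulation of that spiral. Every parameter in it--the size of the pools, the claim costs, how price-sensitive healthy customers are--is invented solely to show the mechanism, not to estimate anything about the actual exchanges:

```python
# Toy model of an adverse-selection "death spiral" in a guaranteed-issue market
# with no mandate. Every parameter here is invented purely for illustration.

healthy_pop = 900_000     # hypothetical healthy people in the market
sick_pop = 100_000        # hypothetical sick people (assumed to always buy coverage)
cost_healthy = 2_000      # assumed annual claims per healthy enrollee
cost_sick = 20_000        # assumed annual claims per sick enrollee
max_willingness = 6_000   # assumed top willingness-to-pay among the healthy

premium = 3_000           # assumed starting premium
for year in range(1, 8):
    # Healthy people buy only if the premium is below what coverage is worth to
    # them; willingness-to-pay is spread evenly from $0 up to the maximum.
    share_buying = max(0.0, (max_willingness - premium) / max_willingness)
    healthy_enrolled = int(healthy_pop * share_buying)
    enrolled = healthy_enrolled + sick_pop
    # The insurer sets next year's premium at break-even for this year's pool.
    premium = (healthy_enrolled * cost_healthy + sick_pop * cost_sick) / enrolled
    print(f"Year {year}: healthy enrolled {healthy_enrolled:,}, "
          f"break-even premium for next year ${premium:,.0f}")
```

Run it and the healthy enrollees vanish within a few years while the premium races toward the average cost of the sick. Add a mandate (or sufficiently generous subsidies) and the healthy stay in the pool, the premium stays near the blended average, and the spiral never gets started.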
That's why we need the individual mandate. And that's the answer Kathleen Sebelius should have given Jon Stewart last week. Instead, they got mired in an ultimately unsatisfying and quite honestly meaningless debate over fairness. None of it even mattered.

Look, the only thing that is of any consequence here--the only thing anyone should be focusing on--is the fact that the individual mandate is what keeps insurance affordable and available for everyone under the law. That's why it's fair. Everything else just misses the point.

Wednesday, October 9, 2013

The "Doctor Shortage" Argument Against Obamacare


You've all heard it before: Obamacare will strain our already limited supply of primary care doctors by creating an influx of new patients. As a result, wait times will significantly increase for all manner of routine procedures and medical appointments. Okay, there's nothing per se wrong with this argument, but it raises another question: how is this problem any more specific to Obamacare than, say, a conservative's idea of health care reform that also insures more people?

Think about it: a doctor shortage arises for one of two reasons. The first is that the number of doctors shrinks. The second is that the number of patients grows. Either way, you end up with more patients per doctor, and longer wait times. No one really disputes that. No one being honest, anyway. But here's where I have a problem with this thinking: people are using it as an argument against Obamacare, when the reality is that any law that increases the number of insured people would lead to longer wait times.

Put another way, let's say that we instead implemented a conservative idea for health reform that also allows 30 million more people to afford health insurance. Wait times would be longer then, too. It has precisely nothing to do with how people get insured and precisely everything to do with the number of people who are insured in relation to the number of doctors we have. Indeed, imagine this scenario Aaron Carroll thought up:
"We could convert the entire country to Medicare tomorrow, increase the number of physicians, and have no wait times at all. We could convert the entire country to private insurance in a Switzerland-like system, make no changes to the number of physicians, and see wait times go through the roof."
Now, don't misinterpret what I mean here--I'm not saying this is a non-issue, not by any means. It is a problem, and a very serious one at that. But if you, like me, think that people having access to health insurance is not only morally right but also economically beneficial, then the solution to the doctor shortage is not to keep more people from getting proper medical care, but to increase the number of people able to deliver that care. You see, the U.S., in spite of spending way, way more than other countries on health care, has far fewer primary care doctors per capita:



It should become clear rather quickly that we're lagging far behind in this metric by world standards. I'll allow my past self to explain why:
"I don't like the American Medical Association. I don't like them because they've spent most of the past 20 or 30 years lobbying state medical licensing boards to restrict the number and size of medical schools due to a supposed over-supply of doctors (which was controversial in the first place). Moreover, they've also worked to significantly limit the number of foreign doctors who can come practice in the U.S. With a restricted supply of physicians, doctors have higher salaries, thus we pay more for medical care. Worse yet, most of the already limited supply of physicians coming out of medical schools, for a variety of reasons, want nothing to do with primary care, which is arguably the most important form of health care."
So now that we've identified at least part of the problem, what can we do to fix it? Besides throwing the uninsured under the bus, I mean. True, the Affordable Care Act does take modest steps to increase the salaries of primary care practitioners (PCPs) in an effort to boost the number of medical students becoming general practitioners, but that's hardly sufficient in the face of such a daunting problem. 

The solution to this problem--and indeed the whole reason I wrote this post--might be found in the pages of California's SB 493, which was signed into law last week. This law takes an important step toward rolling back the often pernicious effects of medical licensing cartels like the AMA: it grants the state's many pharmacists PCP status. That means more providers, more appointment slots, and shorter wait times.

Indeed, 18 other states have done something similar by allowing nurse practitioners to serve as PCPs. Even more encouraging, studies of those states found no significant decline in the quality of care resulting from the change. The great irony here, though, is that the authority to grant different medical professionals PCP status rests exclusively with state governments, which means they're the ones who will have to step up to the plate and fight the doctor shortage exacerbated by a federal law.

But the fact remains that something substantial can be done to fight it, and it seems disingenuous at best and downright cruel at worst to suggest that the uninsured must continue to suffer through their lives without proper medical care because of a largely solvable problem.

Tuesday, October 8, 2013

Guest Post: Platinum is a President's Best Friend


Editor's Note: Clayton is a long-time friend of the blog as well as a talented economics writer. His own blog, which can be found here, is excellent, and I'm thrilled to publish this fantastic post he wrote.

It's day nine of the government shutdown, and the world has not ended. Sure, many services and programs that the desperately poor rely upon are not functioning, and the FDA is no longer able to properly monitor our food supply, but that's small potatoes--at least to hear folks in the media tell it. What's a little salmonella between friends?

But even if I could forgive the shutdown, and the lack of leadership on both sides of the aisle that preceded it (it's almost entirely the Republicans' fault, though I can't imagine why Obama did not craft some deft political strategy ahead of time), I'd like to call attention to the much bigger issue at hand: the debt ceiling that we are scheduled to hit on October 17.

On the 17th, the Federal government will be legally prohibited from issuing any new bonds to make up the difference between its revenues and expenditures. Currently the Federal government spends about $60 billion per day and collects about $30 billion in revenues. You see where this is going: almost immediately, the government becomes inoperable.

Let's leave aside the stupidity of a law that says Congress must vote to set tax rates, vote to set spending commitments, and then hold a third vote to allow the implications of the previous two. You can decide to buy three apples and four bananas, but whether or not you end up with seven pieces of fruit afterwards is not really up to you; numbers add up. Full stop.

And let's ignore the fact that there won't be enough money coming in to cover Social Security checks, Medicare payments, soldiers' wages, and the budget for the TSA (okay, maybe it's not all bad). The truly terrifying prospect now looming on our collective horizon is that the Treasury may very well miss coupon payments that are due to holders of our national debt. And that, to understate things dramatically, would not be to our advantage.

The full faith and credit of the United States is the underpinning of not only our financial system but that of the world. If the Treasury department fails to service outstanding debt by remitting coupon payments as they come due, some $16 trillion in global financial assets will suddenly be relegated to junk-bond status, with a corresponding plunge in value and spike in yields. Financial institutions and investment funds the world over will take a massive capital hit, with many becoming insolvent. Think the crisis of 2008-2009 writ large.

Which brings me to the trillion dollar platinum coin. Obama should instruct Jack Lew, the Treasury Secretary, to mint the thing tomorrow. It would then be deposited with the Federal Reserve in Uncle Sam's account and presto, more than enough money appears to operate the Federal government. It really is as simple as that; Social Security checks go out, interest gets paid, the whole shebang.

I can hear the protests now: "You can't do that; it's illegal! It's immoral!" I beg to differ. Due to an arcane but nonetheless legal loophole, the Treasury can mint platinum coins of any denomination, and that includes $1 trillion. Some have argued that this infringes on the independence of the Federal Reserve to conduct monetary policy, and they're correct. But that is a small price to pay, and there is historical precedent that predates the creation of the Fed in 1913. From 1862 to 1971 the Treasury issued United States notes directly, which circulated alongside the Federal Reserve notes we all know and (with notable exceptions) love today.

The real question here isn't whether or not the platinum coin is an awkward policy instrument, or whether or not it will make for some unwanted political theater. It will. But the question facing President Obama and his advisers at the moment is this: Are you willing to save the financial integrity of the United States government, the credibility of the dollar, and our nation's preeminence in the global economy in return for some institutional hurt feelings and a scathing review on Fox and Friends?

All global hegemons experience a moment when it becomes apparent that they are no longer the powers they once were. For Rome it was the moment Alaric and his Visigoths stormed the Eternal City in 410; for Great Britain it was the moment in 1916 when its forces abandoned the shores of Gallipoli, unable to dislodge the Sultan's troops. The global hegemony of the United States deserves a better end than a few hand-wringing politicians standing around, afraid to try an unorthodox financial transaction.

Monday, October 7, 2013

Guest Post: Muslims? Democracy? Preposterous! Or is it?



In the dialogue surrounding Middle East foreign policy and the Arab Spring these days, I often hear discussion of the question, “Is the Middle East ready for democracy?” Many pundits tend to argue that, no, the Middle East is not ready to switch from its authoritarian roots to an open, democratic societal system. But why do they feel that way? What is the logic behind this reaction? It’s simple, really: according to the aforementioned pundits, Middle Eastern culture and values make it impossible for Arab nations to attain such lofty levels of political freedom and awareness. The idea that religious and cultural beliefs alone can preclude a nation from achieving democratic governance is rather insulting to Arabs the world over.

I wonder, how do the nay-sayers know for certain that democracy in the Middle East is an impossibility? The glaringly obvious answer would be that, gee, up until now, there haven't been any successful established democratic systems in Arab nations. At first glance, it seems as if this line of logic leaves no room for argument. I would like to point out, however, that until now, certain foreign powers have prematurely vilified and obstructed elections whenever they felt their strategic interests were threatened. With that in mind, perhaps the question that needs asking is not "Is the Middle East ready for democracy?", but rather, "Are outside powers ready to accept the forms of democracy that the Middle East will produce?"

In Iran in 1951--again, not an Arab nation, but still considered part of the greater Middle East--Mohammad Mossadeq was democratically elected Prime Minister. Mossadeq was not an illegitimate candidate. He promised progressive social and political reforms for Iran and also planned to nationalize Iranian oil so that his country could master its own natural resources. Until that point, a concession granted to William D'Arcy in the early 1900s dictated that the majority of Iranian oil and oil revenues belonged to Britain and the Anglo-Iranian Oil Company (AIOC). Nationalizing Iranian oil would mean that a greater portion of the revenues from the AIOC would be redirected into the Iranian economy rather than finding their way into British hands. Concerned at the prospect of losing their lucrative enterprise, Britain's MI6 collaborated with the CIA to overthrow Mossadeq in 1953 and replace him with the previously ousted Mohammad Reza Shah--a joke of a political leader and a Western puppet. Documents declassified in 2011 verified the CIA's participation in the overthrow of a democratically elected leader in order to replace him with one more malleable to Western foreign interests.

The coup of 1953 is a perfect example of Westerners' fear of what they cannot control--especially when it comes to the rise of Islam in politics. Many suffering from Islamophobia refuse to accept the legitimacy of candidates whose religious beliefs are at odds with their own. We would back a secular candidate over an Islamic one in a heartbeat, completely disregarding their qualifications to rule, and justify it by saying "Islam is incompatible with democracy"--just as we did in Iran. The second Mubarak was ousted, the Western world was gripped with fear that an Islamic leader would fill the power vacuum. Before we had heard anything about Mursi's qualifications as a leader, the words "Muslim Brotherhood" had us all up in arms. It might be prudent to mention here that today's Muslim Brotherhood, although still far too conservative for my taste, is not the violent radical Islamist movement it was from the 1930s through the 1970s, having renounced violence decades ago. Whatever Mursi's failings as a leader, the democratic process is founded on the principle that all parties wishing to participate must be allowed to do so, not just those with whom we agree.

Democracy is not something that can be taken out of a mold and forced upon a nation. Democratic systems must change and adapt to best fit the personality of the country in which they are operating. This means that yes, even Islamic parties can produce successful democratic candidates! Islam and democracy can exist simultaneously, if given the chance--just look at the developments in Turkey within the last decade. Foreign leaders need to accept that democracy, as a dynamic institution, develops differently in different parts of the world. A democracy that has developed outside of the narrow bounds of Western notions is not inherently any less legitimate than our own.

Saturday, October 5, 2013

Ted Yoho's Backwards Thinking

God help us all:
“I think we need to have that moment where we realize [we’re] going broke,” Yoho said. "If the debt ceiling isn’t raised, that will sure as heck be a moment. I think, personally, it would bring stability to the world markets," since they would be assured that the United States had moved decisively to curb its debt.
First of all, we're not going broke. Or at least, the world markets don't think we are. Yields are very low on both short- and long-term Treasury bonds, and the U.S. Treasury bill is one of the safest assets on the face of the earth:


One thing that might change that is to do what Yoho seems hell-bent on doing--driving the U.S. into a completely avoidable default. Indeed, while markets are still relatively calm, you can see a small hint of panic in certain bond yields:



A default by the U.S. would certainly provoke a lot of reactions in financial markets the world over, but stability is probably not one of them. Nor would it tell markets that America is moving to "decisively curb its debt," since refusing to raise the debt ceiling doesn't change how much money Congress has already committed to spending. Ironically enough, not raising the debt ceiling would likely make our debt problems far worse, thanks to the higher borrowing costs we'd face as investors relegate the U.S. government to "banana republic" status.

As for dealing with our debt "problem," Yoho conveniently ignores the rapidly shrinking U.S. deficit. And it has been rapid, as you can see below (2013 is circled):



I know I shouldn't expect too much in the way of sanity from Tea Party members, but it's hard not to get depressed that this guy is a member of Congress.

Friday, October 4, 2013

Living Under Obamacare, Day 4.

Obamacare and you, in one flowchart. (credit to Nick Beaudrot)

Strangely enough, I didn't take time on Tuesday to write a post commenting on the official launch of Obamacare's exchanges, which went ahead, somewhat ironically, in spite of the shutdown. So, has the sky fallen in? Are we all preparing ourselves for a lifetime of meek serfdom and slavery to our glorious socialist Kenyan leader? Well, not quite. Since the shutdown is what's grabbing headlines right now, the launch hasn't been as widely covered as it might otherwise have been, which in some sense is a good thing, because it gives the administration a bit more time to iron out the problems.

What coverage I have seen has made much of the many glitches and long wait times people have been facing to sign up for insurance online. Naturally, the usual suspects living in the Conservative Echo Chamber see the glitches as the first sign of the law falling flat at launch, with Sean Hannity going so far as to insinuate that the website's problems might hinder emergency ambulance care in one of the most bizarre segments of his show I've ever seen. To be sure, if the issues aren't fixed in the next few days or weeks, it could be problematic insofar as people may give up on trying to sign up. However, what people like Hannity seem to overlook is the fact that most of the glitches are coming from the huge number of people trying to get insurance. Simply put, the exchanges are popular. Ezra Klein summed it up well earlier this week: 
"So on the one hand, Washington was shut down because Republicans don't want Obamacare. On the other hand, Obamacare was nearly shut down because so many Americans wanted Obamacare."
Moreover, it would behoove those covering the launch to recall that when Medicare Part D launched in 2006, it was wildly unpopular--even more so than Obamacare--and debuted with its own array of bugs and logistical issues. But woe unto any politician today who hints at repealing or even trimming down the program. And that is what terrifies many in the GOP about Obamacare: they aren't worried the law will fail and be bad for Americans; they're worried it will succeed and, like Medicare, become a permanent fixture of American society.


Wednesday, October 2, 2013

Guest Post: American Interventionism: A Cautionary Tale


Although President Obama has since scaled back his intentions to execute a strike on Syria, his finger still hovers over the proverbial "button." Military action in Syria, and U.S. intervention in the Middle East in general, is a bad idea—I know, this all sounds terribly unpatriotic, but hear me out. What the United States government and military officials fail to understand is that there is no such thing as black and white when it comes to the Middle East—there is no quick fix. Did somebody say the War in Iraq? Oh, and the one in Afghanistan too? (Not technically an Arab country, but you get my point.) Despite numerous failed attempts, we keep sticking our beak, as it were, into the Arab nations' business.

Why has the world's premier military-intelligence complex been unable to enact viable change in the Middle East up until this point, you ask? Well, I'll give you one reason, at least: they don't know what they're getting themselves into! Enter sectarianism. Here in the United States, despite our superficial differences, we are a relatively homogeneous people. Our nation has had time in the post-industrial era to develop a sense of stability and balance that nations in the Middle East have not yet been able to achieve. Arab nations are rife with complexities—religious offshoots within religious offshoots, tribal affiliations, marginalized non-Arab ethnic groups, warring political factions, and so on. In theory, the head of a government should deal with conflicts on all of these levels simultaneously, or risk the unraveling of the nation. Sounds difficult, right? That is precisely why, in reality, many dictators, such as Bashar Al-Assad, choose not to do so. Let me clarify what I mean when I say "sectarianism" in regard to Syria, as it has a different meaning in every Arab nation. Here is a visual that should help:





As you can see, the map is full of complexities. For simplicity's sake, let's just say that religious sectarianism is the primary source of divisiveness in Syria, and that the two major players are the Shi'a and the Sunni. Now, to make things even more complicated, the President of Syria, Bashar Al-Assad, his officials, and the military—thanks to some careful planning and nepotism—are Alawites. With the start of the uprising in 2011, the already strained relationship between these groups was pushed to the breaking point. If U.S. intervention tips the scales either way, the entire nation will crumble. If we support the Sunnis and they prevail, the oppressed majority could take the opportunity to make the Assad family pay for four decades of Alawite rule. By contrast, if Assad emerges victorious in spite of our support for the Sunnis, he will most certainly redouble his efforts to decimate the Sunni population. There is no way to win this war, as America has no say in a religious conflict that has existed in its present form for decades and traces its roots back to 632 A.D.

Terrible as it is to say, the existence of chemical weapons does not change this fact. We intervened in Iraq and toppled a dictator for much the same reasons, creating a power vacuum that led to a rapid increase in sectarian violence, sent Iraq into a tailspin, and embroiled the United States in a decade-long war. CNN's Fareed Zakaria makes a similar argument here, also drawing on the civil war that began in Lebanon in 1975 as an example. I know I would not be keen on adding another war to the national agenda, and public polls hint that others aren't too keen on the idea either. Public opinion aside, imagine the devastating impact our intervention would have on ordinary Syrians. For the sake of the civilians, please, America, mind your own business.

Tuesday, October 1, 2013

The GOP's Precarious Position

Remarkably apt.
In case any of you hermits out there hadn't heard, the federal government shut down today because Congress wouldn't pass a continuing resolution to fund it. Specifically, the House GOP refused to even bring up a "clean" CR for a vote on the House floor, likely because John Boehner knew it would pass with support from both parties and he'd be left facing the wrath of the more extreme elements of his own party. What that means, for the non-wonks out there, is that they refused to vote on a bill that would simply fund the government at post-sequester levels--no Democratic pet projects, no increased spending, nothing--unless vital parts of Obamacare were delayed for a year. And so the federal government shut down. Greg Sargent put it best, I think (emphasis mine):
"What is there to negotiate, given Republicans are demanding unilateral concessions in exchange for doing what they themselves say is necessary for the good of the country, i.e., keeping the government open?"
This is a very precarious position the GOP has put itself in. Ignore for the moment the fact that they're flagrantly disregarding the results of the last election, in which they lost all three major popular vote counts by large margins. The fact remains that not only do most Americans vehemently oppose shutting down the government to delay or disrupt implementation of Obamacare, but they do so in spite of the fact that they have mixed-to-negative opinions about the law. Yes, I realize that I've repeatedly said that polling about complex laws is fairly meaningless, but the important takeaway here is that the GOP's current strategy is viewed by the public as exceedingly extreme.

But let's leave the murkiness of polling behind for the moment and think about the GOP strategy outside the context of public opinion. Their strategy is to shut down the government (and possibly cause the U.S. to default on our debt) out of a desire to disrupt Obamacare. But here's the kicker: Obamacare launched today in spite of the shutdown. It was largely unaffected. That seems like something of an oversight on their part.

So let's review: the GOP is shutting down the government (and far more importantly, threatening to breach the debt ceiling) to try to delay Obamacare, and yet Obamacare is unaffected by the shutdown and launches on schedule. Oh, and public opinion (and election results!) does not look kindly upon their strategy in the least, giving them little time to wait for the Democrats to cave.

Like I said, precarious.