Wednesday, November 27, 2013

The Rush to Judgement

A friend or acquaintance comes to you with a story of how badly he has been mistreated by someone—his employer, his girlfriend, a store, an airline. He expects you to agree with his complaint, take his side, despite the fact that you have not heard the other side of the argument and so, unless you happen to have some other source of information, have no way of knowing whether his side is correct. Your honest response would be to point that out—at which point he will get mad at you too.

Seen from a sufficiently cynical point of view, the pattern makes sense. Agreeing with him makes him your ally, allies are useful, and the target of his attack is far away and, with any luck, will never know you have sided against him, her or it. Agreeing is also stupid if it does not occur to you that you have heard only one side of the story or if you have not yet learned how dangerous it is to reach conclusions on that basis, and dishonest if you have.

I was reminded of this particular recurrent irritation by recent news stories about a waitress who claimed to have been stiffed by a couple she served, given a note criticizing her (lesbian) lifestyle in lieu of a tip. Her original account did not identify the couple, but it provided sufficient information for them to identify themselves—at which point they provided what looks like convincing evidence that she was lying, including the Visa charge for their dinner, tip included. The most recent story I have seen includes comments by friends and former colleagues of the waitress reporting a history of minor lies designed to provoke sympathy on the basis of invented stories.

What struck me was not the behavior of the waitress but the behavior of the large number of people who took her side, including reporters who took the waitress's initial story as gospel, reporting it as something that happened, not as something someone claimed happened, despite no evidence beyond a digital image of what purported to be the check with note and without tip. Judging at least by reports, thousands of people on Facebook condemned the supposed behavior of the couple—with no evidence beyond the news stories—and many sent donations to the purported victim. 

Their behavior was stupid and unjust. The behavior of the reporters was also professionally incompetent.

One question about the story that nobody else seems to have commented on occurred to me. All of the reports describe the waitress as an ex-Marine. She is also described as 22 years old, and the most recent story mentions "a day care center where she once worked." The minimum age of enlistment for the Marines is 17. The usual terms of enlistment are for three to five years of active service. Marine Corps training requires an additional three months. It is not impossible that someone could have enlisted at 17 on the shortest terms, left the Corps at 20 and by 22 have worked first at a day care center and then at a restaurant, but the timing is sufficiently tight to be at least mildly suspicious, especially when combined with evidence that the person in question is a habitual liar.
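That timing can be checked with a little arithmetic. A toy sketch, taking the figures cited above at face value rather than independently verifying them:

```python
# Toy check of the enlistment timeline described above. Assumed figures,
# all from the post: minimum enlistment age 17, roughly three months of
# training, three-year minimum term of active service.
MIN_ENLIST_AGE = 17
TRAINING_YEARS = 0.25        # roughly three months
MIN_ACTIVE_YEARS = 3

earliest_discharge_age = MIN_ENLIST_AGE + TRAINING_YEARS + MIN_ACTIVE_YEARS
current_age = 22

# Time left, after the shortest possible stint, for two civilian jobs
# (a day care center, then the restaurant).
civilian_years = current_age - earliest_discharge_age
print(f"earliest discharge at {earliest_discharge_age}, "
      f"leaving {civilian_years} years for two jobs")
```

Under a year apiece for the two jobs: possible, but tight.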

It would be nice to know if any of the reporters checked with the Marine Corps to make sure that "ex-Marine" was not another fabrication.


Sunday, November 24, 2013

Obama, Silicon Valley, and Learning by Testing

There were PhDs working as low paid data managers during Obama’s ’08 campaign and top product managers developing interactive during ’12 campaign. There are many talented developers/product managers/data modelers who would take a pay cut to work on something they believe in. Especially for those with enough life experience to know how important the Affordable Care Act is, even if it’s not an ideal solution.
The quote is from a comment on a very interesting essay about the failure of the Healthcare.gov website.  Part of the essay's point is the danger, in IT projects and elsewhere, of a particular approach to doing large projects:
The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.

Like all organizational models, waterfall is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work. 
It occurred to me that the comment, combined with that point, raised an issue that had probably not occurred to the commenter. The Silicon Valley people who worked to reelect Obama were acting on their view of Obama and his policies. The arguments of the essay imply that they ought to be willing to revise that view and alter their political activities accordingly as further evidence comes in.

If, as many sources seem to suggest, Obama did not realize that healthcare.gov was not going to work, and if the reason he did not realize it was that he had created a culture around him in which people did not feel free to pass on bad news to their boss, then he is not, and was not, competent to be President. If, as Obama himself implied in contrasting the failure of healthcare.gov to the success of the IT efforts of his reelection campaign, government is very bad at doing this sort of thing, that is at least some evidence that the ACA was a mistake, likely to make health care worse rather than better.

I wonder how willing his supporters in Silicon Valley will be to apply the "test and revise accordingly" approach to their own political views.




Saturday, November 23, 2013

The Second Amendment in the 21st Century

A recent Facebook post pointed me at an entertaining video in favor of gun control. The point of the video, surely correct, is that mass shootings were a lot less practical with 18th century firearms than with modern firearms. Its conclusion: "Guns have changed. Shouldn't our gun laws?"

There are two problems with the argument. The first is that gun laws have changed quite a lot over the past two hundred plus years. The second is that, while mass shootings get a lot of publicity, they represent only a tiny fraction of all killings.

There is, I think, a better argument to be made for the effect of technological change on the argument for the right to bear arms. As I interpret the Second Amendment, it was intended as a solution to a problem that worried eighteenth century political thinkers, the problem of the professional army. As had been demonstrated in the previous century, a professional army could beat an army of amateurs. As was also demonstrated, a professional army could seize power. Oliver Cromwell and the New Model Army won the first English Civil War for parliament and then won the second English Civil War for itself, with the result that Cromwell spent the rest of his life as the military dictator of England.

The Second Amendment, as I interpret it, was intended to solve that problem by combining a small professional army with an enormous amateur militia. In time of war, the size of the militia would make up for its limited competence. In time of peace, if the military tried to seize power or if the government supported by the military became too oppressive, the professionals would be outnumbered a thousand to one by the amateurs. It was an ingenious kludge.

It depended, however, on a world where the weapons possessed by ordinary people for their own purposes, mostly hunting, were as effective as the weapons possessed by the military. We are no longer in such a world. The gap between military weapons and civilian weapons is very much larger now than then. One result is that the disorganized militia, the population in general, no longer plays any role in military defense. Another is that, if there ever was a military coup in the U.S., ordinary civilians would be much less able to oppose it with force than they would have been two hundred years ago.

Civil conflict in a modern developed society is much more likely to be carried on with information than with guns—a government that wants to oppress its population does it by controlling what people say and know. It follows, in my view, that the modern equivalent of the Second Amendment, the legal rule needed to make it possible for the population to resist the government, has nothing to do with firearms. The 21st century version would be a rule forbidding government regulation of encryption. A government that has no way of knowing who is saying what to whom lacks the most powerful weapons for winning an information war.

There remains a strong argument for the right to bear arms, different from but related to its original function. People who are unable to protect themselves are dependent for protection on the police. The more dependent people are on the police, the more willing they are to tolerate, even support, increased police power. Hence disarming the population makes possible increased levels of government power and the misuse thereof, although for a somewhat different reason than in the 18th century.

Which is an argument against restrictions on the private ownership of firearms.

Wednesday, November 20, 2013

The Killer App for Google Glass

I can remember large amounts of poetry, but people's names, faces and the information associated with them are a different matter. For the most part, I successfully conceal my handicap by a policy of never using names if I can help it, but once in a while the tactic fails. I still remember, as perhaps my most embarrassing moment, recommending Larry White's work on free banking to someone who looked vaguely familiar—and turned out to be Larry White.

Help, however, is on the way. I first encountered the solution to my problem in Double Star, a very good novel by Robert Heinlein. It will be made possible, in a higher tech version, by Google Glass. The solution is the Farley File, named after FDR's campaign manager.

A politician such as Roosevelt meets lots of people over the course of his career. For each of them the meeting is an event to be remembered and retold. It is much less memorable to the politician, who cannot possibly remember the details of ten thousand meetings. He can, however, create the illusion of doing so by maintaining a card file with information on everyone he has ever met: The name of the man's wife, how many children he has, his dog, the joke he told, all the things the politician would have remembered if the meeting had been equally important to him. It is the job of one of the politician's assistants to make sure that, any time anyone comes to see him, he gets thirty seconds to look over the card.

My version will use more advanced technology, courtesy of Google Glass or one of its future competitors. When I subvocalize the key word "Farley," the software identifies the person I am looking at, shows me his name (that alone would be worth the price) and, next to it, whatever facts about him I have in my personal database. A second trigger, if invoked, runs a quick search of the web for additional information.
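The data side of such an app is simple. A minimal sketch, assuming the face-recognition step away (an opaque ID returned by some hypothetical recognizer stands in for it); everything else is just a lookup in a personal database:

```python
# Minimal sketch of a "Farley file": personal notes keyed by an
# identifier that a (hypothetical) face recognizer would return.
from dataclasses import dataclass, field

@dataclass
class FarleyEntry:
    name: str
    notes: list[str] = field(default_factory=list)

farley_file: dict[str, FarleyEntry] = {
    "face-0042": FarleyEntry(               # invented ID for illustration
        name="Larry White",
        notes=["Economist; works on free banking",
               "Met at a conference; discussed monetary theory"],
    ),
}

def on_trigger(face_id: str) -> str:
    """Called when the user subvocalizes the key word."""
    entry = farley_file.get(face_id)
    if entry is None:
        return "No record -- this may be a first meeting."
    return entry.name + ": " + "; ".join(entry.notes)

print(on_trigger("face-0042"))
```

The hard part, of course, is the recognizer, not the file.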

I am told that Google itself has a rule against building face recognition into Glassware, so my Farley file software may not appear in the immediate future. But it is the killer app, and someone will build it.


Monday, November 18, 2013

What Should Replace Obamacare

A recent post on the Forbes site offers a convincing explanation of what was wrong with the current system of health insurance before Obama, hence what both it and Obamacare ought to be replaced by. Its central point is that what we call medical insurance is in part actual insurance, protection against low probability/high cost risks, in part prepayment of ordinary medical expenditures. The reason insurance policies take that form, also the reason that most of them are provided by the employer and so not portable, is that employer provided health insurance is bought with pre-tax dollars, ordinary medical care with after tax dollars. 
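The size of the resulting tax wedge is easy to illustrate. A toy calculation; the 30% marginal rate is an invented figure for illustration:

```python
# How the tax treatment tilts the choice between employer-provided
# insurance (pre-tax) and out-of-pocket care (after-tax).
marginal_rate = 0.30   # illustrative marginal tax rate, an assumption

def pretax_earnings_needed(cost: float, taxed: bool) -> float:
    """Pre-tax wages an employee must earn to cover `cost` dollars."""
    return cost / (1 - marginal_rate) if taxed else cost

insurance_dollar = pretax_earnings_needed(1.00, taxed=False)
out_of_pocket_dollar = pretax_earnings_needed(1.00, taxed=True)
print(f"$1 via employer insurance costs ${insurance_dollar:.2f} of wages")
print(f"$1 paid out of pocket costs ${out_of_pocket_dollar:.2f} of wages")
```

At a 30% rate, every dollar of care routed through employer insurance instead of the patient's pocket saves about 43 cents of pre-tax earnings, which is why so much routine care gets bundled into "insurance."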

One result is that individual consumers have little incentive to be careful shoppers for health care services, since for the most part they are not the ones paying for them. A second is that insurance companies, in order to provide a substitute for careful shopping by customers, require a lot of paperwork from providers, driving up their costs. Costs are also driven up by state regulations that require insurance companies to cover things that the customers might prefer not to pay to have covered—the same problem that Obamacare produces on a national scale. In my state, California, for example, health insurance must cover acupuncture, and in Connecticut it must cover hair prosthesis.

One implication is that tax law should be changed to put employer provided insurance, privately purchased insurance and payments for uninsured medical expenditures on the same footing. To get the economics right, all should be treated as ordinary consumption expenditures. From the standpoint of the relevant politics, however, what the Republicans ought to propose is to make all three tax deductible, at least up to the level of what most people now pay. It's a lot easier to sell a tax cut than a tax increase.

A second implication is that insurance companies should be allowed to sell policies interstate. That would eventually eliminate inefficient regulatory requirements, since state insurance regulators would have to compete with each other to provide regulations that generated the policies consumers wanted to buy. In this case as in many others, competition is a good thing.

A well written and informative article by someone I am pretty sure I interacted with online many years ago. It's a small world.

Sunday, November 17, 2013

Multitasking or Parallel vs Serial Thinking

It is useful to know what one is good at, but also what one is bad at. 

The example I am thinking of is multitasking, doing and thinking about several things at once. The first clear evidence of my inability to do it well appeared decades ago in the context of my medieval hobby, which included combat with medieval weapons done as a sport. I was much worse at melee combat—one group of fighters against another—than at single combat. In single combat I only had to focus on the opponent I was fighting. In melee, I had to be, or at least should have been, simultaneously keeping track of everyone else near me. And I wasn't.

The same problem showed up much later in the context of World of Warcraft. Group combat there, a raid with a group of five to forty people, requires the player to keep track of what he is doing, what other people in the group are saying—in the form of typed messages on the screen—and other things going on around him. I focused on what I was doing and frequently missed important things other people were saying. Interestingly enough, that was less of a problem if the group was using software that permitted voice communication, so that one kind of information was coming in mostly through my ears, another through my eyes.

It is not just that paying attention to multiple things is hard. My daughter, playing the same game, can not only pay attention to everything in the game, she can also conduct one or two independent conversations, in typed text, while doing so. Pretty clearly, it is a real difference in abilities, whether innate or learned I do not know.

Thinking about it, it occurred to me that I had observed the same pattern in an entirely different context, the difference between how I think and how Richard Epstein, a friend and past colleague, thinks. I usually describe the difference as my thinking in series, Richard in parallel. It shows up when he is sketching the argument for some conclusion. 

A implies B. B implies C. C ...

At which point I demonstrate that B doesn't really imply C, that there is a hole in the argument. That is no problem for Richard, who promptly points out that A also implies B', a somewhat different proposition than B, which implies C', from which he can eventually work his way back to D, or perhaps E or F, and so to the conclusion that the original line of argument was intended to establish. Pretty clearly, he is running a network of multiple lines of argument in his head and only has to find some set of links in the network that gets him where he is going. I am focusing on running a single line of argument. Hence parallel vs series.
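The parallel style can be pictured as path search in a directed graph of implications: refute one link and the search simply routes around it. A toy sketch, with the propositions and links invented for illustration:

```python
# Serial reasoning follows one chain of implications; parallel reasoning
# searches a network of them for any path to the conclusion.
from collections import deque

# Hypothetical implication network: A implies B and B', and so on.
implications = {
    "A": ["B", "B'"],
    "B": ["C"],          # the link the critic knocks out
    "B'": ["C'"],
    "C": ["conclusion"],
    "C'": ["D"],
    "D": ["conclusion"],
}

def find_path(start, goal, blocked=frozenset()):
    """Breadth-first search, skipping any edge listed in `blocked`."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in implications.get(path[-1], []):
            if (path[-1], nxt) in blocked or nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# The serial route A -> B -> C -> conclusion dies when B -> C is refuted,
# but the network still yields A -> B' -> C' -> D -> conclusion.
print(find_path("A", "conclusion", blocked={("B", "C")}))
```

The serial thinker is committed to one path; the parallel thinker only needs the network to stay connected.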

Saturday, November 16, 2013

Nonconforming ≠ Substandard

President Obama may have secured a measure of political relief for himself by allowing substandard insurance policies to be renewed for another year.

… 
 
“The president told me that if I like my health insurance, I could keep it. And that shouldn’t have an expiration date,” said Crusco, who has been covered under a nonconforming plan that did not cover maternity care. That fit her needs because, she says, she doesn’t plan to have more children.

One of the things that irritates me about news coverage of the Obamacare mess is the willingness of many in the media to describe plans that do not fit the requirements of the ACA as "substandard."

The two quotes above, both from the same news story, nicely illustrate the rhetorical trick. A plan that does not cover maternity care is nonconforming, since it does not conform to the ACA requirement that all insurance plans provide maternity benefits. It is substandard for someone who does not plan to have children (a man, say, or an elderly woman unable to have children) only if one assumes that the standard of what all plans ought to cover for everyone is whatever Congress wrote into the act, which, as the example shows, is crazy. It should not take more than about thirty seconds of thought for a fair-minded journalist to realize that at least some plans that do not fit the ACA's requirements are what their purchasers do and should want.

Which suggests that quite a lot of journalists are either incapable of thought or engaged in deliberately biased reporting.



Friday, November 15, 2013

More on Selective Enforcement as Legislation

My previous post raised the question in the context of Obama's apparent intent to unilaterally modify his healthcare legislation. But it is an interesting problem more generally. The theory of our system is that the legislature makes laws and the executive enforces them. But laws cannot, in practice, be perfectly enforced, so the executive is necessarily making the decision about what resources to allocate to enforcing what laws. Where can or should one draw the line between that decision and using selective enforcement to rewrite the law?

This is at least the third time that Obama has offered to do it. The first was when, during his first campaign, he said that under his administration federal marijuana law would not be enforced against people using medical marijuana in conformance with state law—a promise that he promptly broke. The second was when he announced that certain categories of illegal immigrants would not be prosecuted. Revising Obamacare is the third.

Imagine the following scenario at the state level. The governor of California proposes a bill to tax cars that burn gasoline but not electric cars. The bill fails to pass. He responds by announcing that he has instructed the state police that they should enforce speed limits strictly against gasoline powered cars but only stop electric cars if they are going at least twenty miles an hour over the speed limit. 

I am not a constitutional scholar and do not know whether there are legal limits to executive power that would prevent such a tactic. It is legal to selectively tax gasoline powered cars. It is legal for the police to devote their limited resources to catching some speeders but not others, for instance by patrolling highways where they believe speeding is a particularly serious problem. Is it legal to accomplish the substance of the former under the form of something like the latter? 

More generally, what are the limits of such an approach? The executive is not entitled to enforce a law the legislature has not passed. But is it entitled to selectively enforce one that the legislature has passed in order to achieve the effect of one that it has not passed?

Comments from those who know more about constitutional law, state and federal, than I do are welcome.

Thursday, November 14, 2013

Selective Enforcement as Legislation

The poet Ibn Harma performed before the caliph, and so delighted was the Prince of the Muslims that he asked the poet to name his reward.

"The reward that I want from the Prince of the Muslims is that he send instructions to his officials in the city of Medina commanding that when I am found dead drunk upon the pavement and brought in by the city guard, I be released from the penalty prescribed for that offense."

"That is God's law, not mine," the Caliph replied. "I cannot change it. Name another reward."

"There is nothing else I desire from the Prince of the Muslims."

The Caliph thought a moment, then sent instructions to his officials in Medina commanding that if Ibn Harma was found drunk and brought in for punishment, he should receive sixty strokes of the lash as the law commanded. But whoever brought him in should receive eighty.

It is one of my favorite medieval Islamic law and economics stories. In theory, Islamic law is not made by the sovereign but deduced by legal scholars from the Koran and the Hadith, traditions of what Mohammed and his companions did and said. The Caliph accordingly could not change the law against drunkenness. He could not even change the punishment, since it is a Hadd offense, one with a fixed punishment deduced from the religious sources. He could, however, repeal it de facto although not de jure by changing the incentive to enforce it.

I was reminded of this by today's news. President Obama is attempting to forestall congressional efforts to alter the Obamacare legislation by doing it himself without appeal to congressional authorization. Presumably the theory is that, since he is in charge of the executive branch and the executive branch is in charge of enforcing the law, he can simply announce that the part of the law forbidding insurance companies from continuing to offer plans that do not meet the requirements of the new law will not be enforced, at least as far as existing customers of such plans are concerned.

This raises two questions. One is a question of constitutional law: whether what he is doing is in law, as it obviously is in fact, a violation of the division of powers between the legislative and executive branches. The other is a political question. Arguably, the political effects of the present mess will have at least partly died down over the next year. Is Obama making things worse for his party rather than better by pushing the next failure to just before the next election—the failure that results when good risks keep their existing plans, leaving the plans sold through the market to the bad risks and so making them very expensive?
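The mechanism behind that last question, good risks dropping out and driving up the price for those who remain, can be sketched as a toy adverse-selection spiral. All numbers are invented for illustration:

```python
# Toy adverse-selection spiral: the premium is set to the average
# expected cost of current enrollees; anyone whose expected cost is
# well below the premium keeps an old plan instead; repeat.
expected_costs = [1000, 2000, 3000, 4000, 5000, 6000]  # hypothetical enrollees
LOYALTY = 0.8   # an enrollee stays only if expected cost > LOYALTY * premium

enrollees = list(expected_costs)
for round_no in range(1, 6):
    premium = sum(enrollees) / len(enrollees)
    stayers = [c for c in enrollees if c > LOYALTY * premium]
    print(f"round {round_no}: premium {premium:.0f}, {len(stayers)} stay")
    if stayers == enrollees:
        break
    enrollees = stayers
```

Each round of departures raises the premium, which drives out the next tier of good risks, until only the worst risks, at a much higher price, remain.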

Wednesday, November 13, 2013

How to Run a Restaurant

My previous three posts were serious proposals for changes that I thought worth making. This one is more nearly a puzzle. In the other cases I can see plausible reasons why the changes might not have occurred even if I am right in thinking them desirable. In this case, I take the nonexistence of what I propose as pretty strong evidence that I am missing something, that it is for some reason a considerably less good idea than I think.

When I sit down in a restaurant, I am consuming two different things—the food produced and the use of a seat, a table, heating or air conditioning, the part of the restaurant I occupy and the services it provides. I can choose to eat lots of expensive food fast, in which case I consume lots of the first and little of the second. Alternatively, I could order something inexpensive, perhaps a bowl of soup, and linger over it for an hour.

Since the restaurant charges only for the food, I have no direct pecuniary incentive to economize on my consumption of space. Since the price of the food has to cover the cost of both food and space, I have too strong an incentive to economize on my consumption of food. If dessert costs the restaurant a dollar to produce, is priced at three dollars, and is worth two dollars to me, I don't buy it—a net loss to me plus the restaurant of a dollar in potential surplus.

The obvious solution to these inefficiencies is to price food and space separately. When I sit down, a clock at the table starts running. When I leave, my bill includes a certain amount per minute for the time, plus the cost of what I ordered. If I want to spend two hours chatting with a friend over tea and scones, I can do it without worrying about angry looks from the waiter—and pay for it. My total bill should average out about the same, since the combined bill still has to cover the same costs. But now the separate cost of sitting and of eating is being billed separately, giving me the right incentive with regard to each.
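The dessert arithmetic, and the effect of billing food and time separately, can be put in a few lines. A toy sketch, with all costs, prices, and values invented:

```python
# Under food-only pricing, menu prices must also cover space costs, so
# food is marked up past its cost and mutually beneficial sales are lost.
dessert_cost = 1.00       # cost to the restaurant
dessert_price = 3.00      # price padded to help cover space
dessert_value = 2.00      # worth to this diner

# The diner buys only if value >= price; surplus is lost whenever
# value exceeds cost but the padded price blocks the sale.
buys = dessert_value >= dessert_price
lost_surplus = 0.0 if buys else dessert_value - dessert_cost
print(f"food-only pricing: buys={buys}, lost surplus = ${lost_surplus:.2f}")

# Two-part pricing: food near cost, plus a per-minute table charge.
dessert_price_2 = 1.20    # modest markup over food cost
table_rate = 0.10         # dollars per minute at the table, invented
minutes = 45
buys_2 = dessert_value >= dessert_price_2
bill = (dessert_price_2 if buys_2 else 0.0) + table_rate * minutes
print(f"two-part pricing: buys={buys_2}, bill = ${bill:.2f}")
```

With the two parts priced separately, the dessert sale happens and the diner pays directly for the time at the table, so both margins face roughly the right price.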

The puzzle is why no restaurant, so far as I know, is run that way. Some have crude approximations, such as a cover charge. But why not simply price food and space separately, just as rental cars sometimes price use of the car, mileage, and gas separately?

How to Admit Students to College

Colleges base their admission decisions on a variety of different criteria. One of them is how well the student can write. At present they have two ways of measuring that, neither of which is worth much.

One way is by the SAT writing exam. The problem is that consistent grading across a large number of students requires something close to machine grading, human graders checking the essay against a simple and objective set of criteria. That might tell you how well the student has trained for the test but it is not very good evidence of how well the student can write.

The other way is by having a prospective student send in an essay for the admission people at the college to evaluate. However good a job they do of evaluating the essay, they have no way of knowing who wrote it. The applicant may have written it entirely himself, he may have written it himself and had it gone over by someone more expert in writing, he may have hired someone to write it for him. I have no inside knowledge, but given how important college admissions have become I would be astonished if no such market exists.

There is a simple solution. Many applicants visit a college before applying. As part of the process, put the applicant in a room with a computer and a list of topics and give him an hour to write an essay. If the applicant is not going to visit the college, perhaps there is an alumnus living near him who would be willing to provide the computer and monitor the writing. If multiple colleges want applicants to write essays under controlled conditions, it should be in the interest of someone, perhaps the organization that now administers SAT exams, to arrange suitable facilities in cities scattered across the country.

It seems like an obvious idea and I do not know why, so far as I can tell, it has not yet happened.

How to Buy a House

If you go to a real estate agent in search of a house, there are two questions you are likely to be asked. One is how much you want to spend. The other is what sort of house you are looking for—how large, how many rooms of what sort, in what location. 

It is in the agent's interest to ask the first question; since his commission is a percentage of the sales price, he would like to sell you the most expensive house you are willing to buy. It is not clear that it is in your interest to answer it. Even if you can afford a two hundred thousand dollar house, you might prefer one that fits your requirements a little less well and costs substantially less. If you tell the agent that you are willing to spend two hundred thousand dollars, he may decide not to show you any house that will sell for much less than that.

The second question raises another problem—what you want to buy depends on what it costs. You would prefer a house with a bedroom for you and your wife, a bedroom for each child, and an extra room for a home office—but if an extra bedroom increases the price of the house by too much, you could put your desk at one end of your bedroom or persuade two children to share a room. In order to give a sensible answer to the question, you need a price list, a description of how much more you can expect to pay for a larger house, one with more bedrooms, one in a better location. If you had such a list, you could figure out about what sort of house you wanted to buy and what it should cost and ask the agent to select houses to show you accordingly.

So far as I know, no such price lists are currently available—but there is no good reason why they shouldn't be. Realtors have access to extensive information on houses that have sold. In any area with a sufficiently lively real estate market, it should be possible to use conventional statistics to work out from that data about how much more a house costs with one more bedroom, all else held constant, or with an additional hundred square feet of area, or with a larger yard, or in a better school district. My proposal is that somebody, perhaps the existing multiple listing service that provides the data to the realtors, should do so. The realtor could then provide his customer with a price list, the customer could decide about what sort of house he wanted, and the realtor could proceed to find the houses that came closest to fitting what his customer wanted.
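What the proposal amounts to is a hedonic regression on sales data. A minimal sketch using ordinary least squares on made-up records; the data are constructed so that price is an exact linear function of the attributes, which a real market would only approximate:

```python
# Hedonic pricing sketch: regress sale price on house attributes to
# recover the implied price of one more bedroom, one more hundred
# square feet, and a better school district. Data are invented.
import numpy as np

# columns: bedrooms, square feet (hundreds), good school district (0/1)
X = np.array([
    [2, 10, 0],
    [3, 12, 0],
    [3, 14, 1],
    [4, 16, 1],
    [4, 18, 0],
    [5, 20, 1],
], dtype=float)
prices = np.array([190, 230, 265, 305, 310, 365], dtype=float)  # $1000s

# add an intercept column and solve by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)
intercept, per_bedroom, per_100sqft, school_premium = coef
print(f"extra bedroom ~ ${per_bedroom:.0f}k, "
      f"extra 100 sqft ~ ${per_100sqft:.0f}k, "
      f"school district ~ ${school_premium:.0f}k")
```

On real MLS data the fit would be noisy and the attribute list longer, but the output is exactly the price list the text describes: the marginal cost of each feature, all else held constant.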

I should add that it is almost twenty years since I was last in the market for a house. If what I have just proposed is, at this point, common practice in the industry, perhaps one of my readers with more up to date information can tell me.

How Bar Passage Rates Should Be Reported

I have decided, for no particular reason, to post some modest suggestions for improving the world. This is the first.

Law schools compete for students, even more fiercely than usual in the current environment of declining enrollments. Most prospective students plan to be lawyers, being a lawyer requires, in most states, that you pass the bar exam, so an important metric of a law school's quality, arguably the most important, is the fraction of its graduates who pass the bar.

Whether a student passes the bar depends in part on the education he got from his law school, in part on how smart and hard working he is—characteristics for which LSAT score and undergraduate grade point average provide at least an approximate measure. Thus a law school can increase its bar passage rate, quality of education held constant, by admitting students with higher LSATs and GPAs. Since increasing that rate will result in more students applying to the school, there is an obvious incentive to do so.

One way of getting better qualified students to go to your school is by offering them scholarships. The result is a system where the ablest students, the ones most likely to end up as high paid partners in elite law firms, get an education subsidized by the tuition payments of the least able students, the ones most likely to end up with a large debt and no job. It is a bizarre result by almost any standard, and particularly anomalous given that the culture of most law schools is predominantly left of center and egalitarian.

It is also the result of a logical mistake, a fallacy of composition. Getting a student who is almost certain to pass the bar to come to my law school raises the school's bar passage rate, but not by raising the odds that other students will pass the bar. The information relevant to a prospective student is not what the average bar passage rate at a school is but what the bar passage rate is for students like him.

I see no reason why that information cannot be generated and made available. Let each law school run a regression fitting bar passage rate for, say, the past three graduating classes to LSAT, showing how likely a student with any given LSAT is to pass the bar. Do the same thing with GPA. Publish the results. It might turn out that the student who is readily accepted at SCU but can barely get into Stanford will have a better chance of passing the bar if he goes to the former school, even though the latter has a higher average bar passage rate. The question is not which school is better but which school is better for which student.
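A minimal sketch of the kind of per-school regression being proposed, using invented data; the LSAT range, the underlying passage model, and the sample size are all assumptions for illustration, not figures from any real school:

```python
# Hypothetical sketch of the proposed regression: fit bar passage
# (pass/fail) against LSAT for recent graduating classes, then report
# a predicted passage probability for any given LSAT score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented data for three graduating classes: LSAT scores and outcomes.
lsat = rng.integers(150, 175, size=300)
# Assume higher LSAT means higher passage odds (purely illustrative).
p_pass = 1 / (1 + np.exp(-(lsat - 158) / 4))
passed = rng.random(300) < p_pass

model = LogisticRegression()
model.fit(lsat.reshape(-1, 1), passed)

# The number a prospective student would actually want: predicted
# passage probability for a student like him, not the school average.
for score in (152, 160, 168):
    prob = model.predict_proba([[score]])[0, 1]
    print(f"LSAT {score}: predicted bar passage probability {prob:.2f}")
```

Published per-LSAT (and per-GPA) curves of this sort would let the SCU-versus-Stanford comparison in the text be made directly by each applicant.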

If that information is made available and students choose to base their decisions on it, the perverse incentive currently faced by law schools largely disappears. They can still increase applications by doing a better job of educating their students to pass the bar. But stacking the deck by competing to get better qualified students to come will no longer work.

There is a second advantage to this change from the standpoint of those law professors, probably a sizable majority, who are in favor of affirmative action—meaning, in practice, lower admission standards for black students. There are two arguments against the practice that ought to concern them. One, from the standpoint of the school's selfish interest, is that lower standards will mean a lower bar passage rate, making it harder to get students to apply. Another, from the standpoint of the people the policy is supposed to help, is that admitting underqualified students may mean setting them up for failure. The same student who would get a good education at SCU, where he is about as well qualified as his fellow students, might have a much harder time at Stanford, where he is near the bottom of the distribution, and similarly all along the line. It is at least arguable, may well be true, that the real effect of affirmative action by law schools is to reduce, not increase, the number of black lawyers. 

My proposal solves both problems. Law schools that choose to admit less qualified students no longer lose reputation and applications as a result, provided they can do a good job of educating the students they admit. And students, black and white, will be in a position to decide for themselves which school will give them the best chance of passing the bar.

Did Rand Get Obama Right?

Not Rand Paul, Ayn Rand.

The current Obamacare mess reminds me of part of Rand's picture of the bad guys in her novels—that they thought that if only they gave sufficiently forceful orders, what they commanded would have to happen, that objective reality was subject to human will. The Obamacare exchanges had to work, Obama told his people to make them work, so it would happen. They didn't work. Obama ordered his people to fix them by the end of November, so that will happen.

My guess is that it won't. 

More important, whether the exchange works or not, I don't think it is possible for the program to work in the way Obama and his supporters predicted, to make better insurance available at lower cost to more people. If I am correct, there are two possible explanations for how the program got passed. 

One is that Obama and his supporters were engaged in a deliberate fraud along lines such as those I described (and rejected) in my previous post. The other is that Rand had it right, that they believed that if only they had sufficient will reality would bend and things would turn out the way they wanted, that all arguments to the contrary were produced by people with either bad motives or insufficient determination. 

Which is, I gather, what Thomas Sowell, in a book I haven't read but have read about, referred to as the unconstrained vision.

Monday, November 11, 2013

Consequences of the Failure of Obamacare

In reading critics of Obamacare, I occasionally come across an interesting, if mildly paranoid, theory—that it was designed to fail in order to bring in the single payer system that its supporters really wanted. I would not be surprised if there were supporters who saw it that way, but I doubt that they represented a significant fraction of those who initially supported the program. It is, however, worth thinking about whether that strategy, deliberate or not, will work. If Obamacare turns out, as now seems likely, to be a clear and massive failure, what will come out of its collapse?

The theory I described makes most sense from the standpoint of people on the left who were strong supporters of a single payer system. Many of them saw Obamacare as a compromise with the Devil, a kludge that retained an unnecessary and inefficient system of private insurance. Some even said so. From their standpoint, the obvious implication of its failure would be that it did not go far enough.

Whether or not they are correct in their view of what should happen, I do not think their view works in terms of what will happen. Obamacare, having been fiercely opposed by Republicans and especially conservatives, is widely perceived as a form of socialized medicine. After observing that the glittery promises offered to pass it were wildly false, I do not think voters are likely to conclude that it went in the right direction, just not far enough.

What I would like to see come out of its collapse is a shift in the other direction, away from the extensive government involvement in medicine and medical insurance that already existed before Obama. While people often talk as though the pre-Obamacare system was private, about half of all medical expenditure was by governments and the rest in various ways regulated. I have seen it claimed, whether correctly I do not know, that the anomalously high cost of American medicine, the one part of the standard criticism that is clearly true, only dates from the introduction of Medicare. Perhaps one of my readers can offer data to support or refute that claim.

Two obvious reforms in the direction of something closer to a free market would be to permit interstate selling of insurance and to eliminate the regulations, I think largely at the state level, that, like Obamacare, dictated what coverage insurance companies had to offer their customers. It makes sense that suppliers of mental health services would lobby governments to force insurance companies to cover what they sold—whether or not most purchasers of insurance thought such coverage was worth its cost. Similarly for other services.

Readers interested in discussions of these issues by someone who knows much more about them than I do may want to look at the website of the NCPA, John Goodman's organization.

Sunday, November 10, 2013

Defining Austrian Economics

Yesterday I spoke at a Students for Liberty Conference. Before the talk I had a conversation with several students who identified themselves as supporters of the Austrian school of economics. I asked them if they could explain what that meant by identifying a proposition in economics that almost all Austrian economists and almost no non-Austrian economists would agree with. 

One response was along the lines of "Austrians believe that one can derive economic conclusions from convincing axioms without adding any empirical facts." So I asked them to give me an example of such a conclusion, of a statement that one could test, observe the truth or falsity of in reality, that could be derived in that way.

I now put the same two questions to any readers of this blog who consider themselves believers in Austrian economics. Can you state such a proposition? For the particular proposition that was proposed, can you give an example, a prediction about the real world that can be made with certainty from economic theory alone with no input of real world information? 

My talk was on National Defense in a Stateless Society; I have now webbed a recording of it.

Friday, November 08, 2013

Iran's Nuclear Program and the Political Time Horizon

Supporters of an expansive role for government often argue that we need government to make us take account of the long run consequences of our actions. As best I can tell, this is precisely backwards. One way of seeing why is to think about the incentive, in a private market, to make investments that pay off in the far future, possibly after the investor is no longer alive—for someone sixty years old to plant hardwoods that will take forty or fifty years to mature. The reason doing so is in his direct self interest is that he can sell the land with the trees on it to someone else in ten or twenty years, when he is still around to spend the money; the price he can sell it for will reflect the fact that the trees are that much closer to mature. Working through the logic of the situation, it is straightforward to see that the investment is worth making as long as the expected return is at least as high as any alternative investment.

For this to work, the investor has to be reasonably sure that when he wants to sell the trees they will still be his. If he believes that, each year, there is a ten percent chance that someone will steal his trees or that the government will decide they are an essential national resource and confiscate them, the investment will only pay if the return is at least ten percent per year more than the return on a safe investment. To put it differently, long run planning in the private market depends on secure property rights.
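The break-even arithmetic can be checked directly. The 5% safe rate below is an illustrative assumption; the exact required premium comes out slightly above ten percentage points, consistent with the "at least ten percent" figure in the text:

```python
# Back-of-the-envelope check: with a 10% annual chance of losing the
# trees, how much must they return to match a safe investment?
# All numbers are illustrative assumptions, not data.

safe_rate = 0.05   # return on a safe alternative investment
survival = 0.90    # chance per year the trees are NOT stolen or confiscated

# Expected return on the risky trees is survival * (1 + r) - 1.
# Break-even requires survival * (1 + r) = 1 + safe_rate, so:
r_required = (1 + safe_rate) / survival - 1
premium = r_required - safe_rate
print(f"required return: {r_required:.3f}, premium over safe: {premium:.3f}")
# -> required return: 0.167, premium over safe: 0.117
```

The premium grows with the confiscation risk and compounds over the decades a hardwood stand takes to mature, which is why insecure property rights are so destructive of long-horizon investment.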

Politicians have insecure property rights in their political assets. If Obama does something that is politically costly now but that produces highly desirable results twenty years from now, it will be the president in office then who will get the credit. One way of interpreting Obama's repeated claim that anyone who liked his insurance could keep it, a claim he surely knew was false and knew the voters would eventually discover was false, is that he steeply discounted a political cost that would only come due after his final election. Long term effects matter to politicians only to the extent that voters can predict them and care about them—and a voter, knowing that his vote is very unlikely to change the outcome of an election, has no incentive to be well informed about such things. A politician in office can claim that whatever the voters do not like about the present is  the price for getting something they will like in the future, but there is little incentive for the politician to care whether it is true. Josef Stalin could not plausibly tell the people he ruled that they were living well, but he could and did tell them that their current hardship was the price of catching up with and surpassing the capitalist West. It was only long after his death that that particular lie finally caught up with his successors.

Which brings me back to the subject of this post. Doing anything substantial about the Iranian nuclear program—for instance making war on Iran—would be politically very costly for this administration or previous administrations. It makes sense instead to do whatever relatively easy things they can to slow the development of an Iranian atomic bomb in the hope of postponing its appearance until someone else is in the White House. And when someone else is in the White House, the same logic applies.

I suspect the Iranians are smart enough to have figured that out.

I should probably add two further points. The first is that I think the Iranians are trying to develop nuclear weapons because it seems to me an obviously sensible thing, from their point of view, for them to do. Further, I can see no other reason why they would have put large resources into developing a nuclear industry. The second is that I am not arguing that the U.S. should have attacked Iran. My own view is that U.S. foreign policy throughout my lifetime has been too aggressive, rather than not aggressive enough. My point is merely that even if we should have done it, we probably wouldn't have, and that current negotiations are much more likely to produce an illusion of progress that Obama can use to bolster his ratings than real progress.

I hope, however, that I am wrong.