Thursday, May 16, 2013

What's up with Benghazi?

This has nothing to do with economics, but I was curious enough to write a post on it. And of course, there's a bit of moral hazard here, since the more I say Benghazi, the more Google hits I will get. I will attempt to lay this out as apolitically as possible.

Let's start with a fact. Deadly attacks on US consulates and embassies are relatively common. Since 1971, we've averaged at least 9.4 deaths per year in attacks on US embassies and consulates. Now, I certainly think that each and every attack warrants a thorough investigation, but that doesn't necessarily mean a congressional investigation--the main responsibility for investigating these attacks falls on the intelligence agencies and the State Department, which seek to find out who carried out the attack and why, and then report to the White House, which makes decisions about retaliation and additional security measures. Of course, it is Congress's prerogative to get involved if they feel that these agencies are mismanaging the investigations.

That's where I get lost in the whole public controversy about Benghazi--the "scandal" doesn't seem to be about whether the State Department and intelligence agencies bungled the investigation, or whether the White House made the wrong call on security measures, but rather about an alleged cover-up wherein the White House deliberately lied to the public by calling the attack a spontaneous protest when it supposedly already knew it was a planned attack by organized terrorist groups.

First off, what does it say about American politics when, in the absence of any evidence one way or another, we automatically assume that the administration was malevolently misleading the public, rather than simply misinformed about the nature of the attack? As it turns out, the administration was actually just misinformed, as the congressional investigation has turned up memos from the CIA informing the White House that the attack was a spontaneous protest mob rather than an organized terrorist attack. From the CIA memo:
"The currently available information suggests that the demonstrations in Benghazi were spontaneously inspired by the protests at the U.S. Embassy in Cairo"
Given that the attacks occurred amid a wave of spontaneous protests in Egypt and Yemen that involved violent attacks on US diplomatic missions, the initial CIA assessment sounds reasonable to me, even if it turned out to be incorrect. The question is, did repeating the erroneous CIA assessment to the public cause any harm? It's hard to see how it could have--the attacks were already over at that point. As new information about the attacks came in, the working hypothesis was revised and, as far as I can tell, the press was informed promptly.

But suppose that the conspiracy theorists were right--the White House somehow knew immediately that the attacks were planned and executed by known terrorist organizations, but decided to tell the press instead that it was a spontaneous mob attack. The obvious explanation, at least to my mind, for such misdirection would be that the White House was attempting to protect some intelligence source. This actually happens all the time--for example, even with the Soviet Union gone for more than two decades, the CIA and White House still maintain cover stories about known Soviet spies and plots, because when you infiltrate an enemy or break one of their codes, you never, never let anyone know until you absolutely have to, even if the original enemies are long gone. My point is this: even if it had turned out that the administration deliberately lied, it takes a degree of paranoia to automatically assume that the reason was solely malicious--as if the president hates America, or something--rather than based on a genuine need for secrecy.

And even if it turned out that the White House lied for no good reason, I just don't see how it can be the Watergate-type scandal the right wing thinks it is. Sure, it would be a scandal, in the same way that Bush's WMDs-in-Iraq lie was a scandal, or Clinton's "I did not have sexual relations with that woman" lie was a scandal. But Watergate? Resignation? No way.

At any rate, it is pretty clear that the scandal narrative the right wing has been painting just isn't true. Not that I'm saying the Republicans lied about it--they just didn't have the right information at the time. See what I did there?

Thursday, May 9, 2013

In-Home HIV Testing with OraQuick

Disclaimer: I have not communicated with and have no relationship with the makers of OraQuick, nor was I compensated in any way for this post.
I have written about HIV testing here and here. In both cases I argued in favor of universal adult HIV testing, regardless of risk factors. To prove I also walk the walk, I decided to blog about my experience with the recently approved in-home HIV test called OraQuick, a test you can buy online or in a store that uses a sample of saliva to test for markers indicative of HIV. This is my first experience with in-home testing. Unlike previous in-home HIV tests, which required you to mail the sample to a company for testing, OraQuick gives you results right on the spot in 20 minutes.

The first task, of course, was to find a place to buy the testing kit. I didn't want to order it online, so I looked for it while I was at the store. Their website here names a few stores that supposedly sell the product; Walmart was on the list, but I looked there and could not find it. So I went to Rite Aid instead.

It was not hard to find OraQuick at Rite Aid. It was on the top shelf in a section immediately next to the condoms, among other sexual health products. The price was $39.99--slightly less than the national average for in-hospital testing (which was $48.07), and about equal to the cost of two flu vaccines. I noticed that there was only one kit on the shelf--not sure if that means that not many people are buying so they didn't stock very many, or that plenty of people are buying so they are almost out of stock. At any rate, I grabbed it and headed to the register. The cashier was very talkative with the guy in line in front of me, but there was a marked change in attitude when she saw what I planned to buy. Though she struggled to act normal, I certainly felt like there was a stigma about the whole thing. When she rang up the testing kit, for some reason she needed to enter my birthdate into the computer--in retrospect I'd like to know why: was that store policy, or the law? And since she didn't check an ID, what meaning could it possibly have?

The packaging on the testing kit could be improved. Having done academic research on HIV and testing, I was already familiar with the product and knew what to look for. I had also made deliberate plans to go looking for it in the store. Although the rather ambiguous brand name "OraQuick" was instantly recognizable, someone unfamiliar with the brand would probably not have noticed the much smaller, less readable print where it said "IN-HOME HIV TEST." I would suggest they make this much more prominent on the packaging; maybe that would help remind and encourage regular condom-shoppers to get tested. As it is, I doubt they even notice the HIV tests sitting there on the shelf.

On the back of the package it lists off a set of "risk events" that mean you should get tested:
• Sex (vaginal, oral, or anal) with multiple sex partners
• Sex with someone who is HIV positive or whose HIV status you don’t know
• Sex between a man and another man
• Using illegal injected drugs or steroids
• Shared needles or syringes
• Exchanged sex for money
• Having been diagnosed or treated for hepatitis, tuberculosis or a sexually transmitted disease like syphilis
I'm not too happy with this list. For one thing, I think it is deceptive. At first glance you'd get the impression that a woman currently involved with only one sexual partner shouldn't get tested. But pay close attention to the second bullet--even if your sexual partner has been tested, you can't be sure they haven't been infected since the test, so the correct reading of this is that literally everyone who has had sex should get tested. They should just say so. My second problem with this list is that it is just a list of marginalized and stigmatized groups, and presenting the testing recommendations in this manner will only discourage people from getting tested for fear of being stigmatized. They should, by all means, educate people on the factors that elevate risk for HIV, but the recommendation should be that all adults get tested regularly. As an aside, it says you should not use the test if you are 16 or younger. Given that the CDC recommends tests for everyone 13 and older, and the USPSTF recommends everyone 15 and up, I'd like to know why 13 to 16 year olds shouldn't use OraQuick--is it inaccurate for these demographics?

Inside the cardboard box it came in, there is a second plastic box that contains the actual testing kit--they could streamline the packaging a lot. As I flip open the plastic container, I'm greeted with "Congratulations on your personal decision to test yourself for HIV!" Why thank you.

Inside, there is a test tube with a clear fluid in the bottom, a test stick that has a foam pad on one end and an indicator with a couple of markings next to it on the other end, a short dark green wooden pencil, a couple of booklets, and a self-sealing opaque bag meant to ensure privacy when you throw everything away afterwards. I'm advised to read the booklet titled "HIV, Testing & Me" before proceeding. One interesting factoid it reveals is that in a clinical trial, 8 out of 96 HIV-infected people actually tested negative using OraQuick. So this isn't exactly a highly sensitive test. By contrast, of the 4,903 non-infected people in the study, only one got a positive result--it seems to me that the test is poorly calibrated, since ideally we'd want it to skew towards more false positives and fewer false negatives. Additionally, the booklet contains a phone number you can call for advice and support, and to answer any questions.
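Out of curiosity, here's what those trial numbers imply once you combine them with a population infection rate. This is a minimal sketch: the trial counts are from the booklet as quoted above, but the 0.5% prevalence is my own round assumption for illustration.

```python
# Back-of-the-envelope check on the OraQuick trial numbers quoted above.
# The 0.5% prevalence is my own illustrative assumption, not a booklet figure.

infected, false_neg = 96, 8        # 8 of 96 infected people tested negative
uninfected, false_pos = 4903, 1    # 1 of 4,903 uninfected people tested positive

sensitivity = (infected - false_neg) / infected      # P(test+ | infected), ~91.7%
specificity = (uninfected - false_pos) / uninfected  # P(test- | uninfected), ~99.98%

prevalence = 0.005  # assumed population infection rate, for illustration

# Bayes' rule: how worried should you be after each result?
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive                    # P(infected | test+)
p_infected_given_neg = (1 - sensitivity) * prevalence / (1 - p_positive)

print(f"P(infected | positive result): {ppv:.1%}")                 # ~95.8%
print(f"P(infected | negative result): {p_infected_given_neg:.3%}")  # ~0.04%
```

So at that prevalence a negative result is still quite reassuring at the population level--though none of that helps the one in twelve infected users the test misses.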

After reading the booklet I read the next set of instructions--not to eat or drink or use any oral care products for 30 minutes prior to testing. Oops. We'll reconvene in half an hour...

To actually do the test, I swipe the pad on the test stick along my top gums and then along my bottom gums, as instructed, and then place the test stick into the test tube. Time: 3:18 pm. There is a compartment in the plastic container of the test kit to hold the test tube and test stick while you wait 20 minutes for the result. I'm advised not to check the results before 20 minutes, and no later than 40 minutes, after which the test becomes inaccurate. There are two marks on the test stick next to the indicator, one labeled C and the other labeled T. I'm told that if, after 20 minutes, there is no red line on the indicator next to the C, then the test did not work. If there is a red line next to the T, even if it only shows up as a faint mark, then the test is positive for HIV.

Time 3:38 pm. It has been 20 minutes, so I'm ready to check the results. There is a red line next to the C, no mark of any kind next to the T. I'm HIV negative.

Now that I've done it, you should too! Go get tested, folks!

Wednesday, May 8, 2013

Niall Ferguson's Gay Problem

Plenty has already been said about Niall Ferguson's gaffe in which he attributed Keynes's economic ideas to his homosexuality and lack of offspring. Ferguson issued an unqualified apology--except that, you know, he still believes what he said.

Now, I personally don't think Ferguson is necessarily anti-gay. He claims not to have any objection to homosexuality, and I will take him at his word. But I think there is a deeper issue here--his comments, attributing a person's professional opinions entirely to his personal life, reveal that Niall Ferguson isn't just bad at economics; he is also bad at history.

I say this as someone who, unlike Ferguson, has degrees in both history and economics. Ferguson bills himself as an "economic historian," but what you need to know is that there are two types of economic historians: there are economists who study historical topics, and then there are historians who study economic topics. Ferguson is the latter. There is nothing wrong with that--economics has been a powerful influence on history, and it is entirely appropriate to study that--but it means that Ferguson has literally no training in social science, policy analysis, or economics. He is an historian. That's an important distinction, because it really is inappropriate for historians to give policy advice. Sure, Ferguson is entitled to give his opinion, same as anyone else, but we should recognize that Ferguson does not do so from a position of authority--he is not a social scientist, and has no background in policy analysis techniques. If you still don't believe me that Ferguson should not be giving policy advice, let me explain with an analogy: Roy Porter held a doctorate in history from Cambridge and wrote The Greatest Benefit to Mankind: A Medical History of Humanity. He was one of the world's leading medical historians; would you trust him to be your physician when you get sick? Like I said, Roy Porter would be entitled to say what he thinks about your illness, but not from a position of authority--you need to get yourself someone who, you know, went to med school.

But when we examine Niall Ferguson's historical analysis, we find it lacking as well. Ferguson's view that Keynes's sexuality directed his economic theories has permeated his work throughout his career. Brad DeLong finds Ferguson way back in 1995 writing that Keynes's desire for more leniency towards the Germans in the Treaty of Versailles, as well as his inflation-dove views, were attributable to his homosexual affair with a German staffer at the treaty negotiations. This is all kinds of wrong. For one, the British in general, not just Keynes, pushed for more leniency for the Germans--this is well established in the historical literature, and it wasn't because all British politicians had gay crushes on German boys. Besides, are we really to believe that there were no heterosexual affairs at Versailles? Ferguson really does single Keynes out for his sexual orientation. Second, Ferguson actually admits that Keynes's "affair" was not sexual at all. So, if it was anything, it was really more of a bromance--what does that have to do with Keynes's sexual orientation? Are we to believe that had Keynes been straight, he would never have befriended any Germans?

But my real point is this: Ferguson's suggestion that an economist's professional theories are based on his personal life only makes sense if you don't think about it. For illustration, consider Nate Silver. He is gay. He predicted Obama would win in 2012. Obama supports gay marriage. By Ferguson's reasoning, Nate Silver's professional forecast was a result of his desire to see a pro-gay president win re-election--and in fact, there were lots of right-wing commentators who dismissed Silver's forecasts for precisely this reason. In reality, however, Nate Silver's forecast was based not on his personal sexual orientation, but on the overwhelming weight of the evidence. Nate Silver might be gay, but his statistical forecasting models most certainly do not have a sexual orientation. Any suggestion that Silver's forecasts were personal rather than scientific is simply wrong, if not downright offensive.

With Keynes, it is much the same story. The only way you can claim that Keynes's views on inflation or counter-cyclical fiscal policy were a consequence of his personal life--gay or straight, children or childless--is if you completely ignore all of the actual science he conducted, the arguments he made, and the evidence he presented.

Niall Ferguson's problem isn't that he's homophobic. It isn't even that he lacks training in the social sciences. His real problem is that he is a bad historian. My hope is that Ferguson's gay problem will be his "Excel error" moment, and that people will now start to reconsider his qualifications.

Monday, May 6, 2013

A few thoughts on statistical significance

The new study results from the Oregon Medicaid experiment have everyone weighing in on how to interpret various statistically significant and insignificant results.

One problem that has been pointed out is that some people are improperly comparing statistical significance across results. This isn't just technically invalid, but actually downright stupid when you think about it. For example, it is possible for two studies to find exactly the same point estimate, yet have one be statistically significant while the other is not (if, for example, one had more observations than the other). And even if the two estimates are different, if the point estimate from one falls within the confidence interval of the other, then we really can't say that the two results differed at all.
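Here's a minimal sketch of that first point, with hypothetical numbers (not the Oregon data): two studies report exactly the same effect estimate, but only the larger one clears the conventional 5% threshold.

```python
# Two "studies" with identical point estimates can differ in statistical
# significance purely because of sample size. Hypothetical numbers.
from scipy import stats

effect, sd = 2.0, 10.0  # same estimated effect and spread in both studies

for n in (50, 500):  # small study vs. large study
    se = sd / n ** 0.5                    # standard error shrinks with n
    t = effect / se
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-sided p-value
    print(f"n = {n}: estimate = {effect}, p = {p:.5f}")

# n = 50:  estimate = 2.0, p ≈ 0.16     (not significant)
# n = 500: estimate = 2.0, p ≈ 0.00001  (highly significant)
```

Declaring that these two studies "found different things" would obviously be wrong--they found exactly the same thing, measured with different precision.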

Other problems have to do with the interpretation of P-values. Some people yelled at Austin Frakt for saying that a P-value of 0.07 was "almost significant." Now, we would have a problem if scientists were adjusting significance thresholds in response to the results of their studies, but since that is clearly not what Frakt was doing, his interpretation is completely valid in a frequentist world--the P-value is a continuous measure of evidence.

On a more general note, no one is quite sure what a P-value is. Pedagogically, it is the conditional probability of observing data at least as discrepant (by some measure) from the null model as the data we actually observed, given that the null hypothesis is true. Stating it that way, I think, reveals how absurd it is to suggest that the P-value is the probability that the null is true--that is, in fact, the very probability we have conditioned out. If that P-value concept sounds circuitous, that's because it is. Scientists would love to know the probability that the null is true, but we cannot observe unconditional probabilities--all the data we observe is conditional on whatever the truth actually is. So instead we essentially hunt for discrepancies between the observed data and data predicted from a theoretical model in which the null is true. If the observed discrepancy would be sufficiently unlikely under that model, we "reject the null."
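That "hunting for discrepancies" description can be made concrete with a simulation. Here's a toy sketch of my own (a fair-coin null and made-up data, nothing from the Oregon study): the P-value is just the share of datasets generated under the null that are at least as discrepant as the one we observed.

```python
# Monte Carlo illustration of a P-value: simulate data from a model in
# which the null is true, and ask how often the simulated discrepancy is
# at least as large as the observed one.
import numpy as np

rng = np.random.default_rng(0)

# Null model: a fair coin. Observed data: 60 heads in 100 flips.
n_flips, observed_heads = 100, 60
observed_discrepancy = abs(observed_heads - n_flips * 0.5)

# Many datasets generated under the null, scored by the same discrepancy.
sims = rng.binomial(n_flips, 0.5, size=100_000)
sim_discrepancies = np.abs(sims - n_flips * 0.5)

p_value = (sim_discrepancies >= observed_discrepancy).mean()
print(f"p = {p_value:.3f}")  # ~0.057: P(data this extreme | null is true)
```

Note what that number is not: it is not the probability that the coin is fair. It is the probability of seeing data this lopsided if the coin is fair.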

That brings me to the next thought, which is that failing to reject the null should not, in general, mean accepting the null. Too often I see people talk about accepting versus rejecting the null--you are never supposed to "accept" the null hypothesis. Failing to find evidence against the null is not the same thing as finding evidence for it. In policy, it is often necessary to act on one hypothesis or the other--for example, a doctor must assume either that a patient needs treatment or that he doesn't, and there is no neutral position. As a risk-management technique, we deal with this by making the null hypothesis the one whose wrongful rejection would be more damaging. On a related note, rejecting the null also isn't necessarily the same as accepting the alternative. Or rather, it's not the same as accepting your alternative. Case in point: rejecting the null hypothesis that the results of your randomized experiment were random should not lead you to conclude that magic is real. At best, you can conclude that the results were non-random. That is, we can only make statistical distinctions between hypotheses that are not only mutually exclusive, but complementary--together they must exhaust the possibilities.
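A quick simulation (again a toy example of my own) shows why failing to reject is not evidence for the null: below, the null hypothesis of zero effect is false by construction, yet the large majority of small studies still fail to reject it.

```python
# The null (mean = 0) is false by construction here, yet an underpowered
# study usually fails to reject it--which is not evidence the null is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, sd, n, trials = 2.0, 10.0, 25, 10_000

rejections = 0
for _ in range(trials):
    sample = rng.normal(true_effect, sd, size=n)
    result = stats.ttest_1samp(sample, popmean=0.0)
    rejections += result.pvalue < 0.05

print(f"power = {rejections / trials:.0%}")
# ~17%: roughly five out of six of these small studies "fail to reject"
# a null that is known to be false.
```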

A lot of this confusion could be cleared up if, instead of reporting P-values, we reported confidence intervals. But I'm sure we'd have plenty of misinterpretations there too. The correct interpretation of a 95% confidence interval is that we are 95% "confident" that the true parameter is within the interval. But then, that's where frequentist assumptions come back to bite us--we have to say "we are 95% confident" rather than "the probability is 95%" because the latter requires knowledge about priors that frequentists profess not to have. The Bayesians have us beat on that point. But the point I really want to make is this: inevitably, someone asserts that the probability (or "confidence") of observing an outcome outside the 95% interval is 5%. NO! The confidence interval is a range for the parameter value, not for individual observations. In fact, you will typically observe lots of data falling outside the computed confidence interval. If forecasting is what you want, then you need to compute a prediction interval, not a confidence interval.
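To make the distinction concrete, here's a sketch with simulated data of my own: the 95% confidence interval for the mean is tiny with a couple hundred observations, and most of the data falls well outside it; the 95% prediction interval is the one that is supposed to cover new observations.

```python
# Confidence interval (for the mean) vs. prediction interval (for new data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(50.0, 10.0, size=200)
n, mean, sd = len(x), x.mean(), x.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)

ci = (mean - t * sd / n ** 0.5, mean + t * sd / n ** 0.5)  # range for the mean
pi = (mean - t * sd * (1 + 1 / n) ** 0.5,
      mean + t * sd * (1 + 1 / n) ** 0.5)                  # range for new data

inside_ci = ((x > ci[0]) & (x < ci[1])).mean()
print(f"95% CI for the mean: ({ci[0]:.1f}, {ci[1]:.1f})")
print(f"share of observations inside the CI: {inside_ci:.0%}")  # only ~10%!
print(f"95% prediction interval: ({pi[0]:.1f}, {pi[1]:.1f})")
```

The confidence interval shrinks with the square root of the sample size; the prediction interval does not, because the spread of individual observations never goes away.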

Government Backs Universal HIV Testing

Via Laura Newman, the United States Preventive Services Task Force, an independent government-funded body that reviews research and makes recommendations about the utilization of preventive medical services (for example, recommendations on who should get tested for HIV), has now recommended that HIV testing be included as part of the regular blood screening for everyone 15 years old and up. Since this is what I argued for in a previous post here, it makes me think that maybe someone actually reads my blog.

As regular readers will recall, I did a crude cost-benefit analysis in a previous post and showed that, on net, annual testing of all adults would be welfare-improving. As a side note, a number of readers misinterpreted this welfare analysis. I did not argue that the testing should be paid for by the government (though that's not something I'd necessarily oppose), nor did I argue that testing would be a net savings to the US treasury. It will certainly cost a lot more money than it saves, but the statistical value of the lives it would save is much more than the cost. And from a social welfare perspective, that's true no matter who is paying for the tests and treatments.

As a side note to my side note, I want to confess two major flaws in my back-of-the-envelope calculation: 1) I understated the amount that the treatments would cost, because I implicitly assumed that people who don't get treated die immediately from the infection. I should have used an overlapping generations model, and the treatment costs would be several times larger than the $400 million figure I came up with. This error, however, is mostly offset by the fact that 2) I did not count the savings from the fact that universal testing would reduce transmission rates by as much as 90%. That is, I implicitly assumed that detection and treatment had no effect on transmission. The blog post wasn't meant to be a scientific cost-benefit analysis, just a starting point for thinking about the costs. But it would have been wise to mention those two shortcomings in particular.

I will also add that while I based my calculation on the census count of adults, which the Census defines as 18+, the task force recommends regular testing for everyone 15 years old and up, so the total cost of testing is somewhat larger than the $12 billion I calculated (worth noting that the CDC recommends routine testing starting at age 13). Starting testing at age 13 or 15 is a better choice than 18 because, for sexual health purposes, puberty is the appropriate demarcation between adults and children. Any population that is likely to be engaged in penetrative sexual activity is at risk for HIV. It is important to start treatment immediately after infection (it has even been shown that early enough treatment, within a month, can functionally cure HIV-infected people). And it is absurd to suggest that 15 to 18 year olds are all virgins.
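For what it's worth, here is the testing-cost arithmetic laid out explicitly. The per-test cost is the $48.07 in-hospital average quoted in my OraQuick post; the population counts are round figures I'm assuming for illustration, not census extracts.

```python
# Rough annual cost of universal routine HIV testing. Population figures
# below are assumed round numbers for illustration.
cost_per_test = 48.07       # average in-hospital test cost quoted earlier

adults_18_plus = 240e6      # assumed: roughly the US 18+ population
teens_15_to_17 = 13e6       # assumed: roughly three birth-year cohorts

cost_adults = adults_18_plus * cost_per_test
cost_15_and_up = (adults_18_plus + teens_15_to_17) * cost_per_test

print(f"testing 18+ only:  ${cost_adults / 1e9:.1f} billion per year")
print(f"testing 15 and up: ${cost_15_and_up / 1e9:.1f} billion per year")
# ~$11.5B vs ~$12.2B: following the task force down to age 15 adds
# relatively little to the total.
```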

I mention this because the American Academy of Family Physicians has disagreed with the recommendations, saying that 15 to 18 year olds should not be included. That's not to say that they think no 15 to 18 year olds should be tested, but rather that testing should only be done if they are part of an elevated-risk demographic, such as sexually active gay boys.

The CDC estimates that 1.3 out of every 10,000 people between 15 and 19 years old have HIV. An interesting question is how likely these early-infection cases are to die from failing to start treatment early enough. It seems likely to me that the propensity would be quite high: research shows that patients, especially minors, aren't likely to disclose their risk factors for HIV (like gay sex or intravenous drug use) to their doctors, and that on average, symptoms don't start until 10 years after infection, when it is too late to completely prevent AIDS. Moreover, this 10-year window when the kid isn't aware of his infection is the period of his life when he is most likely to transmit it to other people. So while the infection rates are relatively low for this demographic, it seems to me that the potential benefits of testing are still pretty high. And for a mostly meaningless welfare calculation: the statistical value of 1.3 lives is about $4 million, the cost of treatment is $26,000, and the cost of testing 10,000 people is $480,700. That's a net benefit of $3,493,300 per 10,000 adolescents. Of course, that's mostly meaningless because 1) we need to subtract the costs and benefits of people who would have been tested and treated anyway, 2) the value of a statistical life I used was based on a population-weighted average age of somewhere around 40, not the 15 to 19 year olds under consideration, 3) the benefits from reduced transmission rates were not included, and 4) we can't really say that 1.3 per 10,000 lives would be saved by the testing/treatment.
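To make that back-of-the-envelope explicit, here is the same calculation in a few lines. The only inputs are the figures quoted above; the roughly $3.1 million value per life is the one implied by "1.3 lives ≈ $4 million."

```python
# Net benefit of testing 10,000 adolescents, using the figures above.
infections_per_10k = 1.3          # CDC estimate for 15-19 year olds
value_per_life = 4_000_000 / 1.3  # implied value of a statistical life, ~$3.1M
cost_per_test = 48.07             # in-hospital average quoted earlier
treatment_cost = 26_000           # total treatment cost for the 1.3 cases

benefits = infections_per_10k * value_per_life   # $4,000,000
costs = 10_000 * cost_per_test + treatment_cost  # $480,700 + $26,000 = $506,700

print(f"benefits: ${benefits:,.0f}")
print(f"costs:    ${costs:,.0f}")
print(f"net:      ${benefits - costs:,.0f}")     # $3,493,300 per 10,000 teens
```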

But still, that is a starting point for thinking about the cost-benefit. Medical doctors aren't as comfortable with the idea of putting a dollar value on a statistical life, and I think that is to their discredit. To be sure, I think the empirical estimates of the value of a life are wildly inaccurate. I've seen estimates range from $0.8 million to $80 million, and there is convincing evidence that they systematically overstate the value because of publication bias (statistically insignificant results don't get published), which is why I used lower-end estimates. But on the other hand, whenever groups like the American Academy of Family Physicians make recommendations on things like HIV testing, they are implicitly placing a value on a statistical life--and to an extremely rough approximation, they have just said that they think the value of 1.3 statistical 15-to-19-year-old lives is less than $506,700, or roughly $390,000 per life--well below even the lowest published estimates. At least I make my values calculation explicit.