The problem with ProPublica’s surgeon scorecards

ProPublica is an organisation performing independent, non-profit investigative journalism in the public interest. Yesterday it published an analysis of surgeon-level complication rates based on Medicare data.

Publication of individual surgeons' results is well established in the UK. Transparent, easily accessible healthcare data are essential, and initiatives like this are welcome.

It is important that data are presented in a way that can be clearly understood. Communicating risk is notoriously difficult, and it is particularly difficult to describe the precision with which a risk has been estimated.

Unfortunately that is where ProPublica have got it all wrong.

There is an inherent difficulty when dealing with individual surgeon data. In order to be sure that a surgeon has a complication rate higher than average, that surgeon needs to have performed a certain number of that particular procedure. If data are only available on a small number of cases, we can't be certain whether the surgeon's complication rate is truly high, or just appears to be high by chance.

If you tossed a coin 10 times and it came up with 7 heads, could you say whether the coin was fair or biased? With only 10 tosses we don’t know.

Similarly, if a surgeon performs 10 operations and has 1 complication, can we be sure that their true complication rate is 10%, rather than 5% or 20%? With only 10 operations we don't know.
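To make this concrete, here is a minimal sketch (in Python) of a 95% confidence interval for a simple proportion, using the Wilson score method. It illustrates the general point only; it is not a reproduction of ProPublica's adjusted-rate methodology.

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for an observed proportion events/n."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 7 heads in 10 tosses: the interval comfortably contains 0.5, i.e. a fair coin
print(wilson_ci(7, 10))   # roughly (0.40, 0.89)

# 1 complication in 10 operations: anywhere from about 2% to 40% is plausible
print(wilson_ci(1, 10))   # roughly (0.02, 0.40)
```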

The presentation of the ProPublica data is really concerning. Here’s why.

For a given hospital, data are presented for individual surgeons. Bands are provided which define “low”, “medium” and “high” adjusted complication rates. If the adjusted complication rate for an individual surgeon falls within the red-zone, they are described as having a “high adjusted rate of complications”.

How confident can we be that a surgeon in the red-zone truly has a high complication rate? To get a handle on this, we need to turn to an off-putting statistical concept called a "confidence interval". As its name implies, a confidence interval tells us how much confidence we can place in the estimated complication rate.

If the surgeon has done many procedures, the confidence interval will be narrow. If we only have data on a few procedures, the confidence interval will be wide.

To be confident that a surgeon has a high complication rate, the 95% confidence interval needs to lie entirely in the red-zone.

A surgeon should be highlighted as having a high complication rate if and only if the confidence interval lies entirely in the red-zone.

Here is an example. This surgeon performs the procedure to remove the gallbladder (cholecystectomy). There are data on 20 procedures for this individual surgeon. The estimated complication rate is 4.7%. But the 95% confidence interval goes from the green-zone all the way to the red-zone. Due to the small number of procedures, all we can conclude is that this surgeon has either a low, medium, or high adjusted complication rate. Not very useful.
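As a rough back-of-envelope check (treating the adjusted rate as if it were a crude proportion of about 1 complication in 20 procedures, which is not how ProPublica's adjustment actually works), the interval is enormous, and only narrows once the caseload grows:

```python
from statsmodels.stats.proportion import proportion_confint

# About 1 complication in 20 procedures (a crude stand-in for ~4.7% adjusted)
print(proportion_confint(1, 20, alpha=0.05, method="wilson"))
# roughly (0.009, 0.24): anywhere from under 1% to nearly 24%

# The same 5% rate observed over 200 procedures
print(proportion_confint(10, 200, alpha=0.05, method="wilson"))
# roughly (0.027, 0.090): a far more informative interval
```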

Here are some other examples; a code sketch of the classification rule follows the list.

Adjusted complication rate: 1.5% on 339 procedures. Surgeon has low or medium complication rate. They are unlikely to have a high complication rate.

Adjusted complication rate: 4.0% on 30 procedures. Surgeon has low, medium, or high complication rate. Note that, due to the low number of cases, the analysis correctly suggests a non-zero estimated complication rate despite the fact that this surgeon has had no complications in the 30 procedures.

Adjusted complication rate: 5.4% on 21 procedures. ProPublica conclusion: surgeon has high adjusted complication rate. Actual conclusion: surgeon has low, medium, or high complication rate.

Adjusted complication rate: 6.6% on 22 procedures. ProPublica conclusion: surgeon has high adjusted complication rate. Actual conclusion: surgeon has medium or high complication rate, but is unlikely to have a low complication rate.

Adjusted complication rate: 7.6% on 86 procedures. ProPublica conclusion: surgeon has high adjusted complication rate. Actual conclusion: surgeon has high complication rate. This is one of the few examples in the dataset where the analysis suggests the surgeon really is likely to have a high complication rate.
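The rule itself is simple to express in code. This sketch uses an invented red-zone boundary of 5%, invented caseloads, and a plain binomial interval rather than ProPublica's risk-adjusted model; it is only meant to show how the decision should work.

```python
from statsmodels.stats.proportion import proportion_confint

RED_ZONE_START = 0.05   # hypothetical lower boundary of the "high" zone

def verdict(complications, procedures):
    """Flag 'high' only if the whole 95% CI lies inside the red-zone."""
    lo, hi = proportion_confint(complications, procedures,
                                alpha=0.05, method="wilson")
    if lo >= RED_ZONE_START:
        return "high complication rate"
    if hi < RED_ZONE_START:
        return "low or medium complication rate"
    return "too few cases to judge"

print(verdict(40, 500))   # 8% on 500 procedures -> "high complication rate"
print(verdict(2, 25))     # 8% on 25 procedures  -> "too few cases to judge"
```

The point estimate is 8% in both cases; only the larger caseload justifies a red flag.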

In the UK, only this last example would be highlighted as concerning. That is because, for the others, we have no idea whether surgeons who happen to fall into the red-zone are truly different to average.

The analysis above does not deal with issues others have highlighted: that this is Medicare data only, that important data may be missing, that the adjustment for patient case mix may be inadequate, and that the complication rates seem different to what would be expected.

ProPublica have not moderated the language used in reporting these data. My view is that the data are being misrepresented.

ProPublica should highlight cases like the last one mentioned above. For all the others, the only conclusion is that there are too few cases to make a judgement on whether the surgeon's complication rate is different to average.

Statistical errors in published medical studies

I do a fair amount of peer-review for journals. My totally subjective impression – which I can't back up with figures – is that fundamental errors in data analysis occur on a fairly frequent basis. Opaque descriptions of methods and no access to raw data often make errors difficult to detect.

We’re performing a meta-analysis at the moment. This is a study in which two or more clinical trials of the same treatment are combined. This can be useful when there is uncertainty about the effectiveness of a treatment.

Relevant trials are rigorously searched for and their quality assessed. The results of good-quality trials are then combined, usually with more weight being given to the more reliable trials. This weight reflects the number of patients in the trial and, for some measures, the variability in the results. This variation is important – trials with low variability are highly influential in the final results of the meta-analysis.
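For illustration, this is roughly what the inverse-variance (fixed-effect) weighting looks like. The estimates and standard errors below are invented; a real analysis would use the actual trial data and a dedicated meta-analysis package.

```python
# Invented trial results: (mean difference in outcome, standard error)
trials = [
    (-0.50, 0.30),
    (-0.20, 0.15),
    (-0.35, 0.40),
]

# Fixed-effect inverse-variance weights: low variability -> large weight
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled estimate: {pooled:.2f} (SE {pooled_se:.2f})")
print("relative weights:", [round(w / sum(weights), 2) for w in weights])
# the middle trial, with the smallest standard error, carries ~70% of the weight
```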

What are we doing the meta-analysis on? We often operate to remove a piece of liver due to cancer. Sometimes we have to clamp the blood supply to the liver to prevent bleeding. An obvious consequence of this is damage to the liver tissue.

Multiple local liver resections. Patient provided consent for image publication.

It may be possible to protect the liver (and any organ) from these damaging effects by temporarily clamping the blood supply for a short time, then releasing the clamp and allowing blood to flow back in. The clamp is then replaced and the liver resection performed. This is called “ischemic preconditioning” and may work by stimulating liver cells to protect themselves. “Batten down the hatches boys, there’s a storm coming!”

Results of this technique are controversial – when used in patients some studies show it works, some show no benefit. So should we be using it in our day-to-day practice?

We searched for studies examining ischemic preconditioning and found quite a few.

One in particular, performed by surgeons in Hungary, seemed to show that the technique worked very well (1). The variability in this study was low as well, so it seemed reliable. Actually, the variability was very low – lower than in all the other trials we found.

 

Variation in biochemical outcome measures in studies of ischemic preconditioning.

The graph shows 3 of the measures used to determine success of the preconditioning. The first two are enzymes released from damaged liver cells and the third, bilirubin, is processed by the liver. All the studies show some lowering of these measures signifying potential improvement with the treatment. But most trials show a lot of variation between different patients (the vertical lines).

Except the Hungarian study, which shows almost no variation.

Even compared with a study in which these tests were repeated in healthy individuals in the US (9), the variation was low. That seemed strange. Surely the day-to-day variation in your or my liver tests should be lower than that of a group of patients undergoing surgery?

It looks like a mistake.

It may be that the authors wrote that they used one measure of variation when they actually used another (standard error of the mean vs. standard deviation). This could be a simple mistake; the details are here.
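For a sense of scale, here is a small simulated illustration of how big that mix-up would be: with around 30 patients, the standard error of the mean is more than five times smaller than the standard deviation, so a trial reporting SEM as if it were SD would look dramatically less variable than it really was. The numbers below are simulated, not taken from the Hungarian trial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated post-operative liver enzyme values for 30 patients (skewed, as in life)
alt = rng.lognormal(mean=5.5, sigma=0.6, size=30)

sd = alt.std(ddof=1)            # spread between patients
sem = sd / np.sqrt(len(alt))    # uncertainty of the mean: SD / sqrt(n)

print(f"SD  = {sd:.0f}")
print(f"SEM = {sem:.0f}")       # about 5.5 times smaller with n = 30
```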

 

But we don't know. We wrote to the authors three times, but they didn't get back to us. We asked the journal and they are looking into it.


1 Hahn O, Blázovics A, Váli L, et al. The effect of ischemic preconditioning on redox status during liver resections-randomized controlled trial. Journal of Surgical Oncology 2011;104:647–53.
2 Clavien P-A, Selzner M, Rüdiger HA, et al. A Prospective Randomized Study in 100 Consecutive Patients Undergoing Major Liver Resection With Versus Without Ischemic Preconditioning. Annals of Surgery 2003;238:843–52.
3 Li S-Q, Liang L-J, Huang J-F, et al. Ischemic preconditioning protects liver from hepatectomy under hepatic inflow occlusion for hepatocellular carcinoma patients with cirrhosis. World J Gastroenterol 2004;10:2580–4.
4 Choukèr A, Martignoni A, Schauer R, et al. Beneficial effects of ischemic preconditioning in patients undergoing hepatectomy: the role of neutrophils. Arch Surg 2005;140:129–36.
5 Petrowsky H, McCormack L, Trujillo M, et al. A Prospective, Randomized, Controlled Trial Comparing Intermittent Portal Triad Clamping Versus Ischemic Preconditioning With Continuous Clamping for Major Liver Resection. Annals of Surgery 2006;244:921–30.
6 Heizmann O, Loehe F, Volk A, et al. Ischemic preconditioning improves postoperative outcome after liver resections: a randomized controlled study. European Journal of Medical Research 2008;13:79.
7 Arkadopoulos N, Kostopanagiotou G, Theodoraki K, et al. Ischemic Preconditioning Confers Antiapoptotic Protection During Major Hepatectomies Performed Under Combined Inflow and Outflow Exclusion of the Liver. A Randomized Clinical Trial. World J Surg 2009;33:1909–15.
8 Scatton O, Zalinski S, Jegou D, et al. Randomized clinical trial of ischaemic preconditioning in major liver resection with intermittent Pringle manoeuvre. Br J Surg 2011;98:1236–43.
9 Lazo M, Selvin E, Clark JM. Brief communication: clinical implications of short-term variability in liver function test results. Ann Intern Med 2008;148:348–52.