Bayesian statistics and clinical trial conclusions: Why the OPTIMISE study should be considered positive

Statistical approaches to randomised controlled trial analysis

The statistical approach used in the design and analysis of the vast majority of clinical studies is often referred to as classical or frequentist. Conclusions are drawn from hypothesis tests, with p-values and confidence intervals, and the requirement is that the correct conclusion be drawn with high probability across a notional set of repetitions of the trial.

Bayesian inference is an alternative which treats conclusions probabilistically and provides a different framework for thinking about trial design and conclusions. There are many differences between the two, but for this discussion two distinctions of the Bayesian approach stand out. The first is that prior knowledge can be accounted for to a greater or lesser extent, something life scientists sometimes have difficulty reconciling. Secondly, the conclusions of a Bayesian analysis often focus on the decision that needs to be made, e.g. should this new treatment be used or not?

There are pros and cons to both sides, nicely discussed here, but I would argue that the results of frequentist analyses are too often accepted with insufficient criticism. Here’s a good example.

OPTIMISE: Optimisation of Cardiovascular Management to Improve Surgical Outcome

Optimising the amount of blood being pumped out of the heart during surgery may improve patient outcomes. By specifically measuring cardiac output in the operating theatre and using it to guide intravenous fluid administration and the use of drugs acting on the circulation, the amount of oxygen that is delivered to tissues can be increased.

It sounds like common sense that this would be a good thing, but drugs can have negative effects, as can giving too much intravenous fluid. There are also costs involved: is the effort worth it? Small trials have suggested that cardiac output-guided therapy may have benefits, but a large Cochrane review concluded that the results remain uncertain.

A well-designed and well-run multi-centre randomised controlled trial was performed to try to determine whether this intervention was of benefit (OPTIMISE: Optimisation of Cardiovascular Management to Improve Surgical Outcome).

Patients were randomised to a cardiac output–guided hemodynamic therapy algorithm for intravenous fluid and a drug to increase heart muscle contraction (the inotrope, dopexamine) during and 6 hours following surgery (intervention group) or to usual care (control group).

The primary outcome measure was a composite of 30-day moderate or major complications and mortality, summarised as a relative risk (RR).

OPTIMISE: reported results

Focusing on the primary outcome measure, there were 158/364 (43.4%) and 134/366 (36.6%) patients with complication/mortality in the control and intervention groups, respectively. Numerically at least, the results appear better in the intervention group than in the controls.

Using the standard statistical approach, the relative risk (95% confidence interval) = 0.84 (0.70-1.01), p=0.07, and the absolute risk difference = 6.8% (−0.3% to 13.9%), p=0.07. This is interpreted as there being insufficient evidence that the relative risk for complication/death is different from 1.0 (all analyses replicated below). The authors reasonably concluded that:
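These headline figures can be approximated from the raw counts alone. A minimal sketch in Python, using a normal approximation on the log relative risk; it is unadjusted, so it lands close to, but not exactly on, the published figures:

```python
# Approximate replication of the headline OPTIMISE figures from raw counts.
# Unadjusted normal approximation -- the published values come from the
# trial's own analysis and may differ slightly in the last digit.
import math

events_c, n_c = 158, 364   # control: events / patients
events_i, n_i = 134, 366   # intervention: events / patients

p_c, p_i = events_c / n_c, events_i / n_i
rr = p_i / p_c                                           # relative risk
se_log_rr = math.sqrt(1/events_i - 1/n_i + 1/events_c - 1/n_c)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)           # 95% CI lower
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)           # 95% CI upper
z = math.log(rr) / se_log_rr
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
ard = p_c - p_i                                          # absolute risk difference

print(f"RR = {rr:.2f} ({lo:.2f}-{hi:.2f}), p = {p_value:.2f}, ARD = {ard:.1%}")
```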

In a randomized trial of high-risk patients undergoing major gastrointestinal surgery, use of a cardiac output–guided hemodynamic therapy algorithm compared with usual care did not reduce a composite outcome of complications and 30-day mortality.

An observed difference does exist between the groups, but it is not judged sufficient evidence of a true difference by this conventional approach.

OPTIMISE: Bayesian analysis

Repeating the same analysis using Bayesian inference provides an alternative way to think about this result. What are the chances the two groups actually do have different results? What are the chances that the two groups have clinically meaningful differences in results? What proportion of patients stand to benefit from the new intervention compared with usual care?

With regard to prior knowledge, this analysis will not presume any prior information. This makes the point that prior information is not always necessary to draw a robust conclusion. It may be very reasonable to use the results of pre-existing meta-analyses to specify a weak prior, but that has not been done here. I am very grateful to John Kruschke for the excellent scripts and book, Doing Bayesian Data Analysis.

The results of the analysis are presented in the graph below. The top panel is the prior distribution. All proportions for the composite outcome in both the control and intervention group are treated as equally likely.

The middle panel contains the main findings. This is the posterior distribution generated in the analysis for the relative risk of the composite primary outcome (technical details in script below).
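With a uniform Beta(1, 1) prior on each group's event proportion, the posteriors are conjugate Beta distributions, so the posterior for the relative risk can also be sampled directly without MCMC. A minimal sketch in Python of this shortcut (the figures will differ very slightly from an MCMC run of the full model):

```python
# Conjugate sketch of the Bayesian analysis: uniform Beta(1, 1) priors
# give Beta posteriors for each group's event proportion, which we sample
# directly and combine into a posterior for the relative risk.
import random

random.seed(42)
N = 200_000
# control: 158/364 events -> posterior Beta(1+158, 1+206)
# intervention: 134/366 events -> posterior Beta(1+134, 1+232)
theta_c = [random.betavariate(1 + 158, 1 + 364 - 158) for _ in range(N)]
theta_i = [random.betavariate(1 + 134, 1 + 366 - 134) for _ in range(N)]

rr_samples = [ti / tc for ti, tc in zip(theta_i, theta_c)]
mean_rr = sum(rr_samples) / N
p_benefit = sum(r < 1 for r in rr_samples) / N  # P(RR < 1 | data)

print(f"mean RR = {mean_rr:.2f}, P(RR < 1) = {p_benefit:.1%}")
```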

The mean relative risk = 0.84, which as expected is the same as in the frequentist analysis above. Rather than a confidence interval, in Bayesian statistics a credible interval or region is quoted (the highest density interval, HDI, is one such interval). This is philosophically different from a confidence interval and says:

Given the observed data, there is a 95% probability that the true RR falls within this credible interval.

This is a subtle distinction from the frequentist interpretation of a confidence interval:

Were I to repeat this trial many times and compute a confidence interval each time, 95% of those intervals would contain the true RR.

This is an important distinction and can be extended to make useful probabilistic statements about the result.

The figures in green give us the proportion of the distribution above and below 1.0. We can therefore say:

The probability that the intervention group has a lower incidence of the composite endpoint is 97.3%.

It may be useful to be more specific about the size of difference between the control and treatment group that would be considered equivalent, e.g. 10% above and below a relative risk = 1.0. This is sometimes called the region of practical equivalence (ROPE; red text on plots). Experts would determine what was considered equivalent based on many factors. We could therefore say:

The probability of the composite end-point for the control and intervention group being equivalent is 22%.

Or: the probability of a clinically relevant difference existing in the composite endpoint between the control and intervention groups is 78%.
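Under the same conjugate uniform-prior sketch, the ROPE figures come from counting posterior RR samples inside and outside the 0.9-1.1 region. The exact percentages depend on model details, so this approximates rather than reproduces the quoted values:

```python
# ROPE sketch: proportion of posterior relative-risk samples falling
# inside the 0.9-1.1 region of practical equivalence (conjugate
# uniform-prior model, so only an approximation of the quoted figures).
import random

random.seed(42)
N = 200_000
theta_c = [random.betavariate(159, 207) for _ in range(N)]  # control 158/364
theta_i = [random.betavariate(135, 233) for _ in range(N)]  # intervention 134/366

rr_samples = [ti / tc for ti, tc in zip(theta_i, theta_c)]
in_rope = sum(0.9 < r < 1.1 for r in rr_samples) / N

print(f"P(0.9 < RR < 1.1) = {in_rope:.0%}, "
      f"P(clinically relevant difference) = {1 - in_rope:.0%}")
```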

Finally, we can use the 200 000 estimates of the probability of complication/death in the control and intervention groups that were generated in the analysis (posterior prediction). In essence, we can act as if these were 2 × 200 000 patients. For each “patient pair”, we can use their probability estimates and perform a random draw to simulate the occurrence of complication/death. It may then be useful to look at the proportion of “patient pairs” where the intervention patient didn’t have a complication but the control patient did:

Using posterior prediction on the generated Bayesian model, the probability that a patient in the intervention group did not have a complication/death when a patient in the control group did have a complication/death is 28%.
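This posterior-predictive step can be sketched the same way: for each posterior draw, simulate one control and one intervention “patient” and count the pairs where the intervention patient avoided the event while the control patient did not (again a conjugate uniform-prior approximation of the analysis described above):

```python
# Posterior-predictive sketch: simulate one control/intervention patient
# pair per posterior draw and count pairs where only the control patient
# suffered the composite event (approximates the 28% quoted in the post).
import random

random.seed(42)
N = 200_000
benefit_pairs = 0
for _ in range(N):
    theta_c = random.betavariate(159, 207)  # control posterior, 158/364 events
    theta_i = random.betavariate(135, 233)  # intervention posterior, 134/366
    control_event = random.random() < theta_c
    intervention_event = random.random() < theta_i
    if control_event and not intervention_event:
        benefit_pairs += 1

print(f"P(intervention patient avoids event while control patient does not) "
      f"= {benefit_pairs / N:.0%}")
```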


On the basis of a standard statistical analysis, the OPTIMISE trial authors reasonably concluded that the use of the intervention compared with usual care did not reduce a composite outcome of complications and 30-day mortality.

Using a Bayesian approach, it could be concluded with 97.3% certainty that use of the intervention compared with usual care reduces the composite outcome of complications and 30-day mortality; that with 78% certainty, this reduction is clinically significant; and that in 28% of patients where the intervention is used rather than usual care, complication or death may be avoided.


4 Comments on “Bayesian statistics and clinical trial conclusions: Why the OPTIMISE study should be considered positive”
  1. Thanks for sharing this, it’s an excellent example and motivator for people to take Bayesian analysis seriously. There’s still a lot of inertia keeping it from happening and pieces like this are great.

  2. I think this is a great article! I agree with Steven Vannoy’s comment that this is a good motivator for using Bayesian analysis.

    Here’s my question: what is the relationship between the value of 28%, which is the percentage of patients for whom mortality/complication may be avoided with treatment that you calculated, and the number needed to treat calculated the standard way from the paper’s reported results?

    The absolute difference in mortality between the two groups was 6.8%, giving a number needed to treat of 14.7.

    I don’t quite understand why these two measures of effect size (28% as you calculated and 6.8% from the trial) are so different. Aren’t they both measuring absolute reduction in risk for patients?

  3. It’s nice to see Bayesian analysis at work! I’m a little unclear about the conclusions, however. The posterior distribution on theta[1]-theta[2] has a 95% HDI that extends from about -0.14 to 0.0, and overlaps the ROPE. So, while the posterior shows explicitly the credible differences — which is one thing so nice about Bayesian analysis — we would not want to decide that the difference excludes zero. (And analogously for the ratio, theta[1]/theta[2].) Thus, the Bayesian discrete yes/no decision matches the frequentist decision, and notice that the 95% HDI on the difference is nearly the same as the 95% CI from the frequentist approach. The primary advantage here is the explicit posterior distribution on the difference (or ratio), which can be directly and intuitively interpreted. When using a realistic ROPE, the decision criterion becomes more severe than traditional NHST, that is, it’s more difficult to reject zero difference.

    (I’m unclear about the last bit, regarding y1==0 and y2==1. Don’t you also have to consider y1==1 and y2==0? Sorry for being slow on the uptake…)

    One other minor note: In the latest versions of the programs, the plotPost() function provides a little more detail regarding the percentage of the posterior below the ROPE, within the ROPE, and above the ROPE; see

  4. Steven, Alexander and John, thank you very much for your interest and comments.

    John, I think the ROPE is a very powerful addition to these sorts of analyses, and as a concept it will be familiar to anyone who has done frequentist equivalence or non-inferiority studies. As you say, demonstrating an effect beyond the ROPE is a higher bar to attain than in a conventional frequentist analysis. The ROPEs specified here are illustrative only – others may believe they should be larger or smaller. What I think is useful is that a value can be placed on the probability lying outwith the ROPE with regard to the decision: for instance, if the intervention is cheap and safe it may still be chosen even if there is a fair chance that the actual effect size lies in the region of equivalence. It is useful to be able to have the discussion in these terms, which is not facilitated by conventional frequentist analyses. As an aside, and not mentioned here, specifying an informative prior impacts on the posterior. The original analysis of these data included a meta-analysis incorporating the new finding, which showed significance. A socially acceptable prior?!

    Alexander, John’s last point answers your question and should have been included in the post. The simulation describes intervention == no, control == yes, but subtracting intervention == yes, control == no gives the absolute risk difference. That’s important!
