In the early 1780s, a pseudo-scientific fad gripped the Paris salons: Austrian doctor Franz Anton Mesmer had brought his theory of animal magnetism, or mesmerism as it would come to be known, to the French capital. Mesmer’s notion rested on the assumed existence of an invisible but powerful ‘universal fluid’, which if harnessed correctly could be ‘capable of curing immediately diseases of the nerves’. Standing in a wooden tub, holding an iron wand to the afflicted area, patients would swoon, convulse and shriek. Coming round, Mesmer’s disciples would find their nervous disorders miraculously cured.
While the rich and fashionable – including fellow Austrian-born queen, Marie Antoinette – flocked to Mesmer’s clinic, the medical establishment were up in arms. How much of this theory, they cried, was based on scientific truth? To appease both his queen and the scientific community, King Louis XVI appointed a study commission comprising leading French astronomers, biologists and physicists of the day – as well as Benjamin Franklin, the then US ambassador to France and famed savant of magnetism and electricity.
Observing Mesmer’s patients, the commission speculated that imagination might account for the secret of mesmerism’s success. To put their theory to the test, the commission carried out a series of what, in today’s terms, might be called sham procedures. When Mesmer’s patients were invited to engage with ‘magnetised’ objects – many of which hadn’t been magnetised at all – swoons, convulsions and shrieks ensued. The first placebo-controlled blind trial had been inaugurated.
The blind leading the blind
Today, the blinded trial is often considered the ‘gold standard’ in clinical research, as Professor Mike Clarke, director of the Northern Ireland Clinical Trials Unit at Queen’s University Belfast, explains. Why? “Because of minimising bias”, he adds. “Because [blinding] is saying that you do not know what it is you’re getting, and therefore your outcomes cannot be influenced by your knowledge or someone else’s knowledge”.
To put it simply, blinding refers to a process whereby somebody – be it the patient, clinician, outcome assessor, statistician or the research team, or a combination of these – is kept in the dark about one or more details of the trial. “The classic example”, as Matthew Sydes, professor of clinical trials and methodology at UCL’s MRC Clinical Trials Unit, explains, is “a placebo”, which he defines as “a tool for protecting treatment allocation”. In the Mesmer case, the non-magnetised objects comprised the placebo, the trial participants remaining ‘blind’ to which items had received magnetic charge. In today’s trials, the placebo tends to take the form of a dummy drug, which, as Sydes explains, “looks and tastes and feels and weighs and is packaged exactly the same [as the active drug]”.
In a placebo-controlled trial, participants will usually be split into two (or more) groups, one taking the active drug, the other taking the dummy. While certain members of the trial – say, the doctors and the patients – remain ‘blind’ as to which group has been allocated which treatment, there will still be those – the trial managers and the statisticians, for example – who remain in the know from first to last. For each trial, details will vary – dummy drugs may be substituted for ‘sham procedures’; doctors may be aware of the facts, while assessors are blinded – but the principle remains the same. “The trial is protected,” Sydes notes, “from somebody important knowing something important.”
The pros and cons
The advantages to blinding are not difficult to apprehend. By stripping out bias, the blinded trial permits researchers to “remove all the noise”, as Clarke neatly puts it, keeping the trial as objective as possible. Yet both Clarke and Sydes raise legitimate concerns around the unquestioning treatment of blinding as the gold standard in clinical research. For one thing, the blinded trial does not offer an accurate reflection of medical interventions in everyday life. As Clarke points out, in “routine practice, the medicine will not be rolled out in a placebo-controlled trial, and influencing factors will occur”.
Then there’s the question of cost. Because, as Sydes explains, not only does it cost “an absolute fortune” to put together a placebo, there’s also a carbon cost. “We’re making tablets that don’t do anything, and which we may not need,” he adds. “We’re putting them, maybe, in non-recyclable packaging; we’re transporting them; we’re taking up pharmacy time; we’re storing them cold. All of the ways in which carbon may be spent, we may well be doing for a placebo when maybe we didn’t need to do it at all.”
There are also further concerns around safety and ethics. In most cases, the taking of a dummy pill will not cause harm to the participant. But what about when it comes to sham procedures, like false injections or operations? As Sydes notes, asking “somebody [to] sit down for 15 minutes or an hour [to] inject saline solution into them”, or “putting somebody out for an operation, waking them up, bandaging them, and making them think they’ve [undergone surgery]” can be distressing for the patient, not to mention complicated, costly and time consuming for hospitals and staff.
“[Blinding] is saying that you do not know what it is you’re getting, and therefore your outcomes cannot be influenced by your knowledge or someone else’s knowledge.”
Professor Mike Clarke
Of course, the vast majority of sham procedures won’t come with physical risks attached, and the question of ethics, as Clarke notes, is usually “dealt with by fully informed consent”. In other words, patients are made aware before trial enrolment of which element of the trial will be blinded. Yet, while it remains ethically essential, foreknowledge of a blinded trial can have an adverse effect on recruitment. As Clarke explains, “There are a number of studies that [have] compared open control trials with placebo-controlled trials for recruitment, and placebo trials recruit worse than open control, potentially because of this sense that I’ll have to do something, or you’ll inject me, or I’ll take some pills, and there’s a 50/50 chance that the [treatment will be ineffectual].”
Is it all a sham?
All of this, however, is not to say that we don’t need to blind some clinical trials. Rather, as Sydes says, it indicates that we should be “thinking carefully about whether or not [we] really need to do this”. He adds: “Is it worth the time and effort? Is it worth the carbon cost?” Because sometimes, Sydes explains, instead of going to the organisational, financial, environmental and ethical trouble of producing dummy pills and masking information from the various agents involved in the trial, “just changing your outcome measures to something more objective could have been done quite simply”.
“[Blind trials are] making tablets that don’t do anything, and which we may not need. We’re putting them, maybe, in non-recyclable packaging; we’re transporting them; we’re taking up pharmacy time; we’re storing them cold. All of the ways in which carbon may be spent, we may well be doing for a placebo when maybe we didn’t need to do it at all.”
Matthew Sydes
Sydes’ invocation of outcome measures is a reminder not only that trials can be designed without the need for any blinding at all, but also that the placebo is not the only means of blinding. In some cases, the outcome assessors might be blinded instead – at a fraction of the cost of mass-producing dummy drugs or undertaking sham procedures, and without the ethical and safety concerns that can accompany those methods.
While the blinding of outcome assessors comes with its own set of logistical challenges (the data itself often gives the game away), Clarke is hopeful that we will “see more blinded outcome assessment […] and less blinding where it’s practically challenging”. He adds: “If we are trying to make trials more efficient and effective, then one of the cost-cutting exercises could be to avoid that complication.”
In one recent study, however, researchers at the University of Nottingham found that “in most cases, the insight that the statistician offers was deemed more important to delivery of a trial than the risk of bias they may introduce if unblinded”. Sydes echoes this conclusion, noting that there is a question over who the statistician doing those analyses should be. “Some people would say you need a different statistician designing the study to the one that does the interim analyses, but I worry that you do end up with the blind leading the blind if you’ve got […] an independent committee who aren’t invested in the trial,” he adds. “That seems like a recipe for disaster to me – I feel that somebody who knows the trial well should be involved in that process.”
Part of the appeal of the blinded trial is its flexibility and variability; but these are also the very things that make it problematic. Without standards and consistency, blinding can often end up being more trouble than it’s worth. For every advantage, there seems to be an obstacle or challenge that negates the benefit. In the end, both Clarke and Sydes agree that the question isn’t so much whether we should be blinding clinical trials, as when.
In future, Clarke hopes that designers and managers will harness blinding more thoughtfully, “using it where it is really needed and not just following the herd”. Similarly, Sydes is hopeful that “we will see less placebo-based trials in the future, that there will be other ways to protect treatment allocation, and that we will rationalise our way down from using [blinding] over-routinely, to using it in special instances”. Because, as the experts in the case of Franz Anton Mesmer were only too aware, no scientific theory deserves blind faith.