The Veterinary Journal 167 (2004) 127–128
www.elsevier.com/locate/tvjl

Personal view

Predictive models and FMD: the emperor’s new clothes?

R.P. Kitching
National Centre for Foreign Animal Disease, 1015 Arlington Street, Winnipeg, Manitoba, Canada R3E 3M4

Accepted 23 October 2003
Science is as much a victim of fashion as any other human endeavour. Research funding is frequently directed towards the most recent high-profile threat to public health, safety or way of life, which in the field of biology is usually the appearance of a new pathogen or an old pathogen in a new guise. The constant stream of such threats over the last 20 years, including Salmonella, Listeria, Escherichia coli O157, bovine spongiform encephalopathy (BSE) and foot-and-mouth disease (FMD), has developed its following of scientists eager for grants. Having prior knowledge of these organisms is often less of an advantage than having a high scientific profile and useful political connections. But to achieve a public profile and have exposure in the press, it is necessary to have something interesting to say that is understandable and with which the reader can in some way identify.
Certain discoveries, such as the organization of the DNA molecule or Dolly the sheep, have instant appeal, but these can be few and far between. In the absence of real scientific discovery, some may feel it necessary to make something ordinary look more interesting, and if that requires a little exaggeration, or not making clear that the results being reported are dependent on the reader understanding a series of sometimes hidden assumptions, this is no different to what the public is exposed to daily by the advertising industry. But of course scientific integrity must be different from commercial integrity, particularly when the funding for the research comes from the public purse.
During the 2001 FMD epidemic in the UK, and following the appearance of Professor Roy Anderson on the BBC Newsnight programme, public perception was that the outbreak was ‘‘not under control’’ (Anderson, 2002). Government reaction was to take the control policy away from the Ministry of Agriculture, Fisheries and Food (MAFF; subsequently replaced by DEFRA) and place it in the hands of the Science Group, under the chairmanship of the Government Chief Scientist, which reported directly to the Cabinet Office Briefing Room (COBR) set up during the crisis. The Science Group had no formal organization, but was dominated by four teams of modellers, whose predictions for the future of the outbreak usually took up the majority of each meeting. There is uncertainty about whether their advice included the three-kilometre cull, but the ‘‘24/48 hr’’ policy, whereby all FMD-susceptible animals on premises designated as infected were to be slaughtered within 24 hr, and all susceptible animals on premises contiguous to the infected premises were to be slaughtered within 48 hr, was a direct result of their predictions, and the resulting mass slaughter is now history.
Taylor (2003) has provided a balanced critique of the deficiencies of the various models used in 2001. Those who explain the disagreements between the modellers and the veterinarians responsible for carrying out the policy as being due to different cultures (Anderson, 2002) ignore the obvious fact that if the input data into the models are wrong, then the output will also be wrong. In 2001, much of the input was based on assumptions derived from previous outbreaks of FMD, but the 2001 outbreak was very different in many ways from the last major FMD outbreak in the UK in 1967/68. However, when these differences were pointed out by those of us with experience of FMD, our views were largely ignored.
So how could the control policy for a major disease outbreak be based on models which had never been validated? If the predictions for the number of new variant Creutzfeldt–Jakob disease (vCJD) cases in the UK made in the late 1990s had not been sufficient to undermine the credibility of the predictive modellers, surely the FMD experience should have made the modellers appreciate the limitations of their science and accept at least some responsibility for the misery and expense that their models initiated. Predictive modelling has become fashionable but, often without much evidence that it serves any useful purpose, is the science based too much on reputation?
Of course models have a place in all science and, particularly in medical and veterinary science, they can be extremely useful in developing scenarios, identifying bottlenecks in biological processes, directing research towards answering specific questions, addressing resource issues and in a multitude of additional applications. But all models require relevant and accurate input data and cannot be expected to be oracles. Such data are frequently not available at the start of a disease outbreak, but it is then usual, as shown in the paper by Dr. Ioan Ap Dewi and his colleagues published in this issue of The Veterinary Journal, to work from a series of assumptions, which can be replaced as hard data become available.
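The dependence of such a model on its starting assumptions can be illustrated with a minimal sketch. This is not the Ap Dewi et al. spreadsheet model, nor any of the 2001 models; it is a generic deterministic SIR-type simulation at the farm level, in which the transmission parameter, removal (culling) rate and seed size are all hypothetical placeholders of exactly the kind that, early in an outbreak, stand in for missing field data:

```python
# Illustrative sketch only: a discrete-time deterministic SIR-type model of an
# epidemic among farms. Every parameter below is an assumption, not data.

def simulate(n_farms=10000, beta=0.3, removal_rate=0.1,
             seed_infected=5, days=100):
    """Return the number of infected farms on each of `days` simulated days."""
    s = n_farms - seed_infected   # susceptible farms
    i = seed_infected             # infected farms
    r = 0                         # removed (culled) farms
    history = []
    for _ in range(days):
        new_infections = beta * s * i / n_farms   # assumed transmission rate
        removals = removal_rate * i               # assumed culling/removal rate
        s -= new_infections
        i += new_infections - removals
        r += removals
        history.append(i)
    return history

curve = simulate()
peak = max(curve)
print(f"peak infected farms: {peak:.0f} on day {curve.index(peak) + 1}")
```

Halving the assumed `beta`, or doubling `removal_rate`, changes the predicted peak dramatically, which is the point of the paragraph above: the output is only as good as the assumptions fed in, and those assumptions must be replaced with real data as the outbreak unfolds.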
If the model described by Ap Dewi et al. (2004) had been available during the 2001 FMD outbreak, could it have been used constructively to assist in decisions related to the control programme and therefore, by inference, would it be helpful in future outbreaks? Models, like any diagnostic test, require validation to show that they work before they can be used in decision support. The Ap Dewi model does indicate that the outbreaks in Cumbria and Anglesey were different in that the Cumbria outbreak was well established before it was clinically evident, whereas the Anglesey outbreak was identified early in its course. The effectiveness of the control measures in Anglesey was high compared with those in Cumbria. This is useful information and could have influenced those responsible to provide additional resources to Cumbria. But is this sufficient to validate the model, as not everything that was predicted from the initial assumptions transpired? How does the decision maker know which parts of the model are giving the correct prediction?
It would seem almost impossible to validate fully a predictive model for an FMD outbreak, as no two outbreaks are ever the same. The causative agent and the susceptible population are constantly evolving; the way the two interact changes due to different farming practices and population dynamics, and there is no possibility of anticipating chance encounters with any certainty. The Ap Dewi paper acknowledges some of the limitations and moves us forward in the debate.
In the right circumstances, and in the hands of someone who knows its limitations and understands the assumptions that have been made, a predictive model does have a place in an FMD outbreak control programme. Undoubtedly predictive models are here to stay, but with no veterinary knowledge or input to avoid the pitfalls that were so apparent in 2001, models will only serve to provide weight and justification for indefensible decisions.
References

Anderson, I., 2002. Foot and Mouth Disease 2001: Lessons to be Learned Inquiry Report. The Stationery Office, London, 187 pp.

Ap Dewi, I., Molina-Flores, B., Edwards-Jones, G., 2004. A generic spreadsheet model of a disease epidemic with application to the first 100 days of the 2001 outbreak of foot-and-mouth in the UK. The Veterinary Journal, doi:10.1016/S1090-0233(03)00149-7.

Taylor, N., 2003. Review of the use of models in informing disease control policy development and adjustment. A report for DEFRA, 94 pp. See DEFRA website: http://www.defra.gov.uk/science/publications/2003/useofmodelsindiseasecontrolpolicy.pdf