Key extracts edited by Alan Beat from the 98-page report:
 

http://www.defra.gov.uk/science/Publications/2003/UseofModelsinDiseaseControlPolicy.pdf

Review of the use of models in informing disease control policy
development and adjustment.

A report for DEFRA
by
Nick Taylor

Veterinary Epidemiology and Economics Research Unit (VEERU)
School of Agriculture, Policy and Development
The University of Reading
Earley Gate
P.O. Box 237
Reading, RG6 6AR
 
 
Executive summary
 
The FMD epidemic in the UK in 2001 was the first situation in which models were developed in
the 'heat' of an epidemic and used to guide control policy. The engagement of modelling with
the control of the FMD epidemic was not part of the pre-arranged contingency plan, but came
about in an ad hoc way.
 
A key tactical decision made with the strong support of models was the introduction of the
contiguous culling policy. Evidence from later analyses suggests that the contiguous culling policy
may not have been necessary to control the epidemic, contrary to what was suggested by the models
produced within the first month of the epidemic.
 
If this is indeed the case then it must be concluded that the models supporting this decision
were inherently invalid and/or used in an inappropriate way.

This conclusion was also implied by other reviewers of the models: Kao (2002) and Green and
Medley (2002).

It is suggested that incorrect assumptions used in building the models were responsible for the
recommendation that contiguous culling was necessary to stop the epidemic. If an epidemic is
modelled with parameters which describe disease spread as being predominantly over very
short distances, then such models will demonstrate a beneficial effect of local culling. In
addition, if the model has the majority of disease spread from IPs occurring before reporting
of disease, then such models will inevitably conclude that pre-emptive culling is essential
to control the epidemic.
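
The logic can be illustrated with a toy simulation, sketched below in Python. This is emphatically not any of the 2001 models: the grid of farms, the exponential distance kernel, the infection pressure and the reporting and culling delays are all invented for illustration. The kernels are normalised so that every infected farm exerts the same total daily infection pressure; only its spatial reach differs.

    # Toy spatial epidemic between farms -- NOT any of the 2001 models.
    # All parameter values are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(kernel_scale, contiguous_cull, n=30, pressure=0.5,
                 report_delay=5, cull_delay=1, days=120):
        xy = np.array([(i, j) for i in range(n) for j in range(n)], float)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        k = np.exp(-d / kernel_scale)              # distance kernel
        np.fill_diagonal(k, 0.0)
        # normalise: same total output per farm, only the reach differs
        k = pressure * k / k.sum(axis=1, keepdims=True)
        status = np.zeros(n * n, int)              # 0=susceptible 1=infected 2=culled
        t_inf = np.full(n * n, -1)
        seed = n * n // 2
        status[seed], t_inf[seed] = 1, 0
        for t in range(1, days):
            # cull IPs report_delay + cull_delay days after infection
            due = (status == 1) & (t - t_inf >= report_delay + cull_delay)
            for ip in np.where(due)[0]:
                status[ip] = 2
                if contiguous_cull:                # also cull the 8 neighbours
                    status[d[ip] <= 1.5] = 2
            inf = status == 1
            if not inf.any():
                break
            sus = np.where(status == 0)[0]
            # probability each susceptible farm is infected today
            p = 1 - np.prod(1 - k[inf][:, sus], axis=0)
            new = sus[rng.random(sus.size) < p]
            status[new], t_inf[new] = 1, t
        return int((t_inf >= 0).sum())

    for scale in (0.8, 3.0):                       # narrow vs wide kernel
        for cp in (False, True):
            print(f"kernel scale {scale}, contiguous cull {cp}: "
                  f"{simulate(scale, cp)} farms infected")

The exact totals vary with the random seed, but the mechanical contrast is the point: the narrow kernel concentrates most of each farm's onward infections in the ring of neighbours that the contiguous cull removes, so the cull appears decisive, while the wide kernel sends only a small fraction of infections there, so the same cull buys much less.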
 
The Imperial College and Cambridge/Edinburgh models were
parameterised in just such a way that favoured the use of contiguous pre-emptive culling.


However, the field data on which these parameters were based were deficient, and subsequent
analyses suggest that the model parameterisation in these crucial areas was incorrect -
i.e. short distance spread was not as predominant as modelled, and the infectivity of infected
farms was not maximal until after disease reporting.

Field data were not being adequately collected and analysed early in the epidemic - in
other words there was a lack of 'veterinary intelligence'. The best decisions are made on
the basis of good information. In 2001, the quality of information was compromised and model-based
analysis was used as a substitute for poor information. What was perhaps not taken into
account was that models themselves are equally dependent on good information for their
validity. In truth, models were simply the tool used to analyse the data, but the novelty of
this analytical tool to decision makers at the time, and the tendency of model outputs to
appear more certain than perhaps they are, meant that the distinction between data and
assumption was lost.


The conclusion of this report is that the use of predictive models to support tactical decisions
is not to be recommended. Tactical decision making should be based more on real veterinary
intelligence than on predictive modelling. However, models can still play a role in the
interpretation of veterinary intelligence.

1.4 Practical guidelines for the use of models

From the outset it must be understood that models are rarely universal, nor are they
reproductions of reality in miniature.

It is extremely important that any interpretation of model output is made with reference to the
assumptions and simplifications inherent in the model. A model which is highly sensitive to parameters on
which there is little reliable data is of limited use, perhaps even dangerous, in decision
making.
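
The point about sensitivity can be made concrete with a generic, minimal sketch, not taken from the report: sweep one poorly known parameter across its plausible range and watch the headline prediction move. Here the uncertain parameter is a reproduction number R0 fed into the standard final-size relation; across values that early field data might not distinguish, the predicted attack fraction swings from under 20% to over 90%.

    # Minimal one-at-a-time sensitivity sweep (illustrative, not from the report).
    import math

    def final_size(r0, tol=1e-10):
        """Attack fraction z solving the standard final-size relation
        z = 1 - exp(-r0 * z), found by fixed-point iteration."""
        z = 0.5
        for _ in range(1000):
            z_new = 1 - math.exp(-r0 * z)
            if abs(z_new - z) < tol:
                return z_new
            z = z_new
        return z

    for r0 in (1.1, 1.3, 1.5, 2.0, 3.0):   # plausible range for an uncertain parameter
        print(f"R0 = {r0}: predicted attack fraction = {final_size(r0):.2f}")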

Decision makers must not rely on the model to make a decision for them but be prepared to
use it as part of a process in which other factors, such as the 'riskiness' of a policy, are
weighed. This means that models cannot provide complete and unequivocal answers to a
decision making problem. Models should therefore be seen as tools for exploring some of the
issues involved, but the criteria on which the decision will be based will include other issues
not addressed by the model.

3.3 Basic considerations and modelling problems

It is important to realise that the accuracy of model predictions depends
on which components are included or excluded, the validity of any assumptions made about
them, and the accuracy with which the interactions between them are modelled. Modelling is indeed a
mixture of science and art, but because modelling is a quantitative discipline it can appear
entirely scientific and 'real'.


Dent and Blackie (1979) say "It will never be possible to prove a model 'true',
yet the use of the computer can lead the unwary to accord the results a greater
degree of precision than is justified. The model may contain undetected flaws, poor
data transformations and intentional and unintentional biases included by the
model-builder. All of these can seriously affect the validity of information provided
by the model."

5.3 Modelling Classical Swine Fever in The Netherlands

The Dutch have made quite extensive use of disease modelling and in particular
have demonstrated the flexibility of the InterSpread model as a basis for modelling other diseases.
However, regarding the use of such models to support tactical decisions during epidemics, the
view of those closely involved in this process is that:
 
"such complicated simulation models should not be used during an epidemic, but in fact between epidemics,
to be better prepared and study 'what-if' situations" (Mirjam Nielen, pers. comm.).
 
This means that models may be used to study a range of hypothetical situations, in order
to provide guidelines for contingency planning, but tactical decisions during epidemics
are better based on field data, which may rapidly indicate which modelled situation is actually being faced.

6. Use of models during the FMD epidemic in UK, 2001

The influence of mathematical modelling on disease control policy, in general, and tactical
decisions, in particular, during 2001 was both substantial and unprecedented.

6.2.2 The Imperial model

The model predicted a very large epidemic if the key control parameters (report-to-slaughter
interval and amount of pre-emptive culling) remained as they appeared to be on March 28.
The final size of the epidemic was estimated as 44% to 64% of the population at risk;
that prediction implied an epidemic involving from 20,000 to 29,000 infected premises.

The model used important assumptions about the infectivity of infected farms. Constant
infectiousness was assumed from three days after infection until slaughter (for an average of
eight infectious days). It is not clear how this infectious period coincides with the onset of
clinical disease on a farm, but the onset of infectivity was assumed to occur before reporting of disease.
 
The model used a parameter, rI, which was the ratio of infectiousness after disease
reporting to infectiousness before disease reporting. The researchers explored the possible
consequences of the assumption of constant infectiousness (rI = 1) by running their model
with rI = 5 (infectiousness after reporting is five times greater than before reporting).

With rI = 1, the model predicted that achieving the target of culling infected premises within
24 hours of report from March 31 would result in an epidemic in which 30% of the 45,000
farms at risk in Great Britain would be culled (i.e. 13,500).
 
If rI was set to 5, then the model predicted that culling infected premises within 24 hours
of report from March 31 would lead to rapid control, resulting in an epidemic in which
5% of the 45,000 farms at risk in Great Britain would be culled (i.e. 2,250).
 
In the final analysis the researchers chose to keep the assumption of constant infectiousness
and therefore concluded that control measures in addition to rapid culling on infected
premises were necessary.
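
A back-of-envelope reading of this assumption, sketched below, shows why it mattered so much. The onset of infectiousness on day 3 after infection, the eight-day average infectious period and the 24-hour report-to-slaughter target are taken from the description above; the report day is not stated in the extract, so a report on day 10 with culling on day 11 is assumed, consistent with eight infectious days. Under constant infectiousness (rI = 1), nearly 90% of a farm's transmission has already happened by the time it is reported, so even instant IP culling averts little and pre-emptive culling appears essential; with rI = 5 the pre-report share falls below 60% and rapid IP culling looks far more effective.

    # Share of a farm's transmission emitted before it is reported, under
    # the rI assumption. Onset day and the 24 h culling target come from
    # the text above; the report day (day 10) is assumed for illustration.
    def pre_report_share(r_i, onset=3, report=10, post_days=1):
        pre = (report - onset) * 1.0   # relative infectiousness 1 before report
        post = post_days * r_i         # r_i times greater after report
        return pre / (pre + post)

    for r_i in (1, 5):
        print(f"rI = {r_i}: {pre_report_share(r_i):.0%} of transmission "
              "occurs before the farm is reported")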

The same group of researchers carried out a retrospective analysis
based on data up to July 16, 2001. Despite having noted previously
that farm infectivity may increase over time, the analyses and further
modelling were still carried out with the stated assumption that "infectiousness does not vary
from the day after infection until the date on which the farm was culled" (Ferguson and
others, 2001).
 
It is noteworthy that, in addition to maintaining the assumption of constant
infectiousness, the assumed time of onset of infectivity was shifted from three days after
infection to only one day after infection, which would tend to increase the apparent necessity
of a pre-emptive cull of incubating farms.


The analysis allowed the effect of culling policies on the spread of disease to be assessed. The
researchers conclude that changes in culling policies and their implementation explain less
than 50% of observed variation in transmission rates, which in turn indicates that effective
movement restrictions and rigorously maintained biosecurity were equally vital in reducing
disease spread. This would seem to suggest that the role of the contiguous cull in controlling
the epidemic was less vital than suggested by the earlier model which led to its adoption.

This aspect was examined by re-running the original model under 'what-if' scenarios in which
different culling policies were applied. Charts illustrating the results suggest that IP culling alone,
with slaughter delays modelled according to the recorded data and
with no non-IP culling, would have resulted in reasonably well-controlled epidemics
. . . . the adjusted model predicts epidemics affecting about 10% of
farms in all of Great Britain and 30% of farms in Cumbria. These are much smaller epidemics
than previously predicted (critically, at the time when decisions to change policy were being
taken) for scenarios involving IP culling alone.

6.3 A closer look at the validity of the models and their use to inform decision making

One of the assumptions carried in the Imperial and Cambridge/Edinburgh models was that the
infectivity of an infectious farm was constant from the time of onset until the completion of
slaughter. Several experts in FMD epidemiology feel that this assumption is unrealistic. It is
felt that, contrary to what is asserted by Keeling and others (2001), a 'within farm epidemic'
does occur, and therefore farm infectivity will increase as the number of clinically affected
animals on a farm increases.
 
It is accepted from experimental studies that maximum virus shedding by
infected animals normally occurs at the same time as clinical lesions appear, 5 to 14 days after
infection (Alexandersen and others, 2002). Work on dairy farms in Saudi Arabia (Hutber and
Kitching, 1996; Hutber and Kitching, 2000) and on experimental infections (Hughes and
others, 2002) does indicate that within-farm prevalence increases over time, and so the
amount of virus being shed will also increase over time in the first few days of a clinical
infection on a farm. This would suggest that the infectivity of an infected farm would increase
over time.
 
It was also commonly experienced in the field during the 2001 epidemic that a single animal with
old lesions could be found in an infected herd, along with several animals with fresher lesions;
i.e. the single animal would have been infected first and been the source of infection for the
others. If the herd was not culled that day, the field veterinarians would then find several more
animals with lesions on the next day; that is, the development of a within-farm epidemic was
clearly visible.

 
It would seem logical that the infectious challenge presented by a farm would be
increasing as the number of animals with fresh lesions (i.e. shedding virus) increased,
especially since the time when vesicles are rupturing, i.e. around 2 to 3 days into the clinical
phase, is when the greatest amounts of virus are liberated into the environment (Alex
Donaldson, pers. comm.). Alexandersen and others (2003) report on contemporary
investigations of outbreaks early in the epidemic of 2001, in which estimates of airborne
excretion of virus from infected farms were made. These clearly indicate that virus excretion
increased over time from first infection to slaughter.
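
The shape of this argument can be made explicit with a one-paragraph sketch; the herd size and within-farm transmission rate below are invented for illustration. If the number of shedding animals grows roughly logistically, the farm's total virus output rises steeply over the first week or so, rather than sitting at a constant level from the day after infection as the models assumed.

    # Within-farm epidemic sketch: logistic growth of shedding animals.
    # Herd size and growth rate are assumed values, for illustration only.
    herd = 200        # animals on the farm (assumed)
    growth = 0.9      # per-day within-farm transmission rate (assumed)

    infected = 1.0
    for day in range(1, 11):
        infected += growth * infected * (1 - infected / herd)
        # farm-level infectivity taken as proportional to shedding animals
        print(f"day {day:2d}: ~{infected:3.0f} animals shedding")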


The Imperial and Cambridge/Edinburgh models also assumed that the infectivity of an infectious
farm began very soon after initial infection. This was also a point of contention between the
modellers and the veterinary experts in FMD epidemiology. Analyses of sensitivity to this
parameter are not mentioned in the published papers so far reviewed. It could be expected that
if any disease is modelled where infection is transmitted before clinical signs appear, and
therefore before any IP culling can take place, then the model would suggest that prevention
of disease spread and control of the epidemic would be impossible without pre-emptive
culling. Conversely, if the model allows only limited disease transmission to occur before
clinical signs appear (i.e. infectivity may begin just before clinical signs and build up
gradually to high levels), then it would be expected that the model would show control of the
epidemic to be possible by rapid IP culling alone.

The first-hand experience of veterinarians on the ground was that infection was not rapidly
spreading off IPs to contiguous premises. Many of them disagreed with the CP culling policy
as it was implemented; namely, the culling of stock on all premises sharing a common boundary
with an IP, regardless of the nature of the boundary or the distance between livestock, and
sometimes many days after the original IP had been culled (see submissions to the various
inquiries, e.g. Wardrope, 2002).

All the mathematical models described required infection dates for IPs, which had to be
estimated according to assumed incubation periods. More critically, all required contact-tracing
data to quantify a spatial transmission kernel.
A definite source of infection was established for relatively few of the IPs in 2001.
According to Gibbens and Wilesmith (2002), out of a total of 2,026 IPs, a definite source of
infection was identified for only 101 IPs (5%), and early in the epidemic the number of
sources identified would have been lower.
 
In the absence of a definite source, it was common
practice to attribute the source of infection to the nearest possible candidate IP. Therefore, as
the modellers themselves commented (Ferguson and others, 2001b; Keeling and others,
2001), the tracing data would be biased towards short-distance transmission. Indeed,
Ferguson and others (2001b) found that estimating the spatial transmission kernel by
retrospectively fitting a model to the epidemic data produced a wider kernel than that derived
from the tracing data provided by MAFF. The significance of this is that a model with an
unrealistically narrow kernel (i.e., where most disease transmission is over short distances)
would tend to overestimate the efficiency of a local pre-emptive culling policy (e.g. CP
culling).
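
The direction of this bias is easy to demonstrate with invented geometry; the sketch below uses a uniform scatter of candidate IPs and an exponential true transmission distance, and none of it is the 2001 tracing data. Whatever the true source, blaming the nearest candidate IP compresses the recorded distances towards the typical nearest-neighbour spacing of the IPs themselves, so any kernel fitted to those records will be correspondingly too narrow.

    # Toy demonstration of nearest-candidate source attribution bias.
    # Geometry and kernel are invented; only the direction of the bias matters.
    import numpy as np

    rng = np.random.default_rng(7)
    n_ip, n_new = 300, 1000
    ips = rng.uniform(0, 50, size=(n_ip, 2))     # candidate source IPs (km)

    true_d, attributed_d = [], []
    for _ in range(n_new):
        src = ips[rng.integers(n_ip)]            # true source, picked at random
        dist = rng.exponential(5.0)              # true transmission distance (km)
        angle = rng.uniform(0, 2 * np.pi)
        case = src + dist * np.array([np.cos(angle), np.sin(angle)])
        true_d.append(dist)
        # tracing rule: attribute the case to the nearest candidate IP
        attributed_d.append(np.linalg.norm(ips - case, axis=1).min())

    print(f"mean true transmission distance:     {np.mean(true_d):.1f} km")
    print(f"mean nearest-IP attributed distance: {np.mean(attributed_d):.1f} km")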
 
All modelling groups claim that their models were able to reproduce the course of the 2001
epidemic with reasonable accuracy. However, the level of proof of validity this provides is
compromised by the fact that some of the models were parameterised using statistical
methods designed to provide a fit to the real data (Ferguson and others, 2001b; Keeling and
others, 2001).

The InterSpread model suggested that IP culling alone would fail to control the epidemic only
when slaughter was delayed to 48 hours after reporting.

The first Imperial model and the Cambridge/Edinburgh model both predicted huge epidemics,
of the order of 20,000 infected premises, if no non-IP (pre-emptive) culling was
carried out. However, when the Imperial model was re-run using a time-varying transmission
parameter, which had been fitted using the actual epidemic data, much smaller epidemics were
predicted in IP-culling-only scenarios. These later predictions are taken to suggest that CP
culling was less critical to controlling the epidemic than had been concluded from the earlier
modelling exercise (Ferguson and others, 2001b). This would suggest that the early Imperial
model and the Cambridge/Edinburgh model differ from real life in that they both
overestimated the necessity of CP culling for control of the epidemic.

There is evidence from the epidemic itself that the disease could be controlled without high
levels of pre-emptive culling. Figure 4, showing the epidemic and culling curves for Cumbria
alone, indicates that the epidemic in Cumbria had peaked before culling intensity, and in
particular CP culling, increased. Indeed, during the period up to mid-May,
in which the main part of the epidemic raged across northern Cumbria, the total
number of non-IP premises depopulated was hardly greater than the number of IPs,
and yet the epidemic peaked and waned over a very similar time course to the epidemic in the
rest of the country, where culling of DCs vastly outnumbered IPs. The conclusion
resulting from the modelling, that rapid and complete IP/CP/DC culling was necessary for
disease control and eventual elimination, appears contrary to the experience in Cumbria.

The models suggested runaway epidemics in the absence of high levels of DC/CP culling and
this did not happen in Cumbria. A possible reason for this divergence would be that the models used
assumptions about infectivity and estimates of the spatial transmission kernel that would
favour rapid and uncontrollable spread of disease if pre-emptive culling was not carried out.
In other words, the models . . . . misrepresented the effect of the pre-emptive culling.

With the benefit of hindsight, it seems that the predictions of the Imperial model,
at the time it was used to support the development of the 24/48 hour culling policy, were
pessimistic. This is apparent from the revised model outputs produced in the later work
(Ferguson and others, 2001b). This means that the model did differ significantly from the real
system, which must negate the value of its predictions.
 
Models can usefully be employed to support the requisition of resources needed by well-tried
control measures, by graphically demonstrating the possible development of an epidemic -
perhaps in the relatively short term - but not to drive novel, untested policies that are
unsupported by expert opinion, and which may raise serious ethical issues as well as personal
consequences.
 
Analyses of the epidemic in Cumbria, based on field data, provide evidence that the opinion
of Dr. Kitching, that the contiguous cull as enforced was unjustified, was correct, and that the
cull was not of major importance in controlling the epidemic, in Cumbria at least (Honhold
and others, 2003; Taylor and others, 2003).
 
ENDS
 
There are several further points to make.

First - the report comprehensively vindicates the critiques that I originally published back in November 2001, within a few days of first being sent the descriptive scientific papers for review.

Second - these critiques were widely disseminated by means of my own e-mail circle; the warmwell and other websites; part publication in the Western Morning News; and direct e-mails to numerous politicians and scientists, including the authors of the original papers, prominent Pirbright and DEFRA personnel, and members of the EFRA Select Committee. They were also formally submitted as evidence to the Lessons Learned, Royal Society, Edinburgh and European Union Inquiries.

Third - to the best of my knowledge, there has been no recorded reference whatsoever to these critiques in any subsequent publication, minutes, report or review.  It is as if they do not exist.

Fourth - the central point - if I, as an informed layman, with no background whatever in epidemiology, was able to quickly grasp and accurately describe the inherent flaws in the computer models during 2001 - why didn't David King and so many others close to the decision-making process? And why have my critiques, once made, been so consistently ignored by politicians, scientists and inquiries alike? This is not an issue of my own ego; it is precisely the opposite. It is the simple fact that any ordinary person who cared to look could recognise the falsity of the modelling case for pre-emptive mass slaughter, while those in authority were not only blind to the obvious, but subsequently refused to recognise that it had been repeatedly brought to their attention.

It is this arrogance and denial, in the face of plain common sense, that defines the entire FMD crisis.

Another thought on the modelling:

What is the value of peer review, regarded as the hallmark of scientific validity, when, in the case of the papers published describing these computer models, the very obvious flaws were not recognised - or, if they were, permitted to pass without comment? This failure of the peer-review process has provided a false veneer of authenticity to the models, and empowered them to deny all challenge in parliamentary and scientific debate through to the present day.

The best scientific advice?

Alan Beat