Some major changes in DNA technology and analysis have recently come on to the forensic scene – but with little fanfare to accompany those changes, busy practitioners could be forgiven for not having noticed them. This article will set out those changes and go on to explore the challenges they can present to those who have to deal with DNA issues in criminal cases.
The first, and the one which needs the closest scrutiny, is the adoption by DNA service providers (such as Cellmark and LGC) of computerised interpretation software to provide statistical evidence regarding contributors to complex DNA mixtures. The second is the general replacement of SGM Plus, the testing kit widely used for analysing DNA samples in this jurisdiction, with a new and more discriminating kit known as DNA17.
The basics
Over the last 30 years, DNA evidence has come to be regarded as the ‘gold standard’ in forensic science. With a clear DNA sample from a single contributor, the crime sample can be compared with the ‘suspect’ profile. If the comparison shows a complete match, a simple calculation can produce an impressively high random match probability (RMP), usually expressed as the probability that someone unrelated to the suspect would share the matching profile, typically in the order of 1 in 1 billion.
Even a partial match can still produce a powerful statistic: the greater the number of matching peaks (or ‘alleles’), the more discriminating the resulting RMP. Such a comparison can also quickly rule out a suspect if non-matching alleles appear at any point on the profile. No one argues with this type of DNA analysis, nor with simple mixtures from clear DNA samples, which are now amenable to non-controversial statistical analysis.
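By way of illustration only, the arithmetic behind that headline figure can be sketched in a few lines of code. The per-locus genotype frequencies below are invented, and the population-genetics corrections a reporting scientist would actually apply are omitted; the point is simply that rarity multiplies up across the loci tested:

    # A toy sketch of the ‘product rule’ behind a random match probability (RMP).
    # The per-locus genotype frequencies are invented for illustration.
    genotype_frequencies = [0.08, 0.05, 0.12, 0.10, 0.06, 0.09, 0.11, 0.07, 0.05, 0.10]

    rmp = 1.0
    for freq in genotype_frequencies:
        rmp *= freq  # rarity multiplies across independently inherited loci

    print(f"Random match probability: about 1 in {1 / rmp:,.0f}")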
Crime scenes often yield only tiny amounts of DNA material for subsequent analysis. These low-level and often incomplete profiles have raised some important evidential issues, but the law is now relatively well settled. Below a certain level (the ‘stochastic threshold’), the admissibility of the profile can be challenged with expert evidence [1]. Above that level, no such challenge can be made.
Low-level complex mixtures
DNA mixtures introduce a whole new level of complexity, and a recent ‘hot topic’ has been how to interpret complex mixtures from low-level, incomplete samples [2]. Here, the conventional (and transparent) methods of analysis break down. Reporting analysts have been unable to provide any statistical basis for the possible inclusion of a match to a suspect’s profile within such mixtures.
In a (controversial) decision [3], the Court of Appeal has permitted the limited use of subjective, non-statistically based opinions – where based on their ‘experience’, analysts will suggest that due to the number of matching alleles from the suspect’s profile contained within the mixture, there is ‘some’ or ‘moderate’ support for the suspect being a contributor. This represents, some may think, a radical departure from the previous belief that DNA results had to be accompanied by a statistical weight, but as we will see later, this may prove to be no more than a temporary stopgap – as computerisation takes hold.
Practitioners may also have noticed that a new type of conclusion is appearing in DNA reports. It may be claimed that a mixed DNA sample recovered from a crime scene provides statistically based evidence against a suspect. If the report contains words and phrases such as ‘low level’, ‘incomplete’ or ‘complex mixture’, alarm bells should start to sound.
So too if, rather than giving a traditional RMP figure, the report sets up competing hypotheses (the prosecution hypothesis vs the defence hypothesis) and goes on to suggest that the findings are ‘x times more likely’ under the former than under the latter. These are Likelihood Ratios (LRs), not RMPs, and require different analysis and understanding.
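In outline, and again with purely invented figures, a likelihood ratio asks how probable the observed results are under each side’s hypothesis, rather than how rare a matching profile is. A minimal sketch:

    # Illustrative only: the probabilities below are invented.
    # Hp (prosecution): the suspect and one unknown person contributed to the mixture.
    # Hd (defence): two unknown people, unrelated to the suspect, contributed.
    p_results_given_hp = 0.004
    p_results_given_hd = 0.000002

    lr = p_results_given_hp / p_results_given_hd
    print(f"The results are {lr:,.0f} times more likely if Hp is true than if Hd is true")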
So if you come across any of the above, it is likely that you are now dealing with a wholly different set of challenges, arising from the use of a computerised model for interpretation.
Computer modelling
Over the last few years, evidence based on computerised analysis has been both admitted and rejected by the courts on a fairly ad hoc basis. One system, LikeLTD, pioneered by Professor David Balding at UCL, was rejected on one occasion but has subsequently been allowed in, often without challenge. Another, True Allele, a US-developed software, has on at least one occasion been successfully challenged here [4], but has been accepted in Northern Ireland (and in certain states within the US).
Other models have also been developed, such as STRmix and DNA Resolve. At the time of writing (December 2014), none of these computerised systems have yet been considered by our appellate courts [5].
What’s the problem?
Each of these computer models is enormously complex and, like all models, seeks to capture as faithfully as it can the biological and statistical processes that underlie the analysis of low-level DNA mixtures. The models take widely varying approaches to how important phenomena (such as drop-out, drop-in and peak height imbalance) should be represented.
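To give a flavour of what ‘modelling’ means here, the sketch below treats the chance of an allele dropping out as falling away smoothly as the amount of template DNA rises. It is a toy example with invented coefficients, not the formula used by any particular product:

    import math

    # Toy sketch: drop-out probability as a logistic function of the (log) amount
    # of template DNA. The intercept and slope are invented for illustration.
    def dropout_probability(template_picograms, intercept=2.0, slope=-1.5):
        x = intercept + slope * math.log10(template_picograms)
        return 1 / (1 + math.exp(-x))

    for pg in (10, 50, 200, 1000):
        print(f"{pg:>5} pg of template DNA -> drop-out probability about {dropout_probability(pg):.2f}")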
To understand (and therefore critique) these models, you need the skills of an advanced statistician, a computer scientist and a molecular biologist. Little wonder therefore that there have been few challenges to such evidence when it has come before our courts.
The real problem for those who have to advise on statistics generated by such programmes is being confident that they are producing reliable and robust evidence. Peer review and validation testing are certainly carried out, but these mainly show that the programmes behave as they are expected to. Sadly, there is no ‘gold standard’: there is no fixed or definitive answer to what the correct LR in a particular case should be. There is no ‘ground truth’ [6].
Additionally, there are limitations with some of the software. DNA Resolve can apparently only accommodate a hypothesis involving a maximum of two unknown contributors to a multi-person mixture, and many crime samples stretch these programmes to the limit [7]. In validation testing using samples from known contributors, there have on occasion been ‘false positives’ – that is, some statistical support for the inclusion in a mixture of someone who could not have contributed to it. So-called ‘continuous’ models (such as True Allele) may give different LR figures from the same sample if run more than once, because they rely on random (Monte Carlo) sampling to model probability rather than on an exact calculation.
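A toy demonstration of why this happens, which has nothing to do with any vendor’s actual algorithm: an answer arrived at by random sampling comes out slightly differently each time the same question is asked.

    import random

    # Toy sketch: estimating the same quantity twice by random (Monte Carlo) sampling.
    def estimate(seed, trials=10_000):
        rng = random.Random(seed)
        hits = sum(1 for _ in range(trials) if rng.random() < 0.37)
        return hits / trials

    print(estimate(seed=1))  # one run of the ‘analysis’
    print(estimate(seed=2))  # the same question, a slightly different figure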
A further problem lies in finding the right person to question about the robustness of the particular software being used. Until recently, the scientists who developed the programmes have been made available when challenges were raised; that is likely to change now that providers such as Cellmark and LGC have trained some of their own analysts to input the relevant data and report the statistics that the software generates.
Scrutiny of expert evidence is today very much at the forefront of the criminal justice system [8]. New criminal procedure rules, drawing on recent court cases, came into effect last October [9]. These formalise an approach similar to the US Daubert and Frye admissibility hearings. Before expert evidence can be admitted, the court ‘must be satisfied that there is a sufficiently reliable scientific basis for [it] to be admitted’.
The courts are ‘encouraged actively to enquire into such factors’. In considering reliability, the courts should be ‘astute to identify possible flaws in such opinion which detract from its reliability’, which would include whether it is ‘based on a hypothesis which has not been subjected to sufficient scrutiny’.
It is important to note that even where computerised DNA evidence is admitted, it is still subject to certain key caveats. First, the ‘garbage in, garbage out’ principle applies here as in any computer case. If incorrect data is fed in, then the resulting statistic will similarly be incorrect. How to ‘call’ a particular profile often involves a subjective input from the analyst, which may itself be the subject of challenge.
Second, challenges can sometimes be made to the hypotheses put up by the prosecution; changing these can radically change the statistics. Is, for example, the suggested number of unknown contributors a robust assertion, or may there be more? (A crude numerical illustration follows these points.)
Third, subjective (non-statistical) opinions can be challenged by a close examination of the expert’s claimed experience. An expert may indeed have looked at large numbers of mixed profiles over their working life, but have those numerous cases been sufficiently audited or independently peer-reviewed? So-called observer bias is also a recognised factor here [10].
Lastly, in the case of low-level DNA, the presence of a match to a suspect’s profile tells you nothing about how or when it got into the mixture. Innocent transfer (direct and indirect) and contamination remain key areas of scrutiny.
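Returning to the second of those points, a crude numerical illustration (the figures are invented) of how admitting one more unknown contributor into the defence hypothesis can make the mixture far easier to explain without the suspect, and how sharply the reported figure can fall as a result:

    # Toy figures only: the same results assessed against two versions of the defence hypothesis.
    p_results_given_hp = 0.004                     # prosecution: suspect plus one unknown
    p_results_given_hd_two_unknowns = 0.000002     # defence: two unknown contributors
    p_results_given_hd_three_unknowns = 0.00004    # defence: three unknowns, so the mixture is easier to explain

    print(f"LR against two unknowns:   {p_results_given_hp / p_results_given_hd_two_unknowns:,.0f}")
    print(f"LR against three unknowns: {p_results_given_hp / p_results_given_hd_three_unknowns:,.0f}")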
DNA17
To obtain a DNA profile, the DNA material has to go through a polymerase chain reaction process, which culminates in the production of peaks on a graph (an electropherogram) corresponding to DNA markers at certain locations (loci) along the DNA molecule. Until last year, the testing kit in general use (SGM Plus) tested for DNA at 10 such loci.
DNA17 replaced it last July, and it will now be used for profiles entered onto and searched against the National DNA Database. The new kit tests for six additional loci. In practical terms, this means there is now more to look for in a potential match – and the statistics generated from a ‘matching’ profile will be that much more probative (that is, powerful) than before. Additionally, DNA17 is said to be considerably more sensitive than its predecessor, so it will be able to generate profiles from smaller quantities of DNA than before.
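On the same toy product-rule arithmetic used earlier (an invented average genotype frequency, no corrections applied), the effect of testing six more loci is easy to see:

    # Toy arithmetic: assume an invented average genotype frequency of 0.1 per locus.
    freq_per_locus = 0.1
    print(f"10 loci: about 1 in {1 / freq_per_locus ** 10:,.0f}")
    print(f"16 loci: about 1 in {1 / freq_per_locus ** 16:,.0f}")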
Conclusion
Things are changing fast in the world of forensic DNA testing. Samples previously thought too complex to report upon are now being used to seek to secure convictions. The introduction of computerised analysis and the use of DNA17 are likely only to accelerate this process. As a result, it is increasingly important to be aware of what you are dealing with. Initially, served reports will often be brief to the point of being uninformative, so if in doubt, ask for clarification.
If subsequently there is to be a challenge to the evidence, make sure you get the right legal and scientific experts on board as early as possible.
Footnotes
[1] See R v Reed & Reed [2010] 1 Cr.App.R. 23 and R v Broughton [2010] EWCA Crim 549.
[2] Sometimes referred to as ‘low-template DNA’ or ‘LTDNA’ for short.
[3] R v Dlugosz and others [2013] 1 Cr.App.R. 32; for scientific criticism of subjective opinions, see The Times, 13 November 2014, quoting Professor Peter Gill of Oslo University.
[4] R v Broughton [2010], Oxford Crown Court.
[5] Although fresh analysis of old DNA samples using LikeLTD featured in a decision to permit a ‘double jeopardy’ retrial in Scotland – see HMA v Sinclair [2014] HCJAC 131.
[6] See presentation by Tim Clayton of LGC during the NIST webinar ‘Probabilistic Genotyping and Software Programs’ (part 2), September 2014. (NIST is the National Institute of Standards and Technology, part of the US Department of Commerce.)
[7] See presentation by Matthew Greenhalgh of Cellmark as above.
[8] See, eg, Lord Thomas CJ, CBA Kalisher Lecture, 14 October 2014: ‘Expert evidence: the future of forensic science in criminal trials’.
[9] CPD V Evidence Part 33A.
[10] See Dror and Hampikian, ‘Subjectivity and bias in forensic DNA mixture interpretation’, Science & Justice 51 (2011) 204-208.
David Bentley QC of Doughty Street Chambers in London is a specialist in complex crime.