Important paper in @bmj_latest from global leaders in test evaluation. Hope will improve quality of studies. 1/n
Guidance for the design and reporting of studies evaluating the clinical performance of tests for present or past SARS-CoV-2 infection bmj.com/content/372/bm…
From reading @DHSCgovuk @PHE_uk study reports and many others from organisations around the world, we are aware that the level of understanding about clinical test evaluation studies is often less than ideal.
This paper lays out clear steps to help improve.
3/n
We keep on hearing "you can't expect perfect studies in a pandemic". But this isn't the time to panic: it is time to make sure we do things well, so that the answers are both informative and likely to be true. That requires good planning and team input, as described here.
4/n
This paper goes through 8 key points, and gives details relevant to Covid test studies. Brief summary below, much more detail and examples in the paper.
5/n
#1: Define the intended use of the test
Many evaluations do not provide accurate estimates of test performance because the relationship between the purpose of the test, the selection of the study population, and the selection of the reference standard has not been carefully mapped out.
6/n
#2: Define the target condition—that is, what the test aims to detect. Potential target conditions include viral infection, covid-19 disease, infectiousness, immune response, viral clearance, past or recent infection, and immunity.
7/n
#3: Define the population in which the test will be evaluated. Clinical performance studies should be conducted in individuals sampled from the population in which the test will be used, as determined by the intended use in step 1.
8/n
#4: Describe the index test strategy
This may be one test, the same test repeated, or a combination of different tests. The entire testing pathway should be evaluated.
9/n
#5: If applicable, describe which tests are compared and why
Decisions need to be made regarding the comparative performance of different tests. The comparison can be between different forms of testing, different tests of the same form, or different testing strategies.
10/n
#6: Define the reference standard
The reference standard needs to clearly separate individuals who have the target condition from those who do not: those who have or have had the infection from those who have not; those who are infectious from those who are not.
11/n
#7: Analysis and presentation of results
Poor reporting of studies evaluating SARS-CoV-2 tests has been a common methodological concern in the studies to date. Reports should follow the STARD reporting guidelines for diagnostic accuracy studies.
12/n
#8: Prospectively register the study protocol
Prospective registration is a sign of quality, provides evidence that the study objectives, test procedures, outcome measures, eligibility criteria, and data to be collected were defined prospectively, and supports transparency.
13/n
We hope that this guide will help us get better evidence, so that we can make informed decisions and use the right tests at the right times in the right patients.
14/n
ONS just announced weekly infection rate in secondary school aged children is 0.43%.
Yesterday Test-and-Trace data showed 0.047% of LFTs were positive.
How can we get an estimate of the sensitivity of LFTs from this? I’ve come up with sensitivity=10%
Here are my workings
Three issues
#1 0.047% will include LFT false positives – 0.03% according to DHSC, so 0.017% will be LFT true positives.
#2 0.43% will include PCR false positives – let's go for 1 in 1000 (probably less) to be conservative. So 0.33% will be true cases.
#3 ONS data are based on number of children, LFT on number of tests. If we assume two tests per week (but only ever one positive per child), then double the rate to 0.034%.
So sensitivity seems to be about 0.034/0.33 = 10%
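The three adjustments above can be sketched in a few lines. This is a minimal back-of-envelope reproduction of the thread's figures; the variable names are my own, and the false-positive rates are the assumed values stated above.

```python
# Back-of-envelope estimate of LFT sensitivity from population figures
# (all rates in percent of the relevant denominator).

ons_rate = 0.43       # weekly infection rate in secondary school children (ONS)
lft_pos_rate = 0.047  # share of LFTs positive (Test-and-Trace)

# Issue 1: remove assumed LFT false positives (0.03% per DHSC)
lft_fp_rate = 0.03
lft_tp_rate = lft_pos_rate - lft_fp_rate      # -> 0.017% true positives

# Issue 2: remove assumed PCR false positives (1 in 1000, conservative)
pcr_fp_rate = 0.10
true_case_rate = ons_rate - pcr_fp_rate       # -> 0.33% true cases

# Issue 3: ONS counts children, LFT counts tests; with two tests per
# child per week but at most one positive per child, double the rate
lft_tp_rate_per_child = 2 * lft_tp_rate       # -> 0.034%

sensitivity = lft_tp_rate_per_child / true_case_rate
print(f"Estimated sensitivity: {sensitivity:.0%}")  # roughly 10%
```

Each input here is an estimate with considerable uncertainty, so the result is an order-of-magnitude figure, not a precise sensitivity.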
Anybody else want to present a version of these figures?
Update included electronic searches to end of Sept, and other resources up to mid-Nov 2020. Next update is already underway. Many thanks to the great crowd of people involved in putting this together.
2/12
Included both lateral flow antigen tests – data on 16 tests (of 92 with regulatory approval) in 48 studies (n=20,168)
And rapid molecular tests – data on 5 tests (of 43 with regulatory approval) in 30 studies (n=3,919)
Sorry - but there is a dreadful mistake in computing the sensitivity and specificity of LFTs in this report. If you look at Figure 32 (day 1, for example), the estimates of sens and spec are based on a subsample of the study with 364 LFT +ve and 686 LFT -ve: 34.7% are LFT +ve.
Results just published to 10th March (last Wednesday): 2.8 million tests in secondary school kids, 1324 positives – 0.048%, or 1 in 2086. Lowest rate ever observed.
Government figures would have predicted around 10,000.
Will post more analysis shortly
Sens = 50.1%, Spec = 99.97%, prevalence of 0.5%:
of 2,762,775 tests we would expect 6921 true positives and 825 false positives – nearly 6 times more test positives than have been reported.
To get down to the 1324 positives actually observed, either the prevalence has to be 0.036% (one fourteenth of the expected rate) – 36 per 100,000
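The expected counts can be checked directly. A minimal sketch, assuming a specificity of 99.97% (the value consistent with the 825 false positives quoted in the thread) and the stated sensitivity and prevalence:

```python
# Expected LFT positives under the assumed performance figures,
# versus the 1324 actually observed.

tests = 2_762_775
sens, spec, prev = 0.501, 0.9997, 0.005  # spec 99.97% reproduces the 825 FPs

true_pos = tests * prev * sens                # ~6921
false_pos = tests * (1 - prev) * (1 - spec)   # ~825

expected = true_pos + false_pos               # ~7746, nearly 6x observed
observed = 1324

# Prevalence implied by the observed count, holding sens and spec fixed
implied_prev = (observed - false_pos) / (tests * sens)
print(round(true_pos), round(false_pos))
print(f"implied prevalence: {implied_prev:.3%}")  # ~0.036%
```

The implied prevalence of about 0.036% is roughly one fourteenth of the 0.5% assumed, matching the figure in the thread.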