@drdavideyre isn't the 89.5% figure based on samples from symptomatic index cases taken at the test-and-trace centre? So this shows how well the test works in those who attend test-and-trace with symptoms? Pretty good idea to use them there.
Using this test (or better ones) backed with PCR in test-and-trace centres could revolutionize contact tracing - if people get their results and meet an infection control team before they leave the centre, we will move everything forward by 3 days. Add in financial help too.
But how well they work in mass testing or in contacts will depend on the distribution of viral loads in these groups - which I cannot see any data on in this paper. The Cochrane review showed sensitivity is much lower in those without symptoms.
There is a much greater chance that viral loads will be lower when people without symptoms are tested, which means more people will be missed, including those who are infectious. Do you have any data from these groups?
(It would also be really helpful if your graphics in this paper reported the observed data as well as the fitted model, like this previous figure from PHE - we would then properly be able to understand the uncertainty in the estimates - could you get it added? It's part of STARD).
Take note - @dhscgov rapid tests are being sent out (certainly in schools) with two different (and conflicting) information sheets, one in the box and one given out separately.
(the crumpled one with the picture is in the box, the glossy one is handed out separately).
1/10
The clue to which one is most up to date is on the back page (the one in the box is the out of date version).
2/10
The important difference is on page 2: the version in the box (1st) has far more info than the version given out separately about who should use the test and what they should do.
Important paper in @bmj_latest from global leaders in test evaluation. Hope it will improve the quality of studies. 1/n
Guidance for the design and reporting of studies evaluating the clinical performance of tests for present or past SARS-CoV-2 infection bmj.com/content/372/bm…
From reading @DHSCgovuk @PHE_uk study reports and many others from organisations around the world, we are aware that the level of understanding of clinical test evaluation studies is often less than ideal.
This paper lays out clear steps to help improve.
3/n
ONS just announced the weekly infection rate in secondary school-aged children is 0.43%.
Yesterday Test-and-Trace data showed 0.047% of LFTs were positive.
How can we get an estimate of the sensitivity of LFTs from this? I’ve come up with sensitivity=10%
Here are my workings
Three issues:
#1 0.047% will include LFT false positives – 0.03% according to DHSC, so 0.017% will be LFT true positives.
#2 0.43% will include PCR false positives – let's go for 1 in 1000 (probably less) to be conservative. So 0.33% will be true cases
#3 ONS data are based on the number of children, LFT data on the number of tests. If we assume two tests per week (but only ever one positive per child), then double the rate to 0.034%
So sensitivity seems to be about 0.034/0.33 = 10%
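For anyone who wants to check the arithmetic, here is a minimal Python sketch of the calculation above. The figures (0.43%, 0.047%, 0.03%, 1-in-1000 PCR false positives) and the two-tests-per-week assumption are simply those quoted in this thread, not new data.

```python
# Back-of-envelope estimate of LFT sensitivity in secondary school children,
# following the three adjustments described above (all figures as quoted in the thread).

ons_weekly_infection_rate = 0.43 / 100   # ONS weekly infection rate (proportion of children)
lft_positive_rate         = 0.047 / 100  # proportion of LFTs positive (Test-and-Trace)
lft_false_positive_rate   = 0.03 / 100   # DHSC figure for LFT false positives
pcr_false_positive_rate   = 0.1 / 100    # assumed 1 in 1000 PCR false positives (conservative)
tests_per_child_per_week  = 2            # assumption: two LFTs per child per week

# #1 remove LFT false positives from the observed LFT positive rate
lft_true_positive_rate = lft_positive_rate - lft_false_positive_rate      # 0.017%

# #2 remove assumed PCR false positives from the ONS infection rate
true_infection_rate = ons_weekly_infection_rate - pcr_false_positive_rate  # 0.33%

# #3 convert the per-test rate to a per-child rate (only one positive per child)
per_child_detected_rate = lft_true_positive_rate * tests_per_child_per_week  # 0.034%

sensitivity = per_child_detected_rate / true_infection_rate
print(f"Estimated sensitivity: {sensitivity:.0%}")  # about 10%
```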
Anybody else want to present a version of these figures?
The update included electronic searches to the end of Sept, and other resources up to mid Nov 2020. The next update is already underway. Many thanks to the great crowd of people involved in putting this together.
2/12
Included both lateral flow antigen tests – data on 16 tests (of 92 with regulatory approval) in 48 studies (n=20,168)
And rapid molecular tests – data on 5 tests (of 43 with regulatory approval) in 30 studies (n=3,919)