The @JBIEBHC Critical Appraisal Tools (CATs) are changing! This thread details the biggest changes and how they reflect current methodological developments in the field, for #JBIMethodology month 1/7
Firstly, some background: the JBI CATs present a series of questions that ask reviewers to consider particular safeguards in the design of the study being appraised (e.g. randomisation in #RCTs). 2/7
In this format, the JBI CATs are typically completed by using these questions as a checklist (scoring each question as met, unmet, etc.). However, they can also be completed as if they were a scale 3/7
The first change made to the CATs was aligning each question with a relevant “domain of bias”. This facilitates using them as signalling questions to determine whether that domain is at risk of bias or not 4/7
The second change was separating questions of internal validity from questions related to constructs such as reporting quality. This is to highlight that when we assess the risk of bias of a study, we are only interested in internal validity and not these other constructs 5/7
The final (major) change supports assessing bias at both the outcome and result level. Some safeguards (e.g. randomisation) operate at the study level, while others (e.g. blinding) operate at the outcome level. This change allows reviewers to appraise bias at the appropriate level 6/7
These changes (and many more!) will all be presented very soon in a publication in JBI Evidence Synthesis and the tools themselves will be made freely available to the public. If you have any questions, please contact us! jbi.global/contact 7/7