Dear #rstats & #systematicreview Twitter friends: I've been working hard on the {metabefor} R package these past weeks. I'll explain a bit more in the thread, but first, my question.
This is the current version of the logo.
(poll with question in thread)
What does this logo evoke?
... And to say a bit more about the package: it's meant to assist with the extraction of data from articles. Not by automating anything, mind you - but by providing a uniform, solid structure to the process and making it transparent, extensible, and machine-readable.
It does so by outlining a standard for specifying exactly what you want to extract from your sources: a hierarchical structure, with the possibility to repeat entities and to refer to other entities, that you define in a spreadsheet and that is then turned into plain text files - ultimately R Markdown files.
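To give a rough idea of what such a specification could look like, here's a minimal sketch as an R data frame - the column names are made up for illustration, not necessarily the ones {metabefor} actually expects:

```r
### Minimal sketch of a hierarchical extraction specification.
### Column names here are illustrative, not necessarily the exact
### columns {metabefor} uses in its specification spreadsheets.
library(tibble)

entitySpec <- tribble(
  ~entityId,       ~parentId,      ~title,                 ~valueTemplate, ~repeating,
  "study",         NA,             "Study",                NA,             FALSE,
  "sample",        "study",        "Sample",               NA,             TRUE,
  "sampleSize",    "sample",       "Sample size",          "integer",      FALSE,
  "variable",      "study",        "Variable",             NA,             TRUE,
  "varId",         "variable",     "Variable identifier",  "string",       FALSE,
  "association",   "study",        "Association",          NA,             TRUE,
  "associationId", "association",  "Association id",       "string",       FALSE
)
```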
Those R Markdown files can be immediately rendered, showing the structure of the extracted data, as shown below.
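For those less familiar with R Markdown: rendering such a filled-in extraction script is just the usual call (the filename below is a placeholder):

```r
### Render a completed extraction script like any other .Rmd file;
### "extraction-example.Rmd" is just a placeholder filename.
rmarkdown::render("extraction-example.Rmd")
```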
(I admit, this is a crappy example file 😬)
This also immediately shows which values have been extracted, allowing the extractor to check whether everything went well:
(again, this is an example file where I haven't bothered to 'extract' anything but the bare minimum, specifically 'varId' and 'associationId')
It also immediately validates all extracted values against the validation directives (specified as R expressions) in the original extraction specification referred to above - which, by the way, can be a spreadsheet in Google Sheets or .xlsx format:
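To illustrate the general idea behind such a validation directive (the `VALUE` placeholder is my own illustration here, not necessarily {metabefor}'s exact convention):

```r
### Sketch of how a validation directive could work: an R expression,
### stored as text in the specification, is evaluated against an
### extracted value. Names are illustrative only.
validationDirective <- "is.numeric(VALUE) && (VALUE > 0)"
extractedValue <- 120   # e.g. an extracted sample size

validationResult <- eval(
  parse(text = validationDirective),
  envir = list(VALUE = extractedValue)
)

validationResult   # TRUE if the extracted value passes the directive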
Finally, the most recent feature: it can combine extracted data from multiple files. This lends itself well to, e.g., starting with a scoping review and then, in a second phase, extracting more details about some entities to realize a systematic review.
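A generic sketch of what that kind of merge looks like in principle - placeholder paths and a placeholder helper function, not {metabefor}'s actual API:

```r
### Rough sketch: read all completed extraction scripts in a directory
### and merge them into one data frame. Purely illustrative.
extractionFiles <- list.files(
  "extraction-scripts",
  pattern = "\\.Rmd$",
  full.names = TRUE
)

### Hypothetical helper; in practice the package parses each extraction
### script, here we just record the filename so the sketch runs.
importOneExtraction <- function(path) {
  data.frame(source = basename(path), stringsAsFactors = FALSE)
}

allExtractions <- do.call(
  rbind,
  lapply(extractionFiles, importOneExtraction)
)
```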
Anyway - enough about this, I'll post a more extensive tweetorial when it's on CRAN. For now, just wanted to share my enthusiasm 🙂
@Heinonmatti @NHankonen It seems roughly right, but I'd nuance it by emphasizing that this is not about proof; it's more like an underlying ('meta', if you will) belief in progress: that changes and adjustments in theory are generally towards truth/reality, not random or systematically away from it.
@Heinonmatti @NHankonen It's not about proving that there's some truth to what I say. It's about arguing that the method I use to determine what I hold as truth (and say) is bound ('proven') to lead to truth eventually. Which is more than anybody else ('non-scientists') can reasonably claim, I think.
I think the ABCD touches on the 'joints' of why this is a valid argument quite well, because it's "a-theoretical" (like e.g. Intervention Mapping): it's a more generic ('meta', if you will) framework. So it may >
@Heinonmatti What is 'it', here? Behavior Change (BC)? Well, I believe BC _can_ work - if you follow IM's rigour, and then some.
But, the 'flavour' of exercising BC that has any chance of working is too complicated to ever be sellable.
What is sold (well), therefore, is the 'tricks' approach.
@Heinonmatti Nudging is an excellent example of this. The idea that you can have a list of BCTs that can be used to change behavior is a more sophisticated version (you know where I stand in this respect; see e.g. 'as simple as possible').
@Heinonmatti Effectiveness and "sellability" are mutually exclusive, I think.
Combined with this: efforts at behavior change, in NL at least, are mostly developed by advertising agencies. They never evaluate; they look at e.g. 'impact' instead. So you're never 'exposed'.