The first update to my long-read assembler benchmarking paper is up on F1000Research:
f1000research.com/articles/8-2138
Updates include...
(1/8)
The results now include fresh versions of some of the assemblers: Flye (v2.6 -> v2.7), Raven (v0.0.5 -> v0.0.8) and Shasta (v0.3.0 -> v0.4.0)
(2/8)
I've also added a new assembler to the comparison: NECAT
github.com/xiaochuanle/NE…
(3/8)
A new supp figure (S11) shows the maximum indel error size in each assembly:
github.com/rrwick/Long-re…
Flye did well here: it was less likely than the other assemblers to make large-scale errors in its assemblies.
(4/8)
I've also made various other fixes and enhancements in response to peer review. Thanks to all the reviewers for their feedback!
(5/8)
None of my main conclusions have changed, and my favourite assemblers are still Flye, Miniasm/Minipolish and Raven, each for different reasons. If you were to twist my arm and make me choose one favourite, I think it would be Flye.
(6/8)
This update won't be the last! Newer versions of Canu (v2.0) and Raven (v1.1.5) have been released and will feature in the next version of the paper.
(7/8)
Thanks again to the @F1000Research team for facilitating this kind of living document. I've enjoyed working with them!
(8/8)