As short-read #sequencing (SRS) costs begin to drop again, undoubtedly fueled by a resurgence in competition, I suspect many liquid biopsy providers will add blood-based whole-genome sequencing (WGS) to supplement, or replace, the deep targeted sequencing paradigm.
With a few exceptions, most clinical-stage diagnostic companies build patient-specific panels by sequencing the solid tumor, then downselecting to a few dozen mutations to survey in the bloodstream.

I don't think this approach is going anywhere anytime soon.
However useful, the deep targeted approach suffers from several challenges:

1. It requires access to tissue.
2. It requires the construction of patient-specific PCR panels.
3. It requires significant over-sequencing ($$$).
4. It introduces a third layer of error (PCR).

Companies have tried to solve these issues with some combination of molecular barcoding, novel primer construction, bioinformatics, and additional sources of 'omics data, such as #epigenetics or DNA fragmentation patterns.
Despite rosy rhetoric, I'm not convinced any group has a durable technical advantage in deep targeted sequencing. To that point, the reported limits of detection (LoD) for these assays tend to hit near 0.01%, or roughly 1 cancer fragment among 10,000 healthy fragments.
It's well established that every tumor is unique, not just from patient to patient but from cancer type to cancer type. For example, breast and prostate cancers often have very low tumor mutational burdens, meaning they aren't highly mutated to begin with.
Since low-burden cancers offer fewer mutations to track, custom panel building becomes that much more difficult and the deep targeted approach even less effective. To reiterate, I don't think the 'depth' approach is going anywhere, but it's not a panacea.
For these reasons, I was blown away last summer reading @landau_lab et al.'s work on 'breadth over depth': that is, using shallower, blood-based #WGS instead of ultra-deep, targeted sequencing. I've attached my favorite figure from the paper below.

nature.com/articles/s4159…
With a WGS approach, one potentially no longer needs to:

1. Get access to tissue.
2. Construct patient-specific primers.
3. Oversequence using barcodes.
4. Introduce PCR artifacts.

I'll refrain from arguing that the WGS approach will fully usurp deep targeting.
I'm sure there are unique issues and challenges here too. However, now more than ever, I'm convinced that blood-based WGS is a necessary addition for the most competitive liquid biopsy players.
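
To make the 'breadth over depth' intuition concrete, here's a toy back-of-the-envelope model. This is my own sketch, not the paper's method, and every number below is an assumption for illustration:

```python
def expected_tumor_reads(n_mutations: int, unique_depth: float,
                         tumor_fraction: float) -> float:
    """Toy model: expected tumor-supporting reads scale with
    (mutations interrogated) x (unique, deduplicated depth) x (tumor fraction)."""
    return n_mutations * unique_depth * tumor_fraction

TUMOR_FRACTION = 1e-5  # an MRD-like setting, 10x below the ~0.01% LoD above

# Deep targeted panel: ~30 tracked mutations. Unique depth is capped by the
# ~5,000 genome equivalents in a tube of blood, no matter how hard you sequence.
panel = expected_tumor_reads(n_mutations=30, unique_depth=5_000,
                             tumor_fraction=TUMOR_FRACTION)

# Shallow WGS: ~3,000 somatic mutations genome-wide at ~30x coverage.
wgs = expected_tumor_reads(n_mutations=3_000, unique_depth=30,
                           tumor_fraction=TUMOR_FRACTION)

print(f"deep targeted panel: ~{panel:.1f} expected tumor reads")  # ~1.5
print(f"shallow WGS:         ~{wgs:.1f} expected tumor reads")    # ~0.9
```

The exact numbers don't matter; the point is that once unique-molecule limits cap the value of re-sequencing the same few loci, interrogating thousands of sites at modest coverage recovers a comparable amount of signal without tissue or custom panels.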

Liquid biopsies are limited by the number of molecules in a tube of blood.
So, it's vital that labs make the most of whatever material they start with. When he was still at @freenome, @ImranSHaque gave a great presentation on the dynamics here, especially in support of a truly multi-omic approach to this problem.

ihaque.org/static/talks/2…
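
To put rough numbers on that molecule budget, here's a back-of-the-envelope sketch. The plasma volume, cfDNA yield, and tumor fractions are all assumptions for illustration, not figures from the talk:

```python
import math

# --- illustrative assumptions, not measured values ---
PLASMA_ML = 4.0                # plasma recovered from one 10 mL blood tube
CFDNA_NG_PER_ML = 5.0          # typical cfDNA yield per mL of plasma
PG_PER_HAPLOID_GENOME = 3.3    # mass of one haploid human genome copy

genome_equivalents = (PLASMA_ML * CFDNA_NG_PER_ML * 1_000) / PG_PER_HAPLOID_GENOME

def p_detect(n_sites: int, tumor_fraction: float) -> float:
    """Poisson approximation of the chance that at least one tumor-derived
    fragment overlapping any of n_sites tracked mutations is in the tube,
    assuming perfect recovery and conversion (an optimistic upper bound)."""
    expected = genome_equivalents * tumor_fraction * n_sites
    return 1.0 - math.exp(-expected)

print(f"~{genome_equivalents:,.0f} genome equivalents per tube")  # ~6,000
for n_sites in (1, 10, 100, 1_000):
    print(f"{n_sites:>5} sites at 0.001% tumor fraction -> "
          f"P(detect) ~ {p_detect(n_sites, 1e-5):.2f}")
```

No amount of assay cleverness can detect molecules that never made it into the tube, which is exactly why interrogating more sites, and more analytes, matters at low tumor fractions.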
I'll end by cautioning that implementing blood-based WGS isn't a walk in the park.

On the front-end, even though PCR errors would disappear, SRS still generates two layers of error:

1. PCR-like errors from on-instrument cluster generation
2. SBS-specific systematic errors
These aren't deal-killers, per se, but they're problematic nonetheless, potentially resulting in flipped base calls and/or sequence-context-specific coverage gaps.
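
To see why these residual error layers matter for a breadth-first approach, compare the expected true signal to the expected error-driven false signal under some assumed rates (illustrative numbers only):

```python
# --- illustrative assumptions ---
N_SITES = 3_000          # patient-specific mutations tracked genome-wide
COVERAGE = 30            # WGS depth
TUMOR_FRACTION = 1e-5
PER_BASE_ERROR = 1e-3    # rough raw SBS substitution error rate

true_reads = N_SITES * COVERAGE * TUMOR_FRACTION        # ~0.9
# Only ~1/3 of substitution errors hit the specific alternate base we track.
false_reads = N_SITES * COVERAGE * PER_BASE_ERROR / 3   # ~30

print(f"expected true tumor reads:   ~{true_reads:.1f}")
print(f"expected error-driven reads: ~{false_reads:.0f}")
```

With raw error rates orders of magnitude above the signal, blood-based WGS lives or dies on error suppression: fragment-level features, error modeling, and cleaner chemistries.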

On the back-end, there'd be gobs more data to interpret, store, and activate on behalf of patients.
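
For a ballpark sense of scale, using rule-of-thumb compression ratios (assumed):

```python
GENOME_BP = 3.1e9   # human genome size
COVERAGE = 30       # per-sample WGS depth

bases = GENOME_BP * COVERAGE    # ~93 Gbp per sample
cram_gb = bases * 0.25 / 1e9    # ~2 bits/base in compressed CRAM -> ~23 GB
fastq_gb = bases * 0.5 / 1e9    # gzip'd FASTQ at ~0.5 bytes/base -> ~46 GB

print(f"~{bases / 1e9:.0f} Gbp -> ~{cram_gb:.0f} GB CRAM or "
      f"~{fastq_gb:.0f} GB FASTQ.gz per 30x genome")
```

Versus a small fraction of a gigabyte for a typical targeted panel, and that's per patient, per timepoint.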
I'm still learning about the puts and takes of the WGS approach, and which innovations on the hardware, chemistry, and informatics sides might bring it (hopefully) closer to clinical reality.
