This is a wonderful contribution to the scientific community! Effective network/citation-based tools for exploring the literature are absolutely vital, particularly in domains (like interdisciplinary areas) that are still growing and not yet well-structured/paradigmatic.
The system leverages network-based similarity (co-citation, co-reference) instead of raw citations, and uses these metrics to yield a much more manageable and explorable set of "leads" for each seed paper.
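To make the idea concrete, here's a minimal sketch (my own toy example, not the tool's actual implementation) of how co-citation and shared-reference similarity can be computed from a plain citation edge list and used to rank leads for a seed paper:

```python
# Toy citation graph: each paper maps to the set of papers it cites.
citations = {
    "A": {"X", "Y", "Z"},
    "B": {"X", "Y"},
    "C": {"Y", "Z"},
}

def coupling_strength(p, q, graph):
    """Shared-reference (bibliographic coupling) score: references cited by both p and q."""
    return len(graph[p] & graph[q])

def cocitation_strength(p, q, graph):
    """Co-citation score: number of papers that cite both p and q."""
    return sum(1 for refs in graph.values() if p in refs and q in refs)

# Rank candidate "leads" for a seed paper by how many references they share with it.
seed = "A"
leads = sorted(
    (other for other in citations if other != seed),
    key=lambda other: coupling_strength(seed, other, citations),
    reverse=True,
)
print(leads)  # papers ordered by overlap with the seed's reference list
```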
I should add that it builds on the open dataset provided by @SemanticScholar, who have also been adding very useful tools for exploring the citation graph through "meaningful citations" and for seeing where in a paper each citation was made.
This growing suite of publicly accessible tools makes it far easier to employ sophisticated citation-chaining strategies, which are essential in interdisciplinary and still-growing domains, where keywords aren't as useful.
Can #toolsforthought like RoamResearch, Obsidian, and Notion actually help us think better? If so, how?
In our #UIST2024 paper, we distill hypertext patterns from real-world usage that augment sensemaking by addressing its temporal + spatial fragmentation
I see this work as complementary to the rich practical wisdom many have shared in the #toolsforthought community - I hope that connecting this wisdom with research on hypertext and sensemaking can enrich our understanding of how these tools can help us think better
If this sounds interesting to you, dig into the paper here:
And come chat with us during/after the "Learning to Learn" session at 3:35p on Wed 10/16 @ #UIST2024!
I get a wonderful shared delight from hearing others talk about deep insights that influence/delight them.
I'm also curious what constitutes "depth" (broad implications, surprise, difficulty, etc.).
Would you share a deep insight that influences/delights you + explain why?
Thought sparked by this snippet from @Noahpinion's interview w/ @VitalikButerin, where they get into the reasoning behind Ethereum's coming switch from Proof-of-Work to Proof-of-Stake:
#Idea - NLP to do initial clustering/classification of altmetric-aggregated discussion of preprints, to facilitate finding high-signal discussions and critiques of preprints (honestly would want this for any paper!).
Preprint servers already hook into @altmetric to expose mentions of a paper in news/blogs + social media (tweets).
But it links to an undifferentiated list of sources (no titles, no citation contexts) - good for signaling "attention", but a missed opportunity to surface real context.
If preprint servers have access to the commercial API from @altmetric, classifying these mentions (and then clustering them and enabling search over them) is eminently doable (e.g., it can be framed as a straightforward citation context classification problem, like with @scite and @SemanticScholar).
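As a rough illustration of the clustering half of this idea (the mention texts below are made up, and the actual altmetric API fields and a proper trained citation-context classifier are assumptions, not anything that exists today), an off-the-shelf TF-IDF + k-means pipeline would already separate mention types coarsely:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical mention texts aggregated for one preprint.
mentions = [
    "This preprint's methods section has a serious confound in Table 2...",
    "Great thread summarizing the key result for a general audience",
    "New preprint out today! Link below.",
    "I re-ran the analysis and could not reproduce Figure 3",
]

# Embed mentions and cluster them into rough discussion types
# (e.g., critique/reproduction vs. summary vs. simple announcement).
vectors = TfidfVectorizer(stop_words="english").fit_transform(mentions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(mentions, labels):
    print(label, text[:60])
```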
A striking example of "behind the scenes" of great research: Esther Duflo noted that a crucial enabler of her Nobel-prize-winning work was a masterful synthesis of human resource economics in a handbook chapter.
I... think I tracked down the (141-page!!!) handbook chapter?
And it is indeed wonderful!
And what a synthesis the conclusion is! It clearly identifies progress as well as crucial open problems, all deeply grounded in a detailed analysis of the literature.
Started as a "short thread", turned into a longer thing. Hope you find it helpful!
There's been some good discussion on effective search as a core primitive for tools for thought, and the limitations of existing tools, such as @RoamResearch wrt search.