DSRs, @DSPyOSS for Rust, is here 🚀
Happy to finally announce the stable release of DSRs. Over the past few months, I’ve been building DSRs with incredible support and contributions from Maguire Papay, @tech_optimist, and @joshmo_dev.
A big shout out to @lateinteraction and @ChenMoneyQ who were the first people to hear my frequent rants on this!! Couldn't have done this without all of them.
DSRs originally started as a passion project to explore true compilation, and as it progressed I saw it becoming something more. I can’t wait to see what the community builds with it.
DSRs is a three-phase project:
1. API Stabilization. This phase, now nearly done, was mostly about implementing the API design. We kept it close to DSPy's style so onboarding is easy, while also making it a bit more idiomatic and intuitive!
2. Performance Optimization, benchmarked against DSPy. With the API design finalized, we want to improve performance on every front: we'll cut latency and improve the templates and optimizers in DSRs.
3. True Module Compilation. Why optimize just the signature when you can optimize and fuse much more? That's the idea behind the final phase of DSRs: a true LLM workflow compiler. More on this after Phase 2.
Really grateful to @PrimeIntellect for offering compute to drive Phase 2 and 3 experimentation! Big shoutout to them and @johannes_hage!!!
But what is DSRs? What does it offer? Let's see.
[1] Faster DataLoaders
Much like DSPy, DSRs uses Example and Prediction as the I/O currency in workflows, but DSRs is much stricter about them.
To make this easier we provide dataloaders to load data from CSV, JSON, Parquet, and HF as a vector of Examples.
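Conceptually, loading boils down to "rows in, Vec of Examples out". Here's a tiny self-contained sketch of that shape in plain Rust; the `Example` struct and `examples_from_csv` function are illustrative stand-ins, not DSRs' actual API (the real dataloaders handle quoting, Parquet, HF, and so on):

```rust
use std::collections::HashMap;

// Illustrative stand-in for DSRs' Example type: a bag of named fields.
#[derive(Debug, Clone, PartialEq)]
pub struct Example {
    pub fields: HashMap<String, String>,
}

// Minimal CSV-to-examples loader (no quoting/escaping), showing the
// "CSV in, Vec<Example> out" shape described above.
pub fn examples_from_csv(csv: &str) -> Vec<Example> {
    let mut lines = csv.lines();
    let headers: Vec<&str> = match lines.next() {
        Some(h) => h.split(',').map(str::trim).collect(),
        None => return Vec::new(),
    };
    lines
        .filter(|l| !l.trim().is_empty())
        .map(|line| {
            let fields = headers
                .iter()
                .zip(line.split(',').map(str::trim))
                .map(|(h, v)| (h.to_string(), v.to_string()))
                .collect();
            Example { fields }
        })
        .collect()
}

fn main() {
    let data = "question,answer\nWhat is 2+2?,4\nCapital of France?,Paris";
    let examples = examples_from_csv(data);
    println!("loaded {} examples", examples.len());
}
```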
[2] Signatures
DSRs provides two ways to define signatures: inline with a macro_rules! macro, or struct-based with an attribute macro. With the attribute macro you define your signature as a struct in "DSPy syntax"; with the macro_rules! signature you define it via an einsum-like notation.
Signatures are the only point of change for task structure. That means there's no separate CoT predictor; instead you pass that behavior as an argument to the macro.
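To make the struct-based flavor concrete, here's a hand-rolled sketch of what a signature boils down to: an instruction plus named input and output fields. The `Signature` trait and `QA` struct below are illustrative only; in DSRs the macros generate the real thing for you:

```rust
// Conceptual sketch only: illustrative names, not the actual DSRs API.
// A signature describes the task shape, independent of any predictor.
pub trait Signature {
    fn instruction(&self) -> &str;
    fn input_fields(&self) -> Vec<&'static str>;
    fn output_fields(&self) -> Vec<&'static str>;
}

// "DSPy-style" struct form: one type per task, fields declared up front.
// (The einsum-like inline form would express the same thing as roughly
// "question -> answer".)
pub struct QA;

impl Signature for QA {
    fn instruction(&self) -> &str {
        "Answer the question concisely."
    }
    fn input_fields(&self) -> Vec<&'static str> {
        vec!["question"]
    }
    fn output_fields(&self) -> Vec<&'static str> {
        vec!["answer"]
    }
}

fn main() {
    let sig = QA;
    println!(
        "{} -> {}",
        sig.input_fields().join(", "),
        sig.output_fields().join(", ")
    );
}
```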
[3] Modules
Modules in DSRs define the flow of the LLM workflow you are designing. Evaluation and optimization can be configured individually for each module, via traits like Evaluator and Optimizable that connect the module to the optimizer.
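A minimal sketch of the composition idea in plain Rust (names illustrative, not the actual DSRs trait): each module has a forward pass, and a pipeline of modules is itself a module, so evaluation and optimization can hook in at any level:

```rust
// Illustrative sketch: a module is one unit of the workflow.
pub trait Module {
    fn forward(&self, input: &str) -> String;
}

pub struct Summarize;
impl Module for Summarize {
    fn forward(&self, input: &str) -> String {
        format!("summary({input})") // stand-in for an LLM call
    }
}

pub struct Translate;
impl Module for Translate {
    fn forward(&self, input: &str) -> String {
        format!("translation({input})") // stand-in for an LLM call
    }
}

// A workflow is itself a module: steps run in order, each feeding the next.
pub struct Pipeline(pub Vec<Box<dyn Module>>);
impl Module for Pipeline {
    fn forward(&self, input: &str) -> String {
        self.0
            .iter()
            .fold(input.to_string(), |acc, step| step.forward(&acc))
    }
}

fn main() {
    let flow = Pipeline(vec![Box::new(Summarize), Box::new(Translate)]);
    println!("{}", flow.forward("doc"));
}
```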
[4] Predictors
Predictors are not Modules in DSRs; they are a separate concept, the only entity bound to a single signature, and they invoke the LLM call via Adapters. Currently we only have Predict, but we plan to add Refine and ReAct soon.
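Here's a hedged sketch of that relationship: a `Predict` bound to one signature, delegating the actual call to an `Adapter`. Both names mirror the description above rather than DSRs' exact API, and the adapter here is a mock instead of a real LLM client:

```rust
// Illustrative sketch: the adapter owns the actual LLM call.
pub trait Adapter {
    fn complete(&self, prompt: &str) -> String;
}

// A canned adapter standing in for a real LLM client.
pub struct MockAdapter;
impl Adapter for MockAdapter {
    fn complete(&self, prompt: &str) -> String {
        format!("[mock completion for: {prompt}]")
    }
}

// Predict is bound to exactly one signature and delegates to its adapter.
pub struct Predict<A: Adapter> {
    signature: &'static str, // e.g. "question -> answer"
    adapter: A,
}

impl<A: Adapter> Predict<A> {
    pub fn new(signature: &'static str, adapter: A) -> Self {
        Self { signature, adapter }
    }

    pub fn call(&self, input: &str) -> String {
        // Render the signature plus input into a prompt, then invoke the LLM.
        let prompt = format!("{}\ninput: {}", self.signature, input);
        self.adapter.complete(&prompt)
    }
}

fn main() {
    let predict = Predict::new("question -> answer", MockAdapter);
    println!("{}", predict.call("What is DSRs?"));
}
```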
[5] Evaluator
Evaluator is a trait implemented by the module you wish to evaluate. You define the metric method and call evaluate over a vector of examples to get the result.
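The shape of that contract, as a self-contained illustrative sketch (not the exact DSRs trait): the module supplies a prediction and a metric, and `evaluate` averages the metric over the examples:

```rust
// Illustrative sketch of the Evaluator idea with made-up names.
pub struct Example {
    pub input: String,
    pub expected: String,
}

pub trait Evaluator {
    // The module's own way of producing an output for one input.
    fn predict(&self, input: &str) -> String;

    // Per-example metric: exact match here, but any scoring works.
    fn metric(&self, predicted: &str, expected: &str) -> f64 {
        if predicted == expected { 1.0 } else { 0.0 }
    }

    // Average the metric over a vector of examples.
    fn evaluate(&self, examples: &[Example]) -> f64 {
        if examples.is_empty() {
            return 0.0;
        }
        let total: f64 = examples
            .iter()
            .map(|ex| self.metric(&self.predict(&ex.input), &ex.expected))
            .sum();
        total / examples.len() as f64
    }
}

// A trivial module that echoes its input, just to exercise the trait.
pub struct Echo;
impl Evaluator for Echo {
    fn predict(&self, input: &str) -> String {
        input.to_string()
    }
}

fn main() {
    let examples = vec![
        Example { input: "a".into(), expected: "a".into() },
        Example { input: "b".into(), expected: "c".into() },
    ];
    println!("score = {}", Echo.evaluate(&examples));
}
```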
[6] Optimization
Optimization is much more granular in DSRs: you can free individual components of a Module. By default everything is unoptimizable; to tag a component as optimizable you mark it with `parameter` and derive the Optimizable trait. We support nested parameters too.
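To illustrate the idea (with made-up names, not the actual derive): an Optimizable-style trait exposes named, mutable handles to the tunable pieces, untagged fields stay frozen, and a parent can splice in its child's parameters for nesting:

```rust
// Illustrative sketch of granular parameter tagging.
pub trait Optimizable {
    // Named, mutable handles to everything the optimizer may rewrite.
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)>;
}

pub struct Predictor {
    pub instruction: String, // tagged as a parameter: optimizer may edit it
    pub temperature: f64,    // untagged: stays frozen
}

impl Optimizable for Predictor {
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)> {
        vec![("instruction", &mut self.instruction)]
    }
}

// Nesting: a module exposes its child's parameters alongside its own.
pub struct Workflow {
    pub preamble: String,
    pub predictor: Predictor,
}

impl Optimizable for Workflow {
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)> {
        let mut params = vec![("preamble", &mut self.preamble)];
        params.extend(self.predictor.parameters());
        params
    }
}

fn main() {
    let mut flow = Workflow {
        preamble: "You are helpful.".into(),
        predictor: Predictor {
            instruction: "Answer briefly.".into(),
            temperature: 0.7,
        },
    };
    // A (toy) optimizer pass: rewrite every exposed parameter in place.
    for (name, value) in flow.parameters() {
        value.push_str(" (optimized)");
        println!("{name}: {value}");
    }
}
```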
We provide COPRO right now; the optimizer side is still quite experimental. With compute secured, we'll test and iterate on this more thoroughly and add support for more optimizers.
Stay tuned for more updates, and much more frequent ones. We have examples in the repo to get you up to speed, and a docs site is releasing soon!!
@DSPyOSS @tech_optimist @joshmo_dev @lateinteraction Github: github.com/krypticmouse/D…
Quickstart Examples: github.com/krypticmouse/D…