Herumb Shandilya 🦀 (@krypticmouse)
Followers: 2K · Following: 7K · Media: 303 · Statuses: 2K
Research @StanfordCRFM @HazyResearch | Building DSRs | MSCS, ColBERT, DSPy @Stanford
Stanford · Joined December 2013
DSRs, @DSPyOSS for Rust is here 🚀 Happy to finally announce the stable release of DSRs. Over the past few months, I've been building DSRs with incredible support and contributions from folks like Maguire Papay, @tech_optimist, and @joshmo_dev. A big shout out to @lateinteraction and
Trying to build good docs for DSRs (@DSPyOSS in Rust) that could also serve as a bridge to understanding DSPy conceptually. Looking for collaborators who can drive the initiative! DM if interested! P.S. posting DSRs' new releases/updates on Monday!
my first poster (!) at the PyTorch conference, around the state of 3D generation in the PyTorch ecosystem and how you can optimize it with native PyTorch tricks for training and inference. let's chat if you're around and wanna talk about 3D generation
Using LLM-as-Judge with GEPA for automatic feedback generation. Just added a pattern to DSRs that lets you use an LLM judge to automatically generate rich textual feedback for prompt optimization instead of writing manual rules. cc @LakshyAAAgrawal @zaph0id
          
           Just implemented GEPA (the reflective prompt optimizer from  https://t.co/gRIzxz4C2G)  in Rust for DSRs. Key difference from COPRO/MIPROv2: uses rich textual feedback + per-example Pareto frontier instead of just scalar scores. Keeps diverse candidates around instead of converging 
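Roughly, the two ingredients above look like the sketch below: a judge-style metric that returns a score plus written feedback, and a per-example Pareto filter that keeps any candidate that is best on at least one example. All names here are illustrative, not the actual DSRs/GEPA API.

```rust
// Illustrative sketch only; the real DSRs/GEPA types and names differ.
struct Feedback {
    score: f64,       // scalar score, as COPRO/MIPROv2 would use
    critique: String, // rich textual feedback an LLM judge would produce
}

// Stand-in for an LLM-as-Judge call: in practice this would prompt a judge
// model with the prediction and return its written critique.
fn judge(expected: &str, predicted: &str) -> Feedback {
    if predicted.trim() == expected.trim() {
        Feedback { score: 1.0, critique: "Correct and concise.".into() }
    } else {
        Feedback {
            score: 0.0,
            critique: format!("Expected `{expected}`, got `{predicted}`; the answer is wrong."),
        }
    }
}

// Per-example Pareto filter: keep every candidate prompt that is the best
// on at least one example, instead of collapsing to one global winner.
fn pareto_front(scores: &[Vec<f64>]) -> Vec<usize> {
    let n_examples = scores.first().map_or(0, |s| s.len());
    let mut keep = Vec::new();
    for ex in 0..n_examples {
        let best = (0..scores.len())
            .max_by(|&a, &b| scores[a][ex].partial_cmp(&scores[b][ex]).unwrap())
            .unwrap();
        if !keep.contains(&best) {
            keep.push(best);
        }
    }
    keep
}

fn main() {
    // Rows: candidate prompts; columns: per-example scores.
    let scores = vec![
        vec![1.0, 0.0, 0.0],
        vec![0.0, 1.0, 1.0],
        vec![0.5, 0.5, 0.5],
    ];
    println!("{:?}", pareto_front(&scores)); // [0, 1]: both specialists survive
    println!("{}", judge("Paris", "Lyon").critique);
}
```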
             The DSPy community is growing in Boston! ☘️🔥 We are beyond excited to be hosting a DSPy meetup on October 15th! Come meet DSPy and AI builders and learn from talks by Omar Khattab (@lateinteraction), Noah Ziems (@NoahZiems), and Vikram Shenoy (@vikramshenoy97)! See you in 
            
            @DSPyOSS @tech_optimist @joshmo_dev @lateinteraction Github:  https://t.co/vJ0FRvdJq6  Quickstart Examples:
Stay tuned for more updates, and much more frequent ones. We have examples in the repo to get you up to speed, but we have a docs site releasing soon!!
We provide COPRO right now; as of now, the optimizers are quite experimental. With compute now, we'll test and iterate on this more thoroughly and add support for more optimizers.
[6] Optimization Optimization is much more granular in DSRs: you can free individual components of the Module. By default everything is unoptimizable; to tag a component as optimizable you mark it with `parameter` and derive the Optimizable trait. We support nested parameters too.
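Conceptually, the `parameter` tag plus the Optimizable derive boil down to something like the sketch below: the trait exposes the tunable text pieces (including nested ones) so an optimizer can rewrite them. The exact DSRs attribute/derive syntax is not shown; everything here is a hand-rolled stand-in.

```rust
// Conceptual sketch; the real DSRs `parameter` attribute and
// `#[derive(Optimizable)]` expand to something in this spirit.
trait Optimizable {
    // Expose every tunable text parameter, including nested ones,
    // so the optimizer can rewrite them in place.
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)>;
}

struct Summarize {
    instruction: String, // tagged as a parameter: the optimizer may rewrite it
    temperature: f32,    // untagged: optimization leaves it untouched
}

impl Optimizable for Summarize {
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)> {
        vec![("summarize.instruction", &mut self.instruction)]
    }
}

struct Pipeline {
    summarize: Summarize, // nested component: its parameters bubble up
}

impl Optimizable for Pipeline {
    fn parameters(&mut self) -> Vec<(&'static str, &mut String)> {
        self.summarize.parameters()
    }
}

fn main() {
    let mut pipeline = Pipeline {
        summarize: Summarize {
            instruction: "Summarize the passage in one sentence.".into(),
            temperature: 0.7,
        },
    };
    for (name, value) in pipeline.parameters() {
        println!("{name}: {value}");
    }
}
```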
[5] Evaluator Evaluator is defined as a trait to be implemented by the module you wish to evaluate. You define the metric methods and call evaluate over an example vector to get the result.
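A bare-bones sketch of that shape (hypothetical names, not the real DSRs trait): the module supplies the metric, and a provided evaluate method averages it over a vector of examples.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for DSRs' Example/Prediction types.
type Example = HashMap<String, String>;
type Prediction = HashMap<String, String>;

// Evaluator-style trait sketch: implement forward + metric,
// and `evaluate` over a Vec of examples comes for free.
trait Evaluator {
    fn forward(&self, example: &Example) -> Prediction;
    fn metric(&self, example: &Example, prediction: &Prediction) -> f64;

    fn evaluate(&self, examples: &[Example]) -> f64 {
        let total: f64 = examples
            .iter()
            .map(|ex| self.metric(ex, &self.forward(ex)))
            .sum();
        total / examples.len().max(1) as f64
    }
}

// Toy module: "predicts" by echoing a field, scored by exact match.
struct Echo;

impl Evaluator for Echo {
    fn forward(&self, example: &Example) -> Prediction {
        let mut pred = Prediction::new();
        pred.insert("answer".into(), example.get("question").cloned().unwrap_or_default());
        pred
    }
    fn metric(&self, example: &Example, prediction: &Prediction) -> f64 {
        (example.get("answer") == prediction.get("answer")) as i32 as f64
    }
}

fn main() {
    let ex: Example = [
        ("question".to_string(), "ping".to_string()),
        ("answer".to_string(), "ping".to_string()),
    ]
    .into_iter()
    .collect();
    println!("{}", Echo.evaluate(&[ex])); // 1
}
```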
[4] Predictors Predictors are not Modules in DSRs; rather, they are separate: the only entity that is bound to a single signature and invokes the LLM call via Adapters. Currently we only have Predict, but we plan to add Refine and React soon.
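Sketched loosely (illustrative names, not the actual DSRs types), a Predict is generic over exactly one signature and delegates the actual LM call to an adapter:

```rust
// Conceptual sketch of "a predictor is bound to one signature and calls
// the LLM through an Adapter"; not the real DSRs API.
trait Signature {
    fn instruction() -> &'static str;
}

trait Adapter {
    // Formats the prompt for a signature and calls the LM, returning raw text.
    fn call(&self, instruction: &str, input: &str) -> String;
}

struct EchoAdapter; // stand-in for a real chat adapter over an LM client

impl Adapter for EchoAdapter {
    fn call(&self, instruction: &str, input: &str) -> String {
        format!("[{instruction}] {input}")
    }
}

struct QuestionAnswer;
impl Signature for QuestionAnswer {
    fn instruction() -> &'static str { "Answer the question." }
}

// Predict is tied to exactly one signature S; Modules compose these.
struct Predict<S: Signature, A: Adapter> {
    adapter: A,
    _signature: std::marker::PhantomData<S>,
}

impl<S: Signature, A: Adapter> Predict<S, A> {
    fn new(adapter: A) -> Self {
        Predict { adapter, _signature: std::marker::PhantomData }
    }
    fn forward(&self, input: &str) -> String {
        self.adapter.call(S::instruction(), input)
    }
}

fn main() {
    let qa = Predict::<QuestionAnswer, _>::new(EchoAdapter);
    println!("{}", qa.forward("What is 2 + 2?"));
}
```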
[3] Modules Modules in DSRs define the flow of the LLM workflow you are designing. You can configure the evaluation and optimization individually for each module. You have traits like Evaluator and Optimizable that connect to the optimizer to define the process for that module.
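Loosely, a Module is just a struct whose forward method wires predictors (or other modules) together; the Evaluator and Optimizable impls would live on the same struct. The sketch below uses made-up names purely to show the shape.

```rust
// Shape sketch only; the real DSRs Module/Predict types differ.
struct Predictor {
    instruction: String,
}

impl Predictor {
    fn forward(&self, input: &str) -> String {
        // A real predictor would call the LM; here we just tag the text.
        format!("{} => {}", self.instruction, input)
    }
}

// A Module owns its components and defines the workflow's data flow.
struct SummarizeThenAnswer {
    summarize: Predictor,
    answer: Predictor,
}

impl SummarizeThenAnswer {
    fn forward(&self, passage: &str, question: &str) -> String {
        let summary = self.summarize.forward(passage);
        self.answer.forward(&format!("{summary}\nQ: {question}"))
    }
}

fn main() {
    let module = SummarizeThenAnswer {
        summarize: Predictor { instruction: "Summarize".into() },
        answer: Predictor { instruction: "Answer".into() },
    };
    println!("{}", module.forward("Rust is a systems language.", "What is Rust?"));
}
```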
Signatures are the only point of change for task structure. That means you don't have CoT predictors separately; instead, you pass that as an argument to the macro.
[2] Signatures DSRs provides you 2 ways to initialize signatures: inline with macros and struct-based with attribute macros. With the attribute macro you define your signatures as structs in "DSPy syntax", and with the macro_rules signature you define them via an einsum-like notation.
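Conceptually (the actual DSRs macro syntax is not shown here), a struct-based signature expands to something like the sketch below: named input/output fields plus an instruction, with the inline form being roughly a "question -> answer" shorthand for the same thing, and extras like CoT passed as a macro argument rather than a separate predictor.

```rust
// Conceptual sketch only: DSRs' real macros differ, but a struct-based
// signature boils down to something in this spirit after expansion.
#[derive(Debug)]
struct FieldSpec {
    name: &'static str,
    desc: &'static str,
}

trait Signature {
    fn instruction() -> &'static str;
    fn inputs() -> Vec<FieldSpec>;
    fn outputs() -> Vec<FieldSpec>;
}

// The inline macro form would look roughly like "question -> answer";
// this struct is the hand-written equivalent of that signature.
struct QuestionAnswer;

impl Signature for QuestionAnswer {
    fn instruction() -> &'static str {
        "Answer the question concisely."
    }
    fn inputs() -> Vec<FieldSpec> {
        vec![FieldSpec { name: "question", desc: "User question" }]
    }
    fn outputs() -> Vec<FieldSpec> {
        vec![FieldSpec { name: "answer", desc: "Short answer" }]
    }
}

fn main() {
    println!("{}", QuestionAnswer::instruction());
    println!("inputs: {:?}", QuestionAnswer::inputs());
    println!("outputs: {:?}", QuestionAnswer::outputs());
}
```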
[1] Faster DataLoaders DSRs, much like DSPy, uses Example and Prediction as the I/O currency in workflows, but is much stricter about it. To make this easier we provide a DataLoader to load data from CSV, JSON, Parquet, and HF as a vector of Examples.
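The shape of that, sketched with a toy CSV-only loader (the real DSRs DataLoader covers CSV, JSON, Parquet, and HF datasets; the Example type here is a hypothetical stand-in):

```rust
use std::collections::HashMap;

// Hypothetical Example type: a bag of named string fields.
type Example = HashMap<String, String>;

// Minimal CSV loader sketch (no quoting/escaping handling).
fn load_csv(text: &str) -> Vec<Example> {
    let mut lines = text.lines();
    let header: Vec<&str> = lines.next().unwrap_or("").split(',').collect();
    lines
        .map(|line| {
            header
                .iter()
                .zip(line.split(','))
                .map(|(k, v)| (k.to_string(), v.to_string()))
                .collect()
        })
        .collect()
}

fn main() {
    let data = "question,answer\nWhat is 2+2?,4\nCapital of France?,Paris";
    let examples = load_csv(data);
    assert_eq!(examples.len(), 2);
    println!("{:?}", examples[0]["question"]);
}
```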
Many thanks to Maguire Papay (Hokyo AI, https://t.co/rNTlkEJq9V) for kickstarting the compile-time signature initiative. Big, big thanks to folks like @tech_optimist and @joshmo_dev who keep the Discord alive and reply to repeated spam 🥹. The discussions are extremely
             [5] Next Steps Well I aim to add caching of responses by next week and add more examples and docs to the website. I'm pretty hyped for the next phase of this but more on that next week. 
[4] LMs, Module and Settings We support the full DSPy-style Module building setup, with settings as well. We get the usage report from LMs and have a much more hassle-free client interface. As for Messages and Chat, we built our own much easier-to-use abstraction on top of async-openai.
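The kind of thin Message/Chat layer being described looks roughly like this; the names are illustrative, not the actual DSRs types (which sit on top of async-openai in the real crate).

```rust
// Illustrative Message/Chat shape; not the actual DSRs API.
#[derive(Clone, Debug)]
enum Role { System, User, Assistant }

#[derive(Clone, Debug)]
struct Message { role: Role, content: String }

#[derive(Default, Debug)]
struct Usage { prompt_tokens: u32, completion_tokens: u32 }

#[derive(Debug)]
struct Chat {
    messages: Vec<Message>,
    usage: Usage, // would be filled from the provider's usage report
}

impl Chat {
    fn new(system: &str) -> Self {
        Chat {
            messages: vec![Message { role: Role::System, content: system.into() }],
            usage: Usage::default(),
        }
    }
    fn user(mut self, content: &str) -> Self {
        self.messages.push(Message { role: Role::User, content: content.into() });
        self
    }
}

fn main() {
    let chat = Chat::new("You are terse.").user("Define DSPy in one line.");
    println!("{} messages, usage: {:?}", chat.messages.len(), chat.usage);
}
```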
[3] Switch to Serde Based on discussions, we decided to switch most of the signature core to work with serde values, something that has become a norm in some data abstractions too. This was mostly for QoL and dev velocity benefits.
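In practice, "working with serde values" means field data flows through serde_json::Value, so any serializable type can ride along. A tiny sketch (the field names here are just an example):

```rust
use serde_json::{json, Value};

fn main() {
    // Dynamic, serde-backed representation of an example's fields.
    let mut example: Value = json!({
        "question": "What is the capital of France?",
        "metadata": { "source": "trivia", "difficulty": 2 }
    });

    // Reading and writing fields goes through the same Value representation.
    example["answer"] = json!("Paris");
    println!("{}", example["metadata"]["difficulty"]); // 2
    println!("{}", example["answer"]);                 // "Paris"
}
```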
[2] Data Abstractions We modified the Example macros to be more self-contained and added inline Prediction macros for easy use! You can also use the DataLoader for fast data parsing from various sources like CSV, JSON, etc.!
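For a feel of what an inline constructor macro of this kind does, here is a macro_rules! sketch; it is illustrative only and not the actual DSRs example!/prediction! macros.

```rust
use std::collections::HashMap;

// Illustrative only: a macro_rules! sketch of an inline example!-style
// constructor; not the actual DSRs macro.
macro_rules! example {
    ( $( $key:ident : $val:expr ),* $(,)? ) => {{
        let mut map: HashMap<String, String> = HashMap::new();
        $( map.insert(stringify!($key).to_string(), $val.to_string()); )*
        map
    }};
}

fn main() {
    let ex = example! { question: "2 + 2?", answer: "4" };
    println!("{:?}", ex);
}
```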