Srishti
@_srishtiyadav
Followers: 1K · Following: 5K · Media: 129 · Statuses: 2K
@BelongieLab | @illc_amsterdam @ELLISforEurope | Board @WiCVworkshop | Prev intern @IBMResearch | AI Safety | Culture | Evaluations
Looking for FT positions from '26
Joined February 2012
            
Exciting news! The WiCV@ICCV 2025 Dinner & Mentorship event will be held in Honolulu! Oct 19, 2025 @ 7 PM. RSVP now (spots are limited!): https://t.co/6ORdRWS2gF Only confirmed guests via email will be admitted. More info: https://t.co/8z7XTToWgL
#WiCV #ICCV2025 #WomenInCV
          
          
            
            sites.google.com
The workshop is scheduled on Oct 19, 2025 from 13:00 to 17:05 in the Hawaii Convention Center. Workshop Agenda: 13:00–13:10 Welcome and Introduction; 13:10–13:35 Keynote speaker: Angela Dai, Technical...
            
It is PhD application season again! For those looking to do a PhD in AI, these are some useful resources: 1. Examples of statements of purpose (SOPs) for computer science PhD programs: https://t.co/Stz53ZiREM [1/4]
          
            
            cs-sop.notion.site
              cs-sop.org is a platform intended to help CS PhD applicants. It hosts a database of example statements of purpose (SoP) shared by previous applicants to Computer Science PhD programs.
            
2 papers each accepted by 2 reviewers out of 3, no strong arguments against the papers, the AC accepting the papers, and the final result is a reject with no additional explanation? Any insights into the @NeurIPSConf #NeurIPS2025 decision process this year?
          
"Build the web for agents, not agents for the web." This position paper argues that rather than forcing web agents to adapt to UIs designed for humans, we should develop a new interface optimized for web agents, which we call the Agentic Web Interface (AWI).
          
Come join us!
Technical practitioners & grads: join to build an LLM evaluation hub! Infra goals: share evaluation outputs & params, and query results across experiments. Perfect for hands-on folks ready to build tools the whole community can use. Join the EvalEval Coalition here:
            
Join: https://t.co/PezB5nHdRR More on the EvalEval Coalition: https://t.co/8G4WcGabhB Or ask to learn more.
          
            
            evalevalai.com
              We are a researcher community developing scientifically grounded research outputs and robust deployment infrastructure for broader impact evaluations.
            
Technical practitioners & grads: join to build an LLM evaluation hub! Infra goals: share evaluation outputs & params, and query results across experiments. Perfect for hands-on folks ready to build tools the whole community can use. Join the EvalEval Coalition here:
          
I'll be at my first @CVPR this week presenting our #Highlight paper on (hyperbolic) safety-awareness in VLMs along with my co-author @tobiapoppi! Friends on twitter, DM me if you are going too and let's catch up to discuss research and other banter! #CVPR2025
          
          
Upon graduation, I paused to reflect on what my PhD had truly taught me. Was it just how to write papers, respond to brutal reviewer comments, and survive without much sleep? Or did it leave a deeper imprint on me, beyond the metrics and milestones? Turns out, it did. A
          
We ran tens of thousands of dollars of benchmarking experiments to see whether @vllm_project or @lmsysorg SGLang was "faster". The result: across the models and sequence lengths we tried, their performance is nearly identical, with no clear patterns. ¯\_(ツ)_/¯
          
Are you working on multilingual, multicultural #LLMs? Interested in diverse & inclusive language modeling? Stay tuned for our MELT workshop, co-located with #COLM2025: https://t.co/fHWfh5a1AQ We welcome 2p (EA), 4p (short), and 8p (long) papers, as well as talented reviewers!
Introducing MELT Workshop 2025: Multilingual, Multicultural, and Equitable Language Technologies. A workshop on building inclusive, culturally-aware LLMs! Bridging the language divide in AI.
October 10, 2025 | Co-located with @COLM_conf
          
This work was done in collaboration with @mziizm @beyzaermis @stevebach and Julia Kreutzer, and took us nearly one year to complete. We are grateful for early-stage feedback from @HellinaNigatu @sbmaruf @CriMenghini @AlhamFikri @pjox13 @OjewaleV and @simi_97k
          
          
Multilingual safety training/eval is now standard practice, but a critical question remains: is multilingual safety actually solved? Our new survey with @Cohere_Labs answers this and dives deep into: the language gap in safety research, and future priority areas. Thread below.
          
This work was an amazing collaboration with Lauren (@nolauren), Maria Antoniak, Taylor Arnold, Jiaang (@jiaangli), Siddhesh (@whoSiddheshp), Antonia (@AntoniaKaramol), Stella Frank, Zhaochong (@ZhaochongAn), Negar (@negar_rz), Daniel (@daniel_hers), @SergeBelongie and @KatiaShutova
          
          
We find that decades of visual cultural studies offer powerful ways to decode cultural meaning in images! Rather than proposing yet another benchmark, our goal was to revisit and re-contextualize foundational theories of culture to pave the way for more inclusive frameworks.
          
We propose 5 frameworks to evaluate culture in VLMs: 1. Processual Grounding: who defines culture? 2. Material Culture: what is represented? 3. Symbolic Encoding: layered meaning. 4. Contextual Interpretation: who understands & frames meaning? 5. Temporality: when is culture situated?
          
In this paper, we call for integrating methods from 3 fields: Cultural Studies (how values, beliefs & identities are shaped through cultural forms like images), Semiotics (how signs & symbols convey meaning), and Visual Studies (how visuals communicate across time & place).
          
Modern Vision-Language Models (VLMs) often fail at cultural understanding. But culture isn't just recognizing things like food, clothes, rituals, etc. It's how meaning is made and understood; it is also about symbolism, context, and how these things evolve over time.
          
New preprint: "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory". We review recent works on culture in VLMs and argue for deeper grounding in cultural theory to enable more inclusive evaluations. Paper: https://t.co/9AoRHTFG58
          
          
Join us at CVPR 2025 for our workshop VLMs-4-All: Vision-Language Models for All! We're tackling the challenge of building geo-diverse, culturally aware VLMs. If you're passionate about inclusivity in AI, we'd love your participation! #CVPR2025 #VLMs4All
Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025! https://t.co/2eqS363p0u
            
            