abhijitanand (@abhijit_ai)
Followers: 15 · Following: 12 · Media: 0 · Statuses: 23
               🌟 Explore the future of ranking models with our groundbreaking research here:  https://t.co/zO8zvFPCTL  (3/3) 
          
                
#DA along with contrastive losses maximise the benefits, leading to gains between 1.3% and 10.2% across various dataset sizes. 🌐 Not just limited to in-domain success, our models showcase remarkable robustness and generalisation when transferred to out-of-domain benchmarks. (2/3)
          
          
                
Amazing value for money ;-). Join us in Delft on November 27 for #dir2023, register soon #tudelft_ai. And thanks #sigir #siks for the generous support
Deadline for #DIR2023 registration is getting closer: Nov 19! Fees are:
- 25€ for senior researchers
- 15€ for students
- SIKS students may qualify for FREE registration
Note that registration is independent from contribution. Secure your spot at
          
                
             Reminder! #DIR2023 is almost upon us 😉 Interested in presenting your published work, emerging research direction, & even resources of interest to #IR community during special sessions? Submit the contribution form by October 14. More details:  https://t.co/7Obqll9xUR 
          
          
                
             Context Aware Query Rewriting for Text Rankers using LLM Proposes context-aware query rewriting with LLMs during training to improve ranking, avoiding expensive LLM inference during query processing. 📝  https://t.co/gqZFqE67ym 
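The training-time idea could be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: `rewrite_query`, the context format, and the data layout are all hypothetical. The point is that the (expensive) LLM call happens once, offline, before ranker training, so nothing costly runs at query time:

```python
def rewrite_query(query: str, context: list[str]) -> str:
    """Hypothetical stand-in for an LLM rewriter: in the paper's setting an
    LLM would expand the query using available context; here we simply
    append the context terms to illustrate the offline data flow."""
    return f"{query} {' '.join(context)}".strip()

def build_training_set(raw_examples, contexts):
    """Rewrite queries once, offline, before ranker training.
    raw_examples: list of (query, document, label) triples.
    contexts: mapping from query to its context terms."""
    return [
        (rewrite_query(q, contexts.get(q, [])), doc, label)
        for q, doc, label in raw_examples
    ]

# At serving time the ranker scores the original query directly;
# no LLM call is needed during query processing.
```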
          
          
                
If you want to work on XAI for LLMs and fact-checking at @IAI_group @UniStavanger, consider applying for this position, or please forward it to someone who may be interested.
          
                
In a second step, we show that knowing where the ability is best encoded can be used to train better ranking models. To do so, we devise an MTL setup with the ranking objective on the last layer, while switching which layer carries the ability objective (e.g., BM25)
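The MTL setup described above could look roughly like this in PyTorch. A hypothetical skeleton, not the paper's implementation: the probe layer index, head shapes, and loss weighting are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiTaskRanker(nn.Module):
    """Sketch of the MTL probing setup: the ranking objective sits on the
    final layer's [CLS] vector, while an auxiliary 'ability' head (e.g.
    BM25-score regression) is attached to a chosen intermediate layer."""

    def __init__(self, encoder, ability_layer=6, hidden=768):
        super().__init__()
        self.encoder = encoder              # any BERT-style encoder exposing hidden_states
        self.ability_layer = ability_layer  # which intermediate layer to probe
        self.rank_head = nn.Linear(hidden, 1)     # relevance score
        self.ability_head = nn.Linear(hidden, 1)  # e.g. predicted BM25 score

    def forward(self, input_ids, attention_mask=None):
        out = self.encoder(input_ids, attention_mask=attention_mask,
                           output_hidden_states=True)
        last_cls = out.hidden_states[-1][:, 0, :]                 # [CLS], last layer
        mid_cls = out.hidden_states[self.ability_layer][:, 0, :]  # [CLS], probe layer
        return (self.rank_head(last_cls).squeeze(-1),
                self.ability_head(mid_cls).squeeze(-1))

def mtl_loss(rank_score, rank_label, ability_score, ability_target, alpha=0.5):
    """Joint objective: ranking loss on the last layer plus a weighted
    regression loss on the ability head (alpha is an assumed weight)."""
    rank_loss = nn.functional.binary_cross_entropy_with_logits(rank_score, rank_label)
    ability_loss = nn.functional.mse_loss(ability_score, ability_target)
    return rank_loss + alpha * ability_loss
```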
          
                
             When probing BERT rankers for ranking abilities - such as the ability to estimate BM25 scores - we find these abilities to be best captured at intermediate layers 
          
                
             Excited to be at #ECIR2023 to present our paper "Probing BERT for Ranking Abilities" with Fabian Beringer, @abhijit_ai and @run4avi! Paper: 
          
            
            link.springer.com
              Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are...
            
                
             Happy to share our survey about explainable IR, comments are appreciated. @run4avi @maxidahl @JonasWallat @YumengWang13 @Joshua_Ghost
          
          
          
                
             Explainable Information Retrieval: A Survey 🔗: 
          
            
            arxiv.org
              Explainable information retrieval is an emerging research area aiming to make transparent and trustworthy information retrieval systems. Given the increasing use of complex machine learning models...
            
                
Today @run4avi shares a bit of the history of #InformationRetrieval with @tudelft Web Science & Engineering students -- and I get to visit his class and take a stroll down IR memory lane 😉 #aGoodDayAtTheOffice
          
          
                
             Here are the slides from my talk #sigir2022
               https://t.co/w6vBwBQeIn  A real pleasure to speak about this!
            
          
                
A proud advisor moment for me 😇 ... You fully deserved it @YumengWang13 .. Cheers to our future adventures.
           So glad to win this prize for my master thesis! Thanks for your help and support @LijunLyu and @run4avi, it is a good start of this wonderful journey! Cheers☺️ 
            
                
And we are underway... #xaiss starts with the keynote from @MihaelaVDS on New frontiers in ML interpretability.
          
                
I am organising a summer school for Explainable AI. We have a session on explainable IR as well 😀. Register if you want a fun summer school with amazing talks and socials. Link:  https://t.co/hfcxfCOWtc  If you’re interested, I am attending #sigir2022
          
           Don't forget to register for @SoBigData summer school on #eXplainableAI at @tudelft
               https://t.co/wiN1l6QtM8 
            
            
                
Three papers from my group in ICTIR and SIGIR .. 1/ with @Yumeng963 @LijunLyu, we investigate the brittleness of neural rankers.. fun fact: recurring adversarial words like “acceptable” demote relevant documents .. #ictir2022 @l3s_luh @tudelft
          
           Are BERT rankers robust to adversarial attacks? Check out our study on BERT rankers with adversarial document perturbations which also exposes potential biases on recurring tokens and topic preferences. paper:  https://t.co/Nta07PCaDU  repo:  https://t.co/JBI1OH3PfJ 
              #ICTIR2022
            
          
                
2/ we find that simple data augmentation schemes improve performance on a wide variety of small datasets. Interestingly, our data augmentation only works together with supervised contrastive losses (SCL) for ranking
          
                
2/ Wanna train neural re-rankers but only have small training data? With @abhijit_ai @mr_jleo @krudra5 we propose supervised contrastive losses with data augmentation for training cross-encoders. Paper:
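The core of a supervised contrastive loss can be sketched as below. This is the generic formulation (Khosla et al. style), not necessarily the exact variant in the paper; the batch construction, labelling scheme, and temperature are assumptions. Documents sharing a label (e.g. relevant to the same query) are pulled together in embedding space, all others pushed apart:

```python
import torch
import torch.nn.functional as F

def sup_con_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.
    embeddings: (n, d) tensor; labels: (n,) tensor of class ids;
    tau: temperature (assumed value)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                                  # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))        # exclude self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)    # log-softmax over the batch
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid -inf * 0 = nan below
    pos_count = pos_mask.sum(1)
    valid = pos_count > 0                                  # anchors with >= 1 positive
    loss = -(log_prob * pos_mask.float()).sum(1)[valid] / pos_count[valid]
    return loss.mean()
```

Minimising this on augmented copies of the same (query, document) pair is one way the augmentation and the contrastive objective interact, which may explain why the gains appear only when the two are combined.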
          
                