abhijitanand Profile
abhijitanand

@abhijit_ai

Followers: 15 · Following: 12 · Media: 0 · Statuses: 23

IR Researcher @ L3S Research Center

Joined April 2022
@abhijit_ai
abhijitanand
2 years
📊 Exciting findings in our latest #TOIS paper! 🚀 We investigate the power of data augmentation (#DA) and contrastive losses in boosting ranking models. We introduce supervised and unsupervised augmentation methods to enhance sample efficiency. 💡 (1/3)
@abhijit_ai
abhijitanand
2 years
🌟 Explore the future of ranking models with our groundbreaking research here: https://t.co/zO8zvFPCTL (3/3)
@abhijit_ai
abhijitanand
2 years
#DA combined with contrastive losses maximises the benefits, leading to gains between 1.3% and 10.2% across various dataset sizes. 🌐 Not limited to in-domain success, our models also show remarkable robustness and generalisation when transferred to out-of-domain benchmarks. (2/3)
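The thread doesn't spell out the unsupervised augmentations, but a minimal sketch of the general idea — simple text perturbations such as word dropout and local shuffling that create alternative views of a document for contrastive training — might look like this; the function names and parameters here are illustrative assumptions, not the paper's API:

```python
import random

def word_dropout(text, p=0.1, rng=None):
    """Randomly drop words -- a simple unsupervised augmentation."""
    rng = rng or random.Random(0)
    words = text.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else text

def local_shuffle(text, window=3, rng=None):
    """Shuffle words within small windows, roughly preserving topicality."""
    rng = rng or random.Random(0)
    words = text.split()
    out = []
    for i in range(0, len(words), window):
        chunk = words[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return " ".join(out)

doc = "neural rankers benefit from data augmentation and contrastive training"
views = [word_dropout(doc, p=0.2), local_shuffle(doc)]
```

Each augmented view keeps the document's vocabulary (dropout only removes words; shuffling only reorders them), which is what makes it usable as a positive sample in a contrastive objective.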
@run4avi
Avishek Anand
2 years
Amazing value for money ;-). Join us in Delft on NOVEMBER 27 for #dir2023, register soon #tudelft_ai. And thanks #sigir #siks for the generous support.
@corsi_mat
Matteo
2 years
Deadline for #DIR2023 registration is getting closer: Nov 19! Fees are:
- 25€ for senior researchers
- 15€ for students
- SIKS students may qualify for FREE registration
Note that registration is independent from contribution. Secure your spot at
@DrCh0le
Sole Pera
2 years
Reminder! #DIR2023 is almost upon us 😉 Interested in presenting your published work, emerging research direction, & even resources of interest to #IR community during special sessions? Submit the contribution form by October 14. More details: https://t.co/7Obqll9xUR
@_reachsumit
Sumit
2 years
Context Aware Query Rewriting for Text Rankers using LLM
Proposes context-aware query rewriting with LLMs during training to improve ranking, avoiding expensive LLM inference during query processing. 📝 https://t.co/gqZFqE67ym
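The key design point in that summary is the training/serving split: the LLM rewrite happens only at training time, so query processing stays cheap. A toy sketch of that split, with a hypothetical precomputed rewrite table standing in for offline LLM calls (all names here are illustrative assumptions, not the paper's code):

```python
# Hypothetical table of offline LLM rewrites; a real system would generate
# these with an LLM before or during training, never at query time.
LLM_REWRITES = {
    "throat cancer symptoms": "what are the early warning symptoms of throat cancer",
}

def training_query(query: str) -> str:
    """Training time: substitute the context-aware LLM rewrite when available."""
    return LLM_REWRITES.get(query, query)

def serving_query(query: str) -> str:
    """Query time: no LLM in the loop -- the ranker sees the raw query."""
    return query
```

The ranker learns from the richer rewritten queries, while inference latency is unchanged because `serving_query` never touches the LLM.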
@vinaysetty
Vinay Setty
2 years
If you want to work on XAI for LLMs and fact-checking @IAI_group @UniStavanger, consider applying for this position or please forward it to someone who may be interested.
@JonasWallat
Jonas Wallat
3 years
In a second step, we show that knowing where an ability is best encoded can be used to train better ranking models. To do so, we devise an MTL setup with the ranking objective on the last layer, while attaching the objective for the ability (e.g., BM25) to different layers.
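A minimal numpy sketch of such a multi-task setup: a ranking loss on the last layer's representations plus an auxiliary BM25-regression loss on an intermediate layer. The toy random features, the choice of layer 3, the MSE losses, and the mixing weight are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-layer [CLS] representations of 4 query-document pairs.
n_pairs, dim, n_layers = 4, 8, 6
layers = [rng.normal(size=(n_pairs, dim)) for _ in range(n_layers)]

rel_labels = np.array([1.0, 0.0, 1.0, 0.0])  # relevance targets (main task)
bm25_scores = rng.normal(size=n_pairs)       # ability targets (auxiliary task)

w_rank = rng.normal(size=dim)  # ranking head, attached to the last layer
w_bm25 = rng.normal(size=dim)  # BM25 head, attached to an intermediate layer

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

rank_loss = mse(layers[-1] @ w_rank, rel_labels)  # main objective, last layer
bm25_loss = mse(layers[3] @ w_bm25, bm25_scores)  # auxiliary objective, layer 3
alpha = 0.5  # assumed task-mixing weight
total_loss = rank_loss + alpha * bm25_loss
```

Moving the auxiliary head across layers (the index `3` above) is the "switch the layers" part: the probing results tell you which layer the ability head should supervise.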
@JonasWallat
Jonas Wallat
3 years
When probing BERT rankers for ranking abilities - such as the ability to estimate BM25 scores - we find these abilities to be best captured at intermediate layers
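A probe of this kind can be sketched as a least-squares linear regressor fitted per layer, with the best layer being the one whose representations predict the target (here, BM25) with lowest error. The data below is synthetic, with layer 3 constructed to encode the target, purely to illustrate the "best at intermediate layers" finding; it is not the paper's probing setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, n_layers = 64, 16, 6

# Synthetic features: layer 3 is built to (almost) linearly encode the target.
signal = rng.normal(size=(n, dim))
bm25 = signal @ rng.normal(size=dim)          # "BM25" target, linear in signal
layers = [rng.normal(size=(n, dim)) for _ in range(n_layers)]
layers[3] = signal + 0.01 * rng.normal(size=(n, dim))

def probe_error(X, y):
    """Fit a least-squares linear probe and return its training MSE."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

errors = [probe_error(X, bm25) for X in layers]
best_layer = int(np.argmin(errors))  # the intermediate layer wins by construction
```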
@LijunLyu
Lijun Lyu
3 years
Happy to share our survey about explainable IR, comments are appreciated. @run4avi @maxidahl @JonasWallat @YumengWang13 @Joshua_Ghost
@arXiv_cs_ir
arXiv
3 years
Explainable Information Retrieval: A Survey 🔗:
@DrCh0le
Sole Pera
3 years
Today @run4avi shares a bit of the history of #InformationRetrieval with @tudelft Web Science & Engineering students -- and I get to visit his class and take a stroll down IR memory lane 😉 #aGoodDayAtTheOffice
@julian_urbano
Julián Urbano
3 years
Here is the video from my #sigir2022 talk "your #phd and you" https://t.co/tTHJkayma3
@julian_urbano
Julián Urbano
3 years
Here are the slides from my talk #sigir2022 https://t.co/w6vBwBQeIn A real pleasure to speak about this!
@run4avi
Avishek Anand
3 years
A proud advisor moment for me 😇 ... You fully well deserved it @YumengWang13 .. Cheers to our future adventures.
@YumengWang13
Yumeng Wang
3 years
So glad to win this prize for my master thesis! Thanks for your help and support @LijunLyu and @run4avi, it is a good start of this wonderful journey! Cheers☺️
@run4avi
Avishek Anand
3 years
And we are underway! #xaiss starts with the keynote from @MihaelaVDS on New frontiers in ML interpretability.
@run4avi
Avishek Anand
3 years
I am organising a summer school for Explainable AI. We have a session on explainable IR as well 😀. Register if you want a fun summer school with amazing talks and socials. Link: https://t.co/hfcxfCOWtc If you’re interested, I am also attending #sigir2022.
@l3s_luh
L3S Research Center @L3S_Research_Center@wisskomm
3 years
Don't forget to register for @SoBigData summer school on #eXplainableAI at @tudelft https://t.co/wiN1l6QtM8
@run4avi
Avishek Anand
3 years
Three papers from my group at ICTIR and SIGIR. 1/ With @Yumeng963 @LijunLyu, we investigate the brittleness of neural rankers. Fun fact: recurring adversarial words like “acceptable” demote relevant documents. #ictir2022 @l3s_luh @tudelft
@Yumeng963
Yumeng Wang
3 years
Are BERT rankers robust to adversarial attacks? Check out our study on BERT rankers with adversarial document perturbations which also exposes potential biases on recurring tokens and topic preferences. paper: https://t.co/Nta07PCaDU repo: https://t.co/JBI1OH3PfJ #ICTIR2022
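The attack behind such perturbations can be sketched as a greedy word substitution that searches for the replacement hurting the ranker's score most. Here a simple lexical-overlap scorer stands in for the BERT ranker, and the trigger word, scorer, and greedy search are all simplifying assumptions, not the study's actual attack:

```python
# Toy lexical-overlap scorer standing in for a BERT ranker (an assumption):
# score = fraction of query terms present in the document.
def score(query, doc):
    q = set(query.split())
    d = set(doc.split())
    return len(q & d) / len(q)

def greedy_perturb(query, doc, trigger="acceptable"):
    """Replace the one word whose substitution by `trigger` drops the score most."""
    words = doc.split()
    best, best_score = doc, score(query, doc)
    for i in range(len(words)):
        cand = " ".join(words[:i] + [trigger] + words[i + 1:])
        s = score(query, cand)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score

q = "neural ranking models"
d = "neural models for ranking documents"
adv, s = greedy_perturb(q, d)
```

Swapping a query-matching word for a recurring trigger like “acceptable” lowers the document's score, mirroring the token-level bias the study exposes.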
@run4avi
Avishek Anand
3 years
2/ we find that simple data augmentation schemes improve performance on a wide variety of small datasets. Interestingly, our data augmentation only works together with ranking supervised contrastive losses (SCL).
@run4avi
Avishek Anand
3 years
2/ wanna train neural re-rankers but only have small training data? With @abhijit_ai @mr_jleo @krudra5 we propose supervised contrastive losses and data augmentation methods for training cross-encoders. Paper:
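A generic supervised contrastive loss over normalised embeddings can be sketched in numpy as below; this follows the standard SupCon formulation of Khosla et al. (pull together samples sharing a label, push apart the rest) and is not the exact ranking variant from the paper:

```python
import numpy as np

def sup_con_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalised embeddings."""
    z = np.asarray(embeddings, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                     # temperature-scaled cosine similarities
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                        # anchors without positives are skipped
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        total += -np.mean([sim[i, j] - log_denom for j in positives])
    return total / n
```

For ranking, the labels would group relevant versus non-relevant query-document pairs, so augmented views of relevant pairs act as extra positives for each other.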