Krishna Acharya
@kvachai
Followers: 65 · Following: 293 · Media: 2 · Statuses: 37
Ph.D. candidate @GeorgiaTech. Recommender systems, algorithmic fairness, differential privacy.
Atlanta, GA
Joined May 2018
9/9 🗓️ I’ll be at the KDD Workshop on Online and Adaptive Recommender Systems (OARS) — happy to chat about this work, online and in person in Toronto! #GLoSS #KDD2025 #OARS #LLM #RecommenderSystems #SemanticSearch #DenseRetrieval #LoRA #LLaMA3
8/9 Segment-wise evaluation shows: 📦 Strong gains for cold-start users in Toys and Sports 🧴 Benefits from longer user histories in Beauty This highlights GLoSS’s robustness across interaction lengths. #ColdStart #Personalization
4/9 Prior LLM-based recommenders often rely on lexical search methods like BM25. GLoSS instead uses dense retrieval, going beyond frequency-based token overlap to capture deeper semantic relevance. #SemanticSearch #DenseRetrieval
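The contrast above can be sketched in a few lines. This is an illustrative toy, not the GLoSS pipeline: the hand-written vectors below stand in for real encoder/LLM embeddings, and the item names are made up.

```python
# Dense retrieval sketch: rank items by cosine similarity in embedding space.
import numpy as np

def cosine_top_k(query_vec, item_vecs, k=2):
    """Return indices of the top-k items by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per item
    return np.argsort(-scores)[:k], scores

# Toy embeddings: "sneakers" and "running shoes" sit close together in
# embedding space despite sharing no tokens -- exactly the case that
# frequency-based lexical matching (e.g. BM25) misses.
items = ["running shoes", "blender", "yoga mat"]
item_vecs = np.array([[0.9, 0.1, 0.0],
                      [0.0, 0.1, 0.9],
                      [0.4, 0.8, 0.1]])
query_vec = np.array([0.85, 0.2, 0.05])   # stand-in embedding of "sneakers"

top, scores = cosine_top_k(query_vec, item_vecs, k=1)
print(items[top[0]])   # "running shoes" ranks first
```

BM25 would score "sneakers" against "running shoes" at zero (no shared tokens); the dense scores order them correctly.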
3/9 Classic ID-based approaches like SASRec and BERT4Rec, and SemanticID-based models like TIGER, are effective, but they usually require retraining when new items are added and struggle to generalize beyond patterns seen in training data, especially without rich metadata.
2/9 GLoSS is a generative recommendation framework that integrates LLMs with semantic search (aka dense retrieval) for sequential recommendation. #LLM #RecommenderSystems #DenseRetrieval
1/9 Happy to share that our paper GLoSS: Generative Language Models with Semantic Search for Sequential Recommendation is accepted at the KDD OARS workshop! 🎉 Paper, code: https://t.co/TrgHgCnuPC This is joint work with my wonderful collaborators @asash and Juba Ziani.
github.com
GLoSS: Generative Language Models with Semantic Search for Sequential Recommendation - krishnacharya/GLoSS
3/3 Among these baselines, a classic retrieval approach (BM25 over the text of the user's last item) performs best. I also explore how often-overlooked steps, like failing to deduplicate exact user-item interactions, can significantly inflate metrics.
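The deduplication step can be sketched as below. This is a toy illustration of the idea, not the post's actual preprocessing code: repeated exact (user, item) pairs let a model "predict" an item the user already interacted with, which inflates hit-rate-style metrics.

```python
# Sketch: drop exact duplicate (user, item) interactions, keeping the
# first occurrence and preserving chronological order.

def dedup_interactions(events):
    """events: list of (user, item) pairs in time order."""
    seen, out = set(), []
    for user, item in events:
        if (user, item) not in seen:
            seen.add((user, item))
            out.append((user, item))
    return out

events = [("u1", "i5"), ("u1", "i5"), ("u1", "i7"), ("u2", "i5")]
print(dedup_interactions(events))
# [('u1', 'i5'), ('u1', 'i7'), ('u2', 'i5')]
```

Note this removes only *exact* repeats; repeat purchases that are genuinely informative would need a more careful policy.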
2/3 In this post, I dive into different model types (from ID-based to fully metadata-based), key preprocessing steps, the leave-one-item-out split, evaluation metrics, and four baselines that any trained recommender should aim to beat.
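The leave-one-item-out split mentioned above has a very small core, sketched here under the usual convention (a toy, not the post's code): per user, the last item of the chronologically ordered history is the test target, the second-to-last is validation, and the rest is the training sequence.

```python
# Sketch of the leave-one-item-out split for one user's ordered history.

def leave_one_out(history):
    """history: list of item IDs in time order -> (train, val, test)."""
    if len(history) < 3:
        return history, None, None      # too short to split
    return history[:-2], history[-2], history[-1]

train, val, test = leave_one_out(["i1", "i2", "i3", "i4"])
print(train, val, test)   # ['i1', 'i2'] i3 i4
```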
1/3 Happy to share that I’ve started writing! Check out my first post on generative recommendation here: 🔗 https://t.co/AKMKpbKT9S
krishnacharya.github.io
This post introduces sequential recommenders—what they are, their key types, and common challenges. We'll walk through key preprocessing steps, evaluation splits, metrics, and baselines.
Now listening to David Wardrope presenting our IR4Good paper (work done at Amazon; @kvachai is the lead author here). Paper link: https://t.co/36VKVU0Aep
#ECIR2025
The Privacy Preserving AI workshop is back! And is happening on Monday. I am excited about our program and lineup of invited speakers! I hope to see many of you there: https://t.co/FnR8lkguBP
Thrilled to share that my paper "Improving Minimax Group Fairness in Sequential Recommendation" has been accepted @ECIR2025! 🎉 In the IR4Good track. This is joint work with David Wardrope, Timos Korres, @asash, and @andersuhrenholt during my Amazon internship. More soon!
If a paper clears the bar, give it a score ≥6. Here is how I think about ratings:
- Should be oral? 8/9
- Should be spotlight? 7/8
- Clears the acceptance bar? 6/7
- Could be accepted after minor revisions? 4/5
- Could be accepted after major revisions? 3/4
- Fundamentally flawed? 2/3
The question that a reviewer should ask themselves is: Does this paper take a gradient step in the right direction? Is the community better off with this paper published? If the answer is yes, then the recommendation should be to accept.
Excited to share our work on data minimization for ML! The principle of data minimization is a cornerstone of global data protection regulations, but how do we implement it in ML contexts? 🧵: Let's dive into some insights. 🔗: https://t.co/jb58uIr8Uf