Krishna Acharya Profile
Krishna Acharya

@kvachai

Followers
65
Following
293
Media
2
Statuses
37

Ph.D. candidate @GeorgiaTech. Recommender systems, algorithmic fairness, differential privacy.

Atlanta, GA
Joined May 2018
@kvachai
Krishna Acharya
5 months
9/9 🗓️ I’ll be at the KDD Workshop on Online and Adaptive Recommender Systems (OARS) — happy to chat about this work, online and in person in Toronto! #GLoSS #KDD2025 #OARS #LLM #RecommenderSystems #SemanticSearch #DenseRetrieval #LoRA #LLaMA3
0
0
3
@kvachai
Krishna Acharya
5 months
8/9 Segment-wise evaluation shows:
📦 Strong gains for cold-start users in Toys and Sports
🧴 Benefits from longer user histories in Beauty
This highlights GLoSS's robustness across interaction lengths. #ColdStart #Personalization
1
0
1
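The segment-wise evaluation above can be sketched as follows. This is an illustrative example, not the paper's evaluation code: the segment boundaries, the `segment_metric` helper, and the toy data are all assumptions; only the idea of bucketing users by history length before averaging a per-user metric comes from the tweet.

```python
from collections import defaultdict

def segment_metric(histories, hits, boundaries=(5, 20)):
    """Average a per-user hit indicator within history-length segments."""
    buckets = defaultdict(list)
    for user, seq in histories.items():
        n = len(seq)
        if n <= boundaries[0]:
            seg = "cold-start"
        elif n <= boundaries[1]:
            seg = "medium"
        else:
            seg = "long"
        buckets[seg].append(hits[user])
    return {seg: sum(v) / len(v) for seg, v in buckets.items()}

# Toy data: hits[u] = 1 if the model retrieved u's held-out item, else 0.
histories = {"u1": ["a", "b"], "u2": ["a"] * 30, "u3": ["b"] * 10}
hits = {"u1": 1, "u2": 1, "u3": 0}
print(segment_metric(histories, hits))
# → {'cold-start': 1.0, 'long': 1.0, 'medium': 0.0}
```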
@kvachai
Krishna Acharya
5 months
7/9 GLoSS also outperforms LLM-based recommenders: P5, GPT4Rec, LlamaRec, and E4SRec, with Recall@5 gains of +4.3%, +22.8%, +29.5% respectively. #GPT4Rec #LlamaRec #P5 #E4SRec #LLM
1
0
2
@kvachai
Krishna Acharya
5 months
6/9 📈 On the Amazon Beauty, Toys, and Sports datasets, GLoSS improves Recall@5 by +33.3%, +52.8%, and +15.2%, and NDCG@5 by +30.0%, +42.6%, and +16.1% over ID-based baselines.
1
0
2
@kvachai
Krishna Acharya
5 months
5/9 For query generation, we fine-tune 4-bit quantized LLaMA-3 models (1B, 3B, 8B) with LoRA, enabling efficient training on a single RTX A5000 via the Unsloth AI library. For dense retrieval, we use e5-small-v2 as the text encoder. #LoRA #LLaMA3 #Unsloth
1
0
2
@kvachai
Krishna Acharya
5 months
4/9 Prior LLM-based recommenders often rely on lexical search methods like BM25. GLoSS instead uses dense retrieval, going beyond frequency-based token overlap to capture deeper semantic relevance. #SemanticSearch #DenseRetrieval
1
0
2
@kvachai
Krishna Acharya
5 months
3/9 Classic ID-based approaches like SASRec and BERT4Rec, and SemanticID-based models like TIGER, are effective, but they usually require retraining when new items are added and struggle to generalize beyond patterns seen in training data, especially without rich metadata.
1
0
2
@kvachai
Krishna Acharya
5 months
2/9 GLoSS is a generative recommendation framework that integrates LLMs with semantic search (aka dense retrieval) for sequential recommendation. #LLM #RecommenderSystems #DenseRetrieval
1
0
2
@kvachai
Krishna Acharya
5 months
1/9 Happy to share that our paper GLoSS: Generative Language Models with Semantic Search for Sequential Recommendation is accepted at the KDD OARS workshop! 🎉 Paper, code: https://t.co/TrgHgCnuPC This is joint work with my wonderful collaborators @asash and Juba Ziani.
github.com
GLoSS: Generative Language Models with Semantic Search for Sequential Recommendation - krishnacharya/GLoSS
1
0
6
@kvachai
Krishna Acharya
7 months
3/3 Among these baselines, a classic retrieval approach (using BM25) based on the text of the last item performs the best. I also explore how often-overlooked steps, like failing to deduplicate exact user-item interactions, can lead to significant inflation in metrics.
0
0
0
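The deduplication pitfall mentioned above can be sketched in a few lines. This is an illustrative example, not the blog post's code: repeated (user, item) pairs in the log can let an evaluation "reward" predicting an item the user already interacted with, inflating recall-style metrics, so exact duplicates are dropped before splitting.

```python
def dedupe(interactions):
    """Keep only the first occurrence of each exact (user, item) pair."""
    seen, out = set(), []
    for user, item, ts in interactions:
        if (user, item) not in seen:
            seen.add((user, item))
            out.append((user, item, ts))
    return out

logs = [("u1", "i1", 1), ("u1", "i2", 2), ("u1", "i1", 3), ("u2", "i1", 1)]
print(dedupe(logs))
# → [('u1', 'i1', 1), ('u1', 'i2', 2), ('u2', 'i1', 1)]
```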
@kvachai
Krishna Acharya
7 months
2/3 In this post, I dive into different model types—from ID-based to fully metadata-based models, key preprocessing steps, the leave-one-item-out split, evaluation metrics, and four baselines that any trained recommender should aim to beat.
1
0
0
@asash
Aleksandr V. Petrov
7 months
Now listening to David Wardrope who presents our IR4Good paper (work done in Amazon; @kvachai is the lead author here). Paper link: https://t.co/36VKVU0Aep #ECIR2025
1
1
14
@nandofioretto
Nando Fioretto
9 months
The Privacy Preserving AI workshop is back! And is happening on Monday. I am excited about our program and lineup of invited speakers! I hope to see many of you there: https://t.co/FnR8lkguBP
0
7
20
@kvachai
Krishna Acharya
10 months
Thrilled to share that my paper "Improving Minimax Group Fairness in Sequential Recommendation" has been accepted @ECIR2025! 🎉 In the IR4Good track. This is joint work with David Wardrope, Timos Korres, @asash, and @andersuhrenholt during my Amazon internship. More soon!
0
0
4
@abeirami
Ahmad Beirami
1 year
If a paper clears the bar, give it a score ≥6. Here is how I think about ratings:
- Should be oral? 8/9
- Should be spotlight? 7/8
- Clears the acceptance bar? 6/7
- Could be accepted after minor revs? 4/5
- Could be accepted after major revs? 3/4
- Fundamentally flawed? 2/3
@abeirami
Ahmad Beirami
4 years
The question that a reviewer should ask themselves is: Does this paper take a gradient step in the right direction? Is the community better off with this paper published? If the answer is yes, then the recommendation should be to accept.
2
7
121
@nandofioretto
Nando Fioretto
1 year
Excited to share our work on data minimization for ML! The principle of data minimization is a cornerstone of global data protection regulations, but how do we implement it in ML contexts? 🧵: Let's dive into some insights. 🔗: https://t.co/jb58uIr8Uf
2
9
44
@etash_guha
Etash Guha
2 years
I am thrilled to announce that my paper “One Shot Inverse Reinforcement Learning for Stochastic Linear Bandits” was accepted at #UAI2024! Many thanks to Professors Ashwin Pananjady and Vidya Muthukumar, @kvachai, and Jim James! Thread coming soon! Paper:
1
2
9