LCS2 Lab

@lcs2lab

Followers 1K · Following 234 · Media 145 · Statuses 542

Lab. for Computational Social Systems, a group led by @Tanmoy_Chak working on #SocialComputing #GraphMining & #NLProc

New Delhi, India
Joined July 2018
@lcs2lab
LCS2 Lab
1 day
πŸ’‘ By reframing #TaxonomyExpansion as lineage-oriented #Reasoning, #LORex moves beyond brittle pipelines. Structured knowledge deserves structured thinking. #KnowledgeGraphs #NLProc.
@lcs2lab
LCS2 Lab
1 day
#LORex outperforms 12 strong baselines across 4 benchmarks using TEMPORA’s lineage-aware ranking.
📈 +12% taxonomy expansion accuracy
📏 +5% Wu & Palmer similarity
🎯 +21.3% Hit@k
Without any LLM fine-tuning, #LORex is efficient, scalable, and plug-and-play across domains.
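Wu & Palmer similarity rewards predicted parents that sit close to the gold parent in the hierarchy; it is based on the depth of their lowest common ancestor. A quick, illustrative way to compute it is NLTK's WordNet interface; the paper's benchmarks use their own taxonomies, so this is only a stand-in for the metric, and the two synsets below are hypothetical examples.

```python
# Illustrative Wu & Palmer similarity on WordNet via NLTK (not the paper's
# evaluation code). Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

predicted = wn.synset("retriever.n.01")    # hypothetical predicted parent
gold = wn.synset("hunting_dog.n.01")       # hypothetical gold parent

# wup = 2 * depth(LCS) / (depth(predicted) + depth(gold))
print(predicted.wup_similarity(gold))
```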
@lcs2lab
LCS2 Lab
1 day
🧠 Hierarchies drive recommender systems, IR, and semantic search, but taxonomy expansion remains noisy and error-prone. #LORex first ranks candidate parents via TEMPORA, then chunks them so an LLM can reason over parent-child links with contextual verification.
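For readers who want the shape of that pipeline, here is a minimal sketch of a rank-then-chunk-then-verify loop. The callables `tempora_rank`, `llm_pick_parent`, and `verify_context` are hypothetical stand-ins for TEMPORA ranking, the LLM reasoning step, and contextual verification; this is not the released implementation.

```python
# Minimal sketch of a lineage-oriented expansion loop (assumptions labeled above).
from typing import Callable, List, Optional

def expand_taxonomy(
    query: str,
    candidate_parents: List[str],
    tempora_rank: Callable[[str, List[str]], List[str]],
    llm_pick_parent: Callable[[str, List[str]], Optional[str]],
    verify_context: Callable[[str, str], bool],
    chunk_size: int = 10,
) -> Optional[str]:
    """Attach `query` under the best candidate parent, or return None."""
    ranked = tempora_rank(query, candidate_parents)   # discriminative ranking
    for start in range(0, len(ranked), chunk_size):   # chunked LLM reasoning
        chunk = ranked[start:start + chunk_size]
        parent = llm_pick_parent(query, chunk)        # generative reasoning step
        if parent is not None and verify_context(query, parent):
            return parent                             # accept a verified parent
    return None
```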
@lcs2lab
LCS2 Lab
1 day
πŸ“ Rank, Chunk and Expand: Lineage-Oriented Reasoning for Taxonomy Expansion.πŸ‘₯ Sahil Mishra @sahilmishra0012, Kumar Arjun, Tanmoy Chakraborty @Tanmoy_Chak .πŸ“Œ Paper: πŸ’Ύ Code:
aclanthology.org
Sahil Mishra, Kumar Arjun, Tanmoy Chakraborty. Findings of the Association for Computational Linguistics: ACL 2025. 2025.
@lcs2lab
LCS2 Lab
1 day
🚨 New #ACL2025Findings Paper 🚨
How can we accurately grow taxonomies without brittle heuristics or fine-tuned LLMs? Our new work introduces LORex, a plug-and-play framework that combines discriminative ranking with generative reasoning to expand taxonomies scalably & faithfully.
@lcs2lab
LCS2 Lab
1 day
RT @Tanmoy_Chak: I am attending #ACL2025. Happy to catch up to discuss our lab's work (@lcs2lab) and opportunities at @iitdelhi. I also lo….
@lcs2lab
LCS2 Lab
2 days
πŸ“Œ Can we trust a model that gets the right output for the wrong reason? As KD becomes standard for compressing LMs, understanding how knowledge transfers is vital. Our findings urge a rethink of trust and interpretability in distilled models. #ACL2025 #NLProc #TrustworthyAI.
@lcs2lab
LCS2 Lab
2 days
🧠 Distilling into larger students yields limited gains. But more crucially, many students produce the correct answers without replicating the teacher’s reasoning. This exposes a key dissonance: KD boosts accuracy, but not necessarily reasoning fidelity.
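One way to see that dissonance concretely is to score answers and rationales separately: accuracy against gold labels, and fidelity as agreement between student and teacher rationales. The token-overlap F1 below is an assumed proxy for fidelity, not necessarily the metric used in the paper.

```python
# Rough sketch: separate answer accuracy from reasoning fidelity.
from collections import Counter
from typing import List

def answer_accuracy(student_answers: List[str], gold_answers: List[str]) -> float:
    correct = sum(s.strip() == g.strip() for s, g in zip(student_answers, gold_answers))
    return correct / len(gold_answers)

def rationale_f1(student_rationale: str, teacher_rationale: str) -> float:
    # Token-overlap F1 between rationales, used here as an assumed fidelity proxy.
    s, t = Counter(student_rationale.split()), Counter(teacher_rationale.split())
    overlap = sum((s & t).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(s.values())
    recall = overlap / sum(t.values())
    return 2 * precision * recall / (precision + recall)

def reasoning_fidelity(student_rationales: List[str], teacher_rationales: List[str]) -> float:
    scores = [rationale_f1(s, t) for s, t in zip(student_rationales, teacher_rationales)]
    return sum(scores) / len(scores)
```

A student can score high on the first function and low on the second, which is exactly the accuracy-versus-fidelity gap described above.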
@lcs2lab
LCS2 Lab
2 days
πŸ“ˆ We conduct a large-scale study distilling 7B–14B LMs into 0.5B–7B models on 14 complex zero-shot reasoning tasks. KD boosts small model performance by up to 10%. However, teacher size β‰  student performance. Task-specific expertise matters more.
@lcs2lab
LCS2 Lab
2 days
πŸ“ On the Generalization vs Fidelity Paradox in Knowledge Distillation.πŸ‘₯ Suhas Kamasetty Ramesh, Ayan Sengupta @ayans007, Tanmoy Chakraborty @Tanmoy_Chak .πŸ“Œ Paper: πŸ’Ύ Code:
aclanthology.org
Suhas Kamasetty Ramesh, Ayan Sengupta, Tanmoy Chakraborty. Findings of the Association for Computational Linguistics: ACL 2025. 2025.
@lcs2lab
LCS2 Lab
2 days
πŸ“’ New #ACL2025Findings Paper πŸ“’.Can smaller language models learn how large ones reason, or just what they conclude? Our latest paper in #ACLFindings explores the overlooked tension in #KnowledgeDistillation - generalization vs reasoning fidelity. #NLProc
@lcs2lab
LCS2 Lab
3 days
β˜‘οΈ HiPPrO outperforms LLMs like GPT-3.5 and GPT-4 in producing more aligned, controllable, and contextually appropriate counterspeech, while remaining #lightweight and #efficient.
@lcs2lab
LCS2 Lab
3 days
🧠 Using a two-stage architecture, HiPPrO generates responses that reflect both the speaker's intent (like questioning or denouncing) and the emotional tone (like anger or joy), making it far more human-like and impactful in tone-sensitive contexts like #hatemitigation.
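The tweet does not spell out HiPPrO's two stages, so the sketch below only illustrates what conditioning a generator on intent and emotion looks like; the label sets and the `generate` callable are hypothetical placeholders, not the framework itself.

```python
# Illustration of intent- and emotion-conditioned counterspeech generation.
# Not HiPPrO: the label sets and `generate` are assumed placeholders.
from typing import Callable

INTENTS = {"question", "denounce", "inform", "humor"}   # assumed label set
EMOTIONS = {"anger", "joy", "sadness", "calm"}          # assumed label set

def counterspeech_prompt(hate_post: str, intent: str, emotion: str) -> str:
    assert intent in INTENTS and emotion in EMOTIONS
    return (
        "Write a counterspeech reply to the post below.\n"
        f"Intent: {intent}\nEmotional tone: {emotion}\n"
        f"Post: {hate_post}\nReply:"
    )

def controlled_counterspeech(hate_post: str, intent: str, emotion: str,
                             generate: Callable[[str], str]) -> str:
    # `generate` stands in for any conditional text generator.
    return generate(counterspeech_prompt(hate_post, intent, emotion))
```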
@lcs2lab
LCS2 Lab
3 days
πŸš€ #ACL2025 Sneak Peek πŸš€.As online hate continues to challenge platforms and communities, our lab takes a meaningful step forward with HiPPrO, a new framework for generating controlled, constructive #counterspeech πŸ›‘οΈ.#NLProc #AIResearch
@lcs2lab
LCS2 Lab
7 days
RT @lcs2lab: 🚨 New #TACL Paper Alert 🚨 We explore a crucial question in instruction tuning: should we weight prompt and response tokens dif….
@lcs2lab
LCS2 Lab
7 days
πŸ“ On the Effect of Instruction Tuning Loss on Generalization.πŸ‘₯ Anwoy Chatterjee @anwoy_, Kowndinya Renduchintala @KowndinyaR, Sumit Bhatia, Tanmoy Chakraborty @Tanmoy_Chak.πŸ’Ύ Code: #MachineLearning #NLP #LLMs #TACL.
arxiv.org
Instruction Tuning has emerged as a pivotal post-training paradigm that enables pre-trained language models to better follow user instructions. Despite its significance, little attention has been...
@lcs2lab
LCS2 Lab
7 days
🚨 New #TACL Paper Alert 🚨
We explore a crucial question in instruction tuning: should we weight prompt and response tokens differently in the loss function?
Introducing Weighted Instruction Tuning - a simple idea that boosts generalization by up to +6.55% across 5 benchmarks!
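The idea of weighting prompt and response tokens differently can be sketched as a token-weighted cross-entropy. The PyTorch formulation and the weight values below (`prompt_w`, `response_w`) are illustrative assumptions, not the scheme or numbers reported in the paper.

```python
# Sketch of a token-weighted instruction-tuning loss (assumed weights).
import torch
import torch.nn.functional as F

def weighted_it_loss(logits: torch.Tensor,       # (batch, seq, vocab)
                     labels: torch.Tensor,       # (batch, seq)
                     is_response: torch.Tensor,  # (batch, seq) bool, True on response tokens
                     prompt_w: float = 0.1,
                     response_w: float = 1.0) -> torch.Tensor:
    # Per-token cross-entropy; padding handling omitted for brevity.
    per_token = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")  # (batch, seq)
    weights = is_response.float() * response_w + (~is_response).float() * prompt_w
    return (weights * per_token).sum() / weights.sum()
```

Setting `prompt_w = 0` recovers the common response-only loss, and `prompt_w = response_w` recovers plain language modeling over the full sequence.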
@lcs2lab
LCS2 Lab
11 days
πŸ“œ Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models.πŸ‘₯ @AradhyeAgarwal, Ayan Sengupta, Suhas K Ramesh, @Tanmoy_Chak .πŸ’Ύ Code: πŸ“Œ Paper: #LLMs #NLProc #PEFT #LoRA #Adapters #AIResearch.
arxiv.org
Fine-tuning large language models (LLMs) on downstream tasks requires substantial computational resources. Selective PEFT, a class of parameter-efficient fine-tuning (PEFT) methodologies, aims to...
@lcs2lab
LCS2 Lab
11 days
🚨 New Paper Alert 🚨 Thrilled to share that our latest work has been accepted to #TACL! 🎊 We introduce ID3, a dynamic #PEFT method that unmasks parameters incrementally, guided by a novel heuristic based on magnitude and gradient information. 🧠 🔗
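To make "unmasks parameters incrementally, guided by magnitude and gradient information" concrete, here is a rough sketch of a single unmasking step. The scoring rule (|grad| * |weight|) and the per-step budget are assumptions; ID3's exact heuristic and schedule may differ.

```python
# Sketch of incremental unmasking driven by magnitude and gradient signals.
import torch

@torch.no_grad()
def unmask_step(param: torch.Tensor, mask: torch.Tensor, budget: int) -> torch.Tensor:
    """Unmask up to `budget` extra entries of `param`; `mask` is a {0,1} float tensor."""
    assert param.grad is not None
    score = param.grad.abs() * param.abs()     # assumed score: combine gradient and magnitude
    score[mask.bool()] = float("-inf")         # ignore already-unmasked entries
    k = min(budget, int((~mask.bool()).sum()))
    if k > 0:
        idx = torch.topk(score.flatten(), k).indices
        mask.view(-1)[idx] = 1.0               # mark newly trainable entries
    return mask
```

In a training loop one would call this after each backward pass and zero the gradients of still-masked entries before the optimizer step, so only the unmasked subset is updated.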