Harish Tayyar Madabushi

@harish

Followers: 2K · Following: 2K · Media: 132 · Statuses: 2K

Lecturer (~Assistant Professor) in Artificial Intelligence. Work on Deep Learning for #NLProc and Deep Contextual Meaning Representations

Bath, England
Joined December 2008
@harish
Harish Tayyar Madabushi
1 month
RT @frankniujc: Hey this is me! Our paper: Llama See, Llama Do: A Mechanistic Perspective on Contextual Entrainment and Distraction in LLMs…
frankniujc.github.io
0
3
0
@harish
Harish Tayyar Madabushi
1 month
RT @HaritzPuerto: I’ll be presenting today at 11:00 in Hall X5, booth 209, #ACL2025NLP. Come and let’s talk about how to train with CoTs!
0
2
0
@harish
Harish Tayyar Madabushi
1 month
RT @HaritzPuerto: Excited to present Diverse Chains of Thought at #ACL2025NLP. Do you have a dataset with more than one CoT/question? Do yo…
underline.io: On-demand video platform giving you access to lectures from conferences worldwide.
0
1
0
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych We provide open access to our code, models, data, and results: 📽️ Underline: 📄 Paper: 💻 Code: 🤗 Models: 📂 Data: 🌐 Website: (9/🧵)
huggingface.co
0
1
2
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych We also observed that when we generate 3 CoTs, if the first 2 CoTs are ❌ and the 3rd is ✅, the model picks the last one! 🎉 This shows that DCoT is not an ensemble of CoTs and is instead doing self-correction 🎊 8/🧵
1
0
1
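To picture the difference between self-correction and an ensemble, here is a minimal Python sketch; the "Answer:" output format and the parsing below are my assumptions, not the paper's code:

import re
from collections import Counter

def dcot_final_answer(output: str) -> str:
    # A DCoT model writes every chain and then one final answer in the
    # same pass; that final answer can follow the LAST (corrected) chain.
    answers = re.findall(r"Answer:\s*(.+)", output)
    return answers[-1].strip() if answers else ""

def ensemble_answer(per_chain_answers: list[str]) -> str:
    # What an ensemble of CoTs (self-consistency) would do instead:
    # majority vote over the chains' individual answers.
    return Counter(per_chain_answers).most_common(1)[0][0]

# With chains ❌, ❌, ✅ a majority vote returns a wrong answer, while the
# DCoT model's single final answer tracks the corrected third chain.
print(ensemble_answer(["41", "41", "40"]))                      # -> "41"
print(dcot_final_answer("CoT 1: ...\nCoT 2: ...\nAnswer: 40"))  # -> "40"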
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych Why does it work? DCoT attempts to generate subsequent correct CoTs. Maybe the first CoT is wrong ❌ (and the model doesn’t know it), but by trying to generate a second, better CoT, the model may correct the first one ✅🤩 7/🧵
1
0
1
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych Generating a second CoT is enough to achieve gains. Note that DCoT@1 remains the same as the vanilla CoT, i.e., training on DCoT is a better way to train an LLM if you have more than one CoT per question. (Both methods were trained with the same CoTs.) 6/🧵
[Image]
1
0
1
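Under one plausible reading of the notation (assumed here: the k in DCoT@k is the number of chains requested at inference time), DCoT@1 is literally a plain CoT request:

def dcot_prompt(question: str, k: int) -> str:
    # Hypothetical prompt template; the paper's exact wording may differ.
    return (f"Question: {question}\n"
            f"Write {k} chain(s) of thought, then give one final answer.\n")

q = "A train travels 60 km in 1.5 hours. What is its average speed?"
print(dcot_prompt(q, k=1))  # DCoT@1: behaves like a vanilla CoT request
print(dcot_prompt(q, k=2))  # per the tweet, one extra chain already yields gains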
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych What did we find? Fine-tuning LLMs with DCoT datasets significantly improves performance across all model sizes, from 1.3B to 70B parameters. 🎉 5/🧵
[Image]
1
0
1
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych We train the CoT and DCoT models with the same CoTs. The only difference is that DCoT forces the model to generate them sequentially in a single inference step. With this, we wondered whether LMs can refine their reasoning on the go. 4/🧵
[Image]
1
0
1
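A minimal sketch of how such a training target could be serialised so the chains come out sequentially in one decoding pass; the "CoT i:" delimiters are an assumption for illustration:

def dcot_training_target(cots: list[str], answer: str) -> str:
    # Pack all chains plus the final answer into ONE target sequence, so
    # the model learns to emit them back to back in a single pass.
    lines = [f"CoT {i}: {cot}" for i, cot in enumerate(cots, start=1)]
    lines.append(f"Answer: {answer}")
    return "\n".join(lines)

# Vanilla CoT training would keep one chain per target; DCoT packs several
# built from the very same pool of CoTs, which is the only difference.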
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych We created a specialized DCoT dataset, where every question has multiple correct chains of thought. These alternative reasoning paths are all tied to the same answer, encouraging the model to explore diverse solutions simultaneously. 🤔➡️💡 3/🧵
1
0
1
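One plausible shape for a record in such a dataset (the field names and the toy question are mine, for illustration):

record = {
    "question": "A train travels 60 km in 1.5 hours. What is its average speed?",
    "cots": [  # several distinct correct reasoning paths...
        "Speed = distance / time = 60 km / 1.5 h = 40 km/h.",
        "It covers 20 km every half hour, so 40 km every hour: 40 km/h.",
    ],
    "answer": "40 km/h",  # ...all tied to the same final answer
}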
@harish
Harish Tayyar Madabushi
1 month
@HaritzPuerto @UKPLab @BathNLP @IGurevych Traditional CoT methods focus on a single chain of reasoning to arrive at a solution. DCoT, on the other hand, requires models to generate ➡️ multiple reasoning paths before producing a final answer, 🔄 all in a single inference step. 2/🧵
[Image]
1
0
1
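Concretely, a single DCoT generation pass might look like the string below (an illustrative layout, not the paper's verbatim output format):

# One decoding pass, no re-prompting between chains (layout assumed):
example_output = (
    "CoT 1: 60 km / 1.5 h = 40 km/h.\n"
    "CoT 2: 20 km per half hour means 40 km per hour, i.e. 40 km/h.\n"
    "Answer: 40 km/h"
)
# Contrast with self-consistency, which samples each chain in a separate
# pass and then votes; here everything is interleaved in one sequence.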
@harish
Harish Tayyar Madabushi
1 month
At first I was not sure 🤔, but on second thought, I knew what to do!!! 💡😃 📢 Diverse Chains of Thought help LLMs refine their Reasoning!! @haritzpuerto will be presenting our work at #ACL2025NLP 🇦🇹 on Wednesday 30th at 11:00. #NLProc A 🧵👇
[Image]
1
4
16
@harish
Harish Tayyar Madabushi
1 month
RT @feralvam: The trial data has just been released to registered participants. There’s still time for your team to join! #emnlp2025 #nlproc.
0
4
0
@harish
Harish Tayyar Madabushi
1 month
📢 Job Opportunity: Research Associate for Reasoning in LLMs, University of Bath, UK (deadline: 5 August 2025). We are looking to hire a highly motivated researcher to work on analysing reasoning in LLMs. For more information, see:
0
11
24
@harish
Harish Tayyar Madabushi
2 months
RT @nedjmaou: The Cardiff #NLProc Workshop starts on Monday! If you've registered, you should have received a confirmation email (from me)…
0
4
0
@harish
Harish Tayyar Madabushi
2 months
RT @StevenSchockae2: I am looking for a postdoctoral research associate to work on (LLM-based and neurosymbolic) reasoning for story unders…
jobs.ac.uk
0
10
0
@harish
Harish Tayyar Madabushi
2 months
RT @tylerl404: Happy to announce our journal paper on tongue twisters, Train and Constrain (TwistList 2.0), has now been officially publish…
0
4
0
@harish
Harish Tayyar Madabushi
3 months
RT @josephimperial_: 🚨 New global collaboration & dataset paper! UniversalCEFR: Enabling Open Multilingual Research on Language Proficienc…
0
9
0
@harish
Harish Tayyar Madabushi
3 months
[Image]
0
0
2
@harish
Harish Tayyar Madabushi
3 months
Starting in 15 minutes! Looking forward to this talk by @DrGeofreyHinton #NLProc
[Image]
1
1
6