
Harsh Jhamtani
@harsh_jhamtani
558 Followers · 1K Following · 6 Media · 72 Statuses
Researcher @Microsoft | NLP / ML PhD from @LTIatCMU | Previously at @UCSanDiego @allen_ai @AdobeResearch @facebookai @iitroorkee | Opinions are my own.
United States
Joined April 2014
I am growing an R&D team around Copilot Tuning, a newly announced effort that supports customer-specific adaptation. Join us! https://t.co/kVocnuTrKN We collaborate with a crack team of engineers and scientists who support the product, and that team is also growing! https://t.co/typyUXfQ8g
Our work will be presented at ACL 2025 today, Monday, July 28, 18:00-19:30, in Session 5. In this work, we created an environment to benchmark LLM agents on productivity tasks that don't just require tools, but also require getting information from various people in the organization.
With PeopleJoin, our new benchmark, we study LM agents as coordinators to gather distributed insights and empower collaborative problem solving. LM Agents for Coordinating Multi-User Information Gathering https://t.co/u1S77OwSzH
@ben_vandurme @jacobandreas
@ben_vandurme @jacobandreas Such agents need to tackle several challenges to solve the assigned task effectively and efficiently: identifying what information is already available, judiciously determining whom to contact, asking precise questions, and compiling the results.
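The coordination loop sketched in that thread can be illustrated in a few lines of Python. This is a toy sketch, not the PeopleJoin implementation: the function name `coordinate` and the idea of simulating user replies via a `directory` dict are hypothetical.

```python
# Toy coordinator for multi-user information gathering (illustrative
# only; not the PeopleJoin benchmark's agent).

def coordinate(task_fields, known_facts, directory):
    """Gather the facts a task needs, contacting users only for
    fields that are not already available.

    task_fields: fields the task requires
    known_facts: dict of field -> value already on hand
    directory:   dict of field -> (user, reply), simulating whom to
                 ask and what they would answer
    """
    gathered = dict(known_facts)          # what is already available
    contacted = []
    for field in task_fields:
        if field in gathered:
            continue                      # skip info we already have
        user, reply = directory[field]    # decide whom to contact
        contacted.append(user)
        gathered[field] = reply           # ask and record the answer
    return gathered, contacted            # compile the results

facts, contacted = coordinate(
    ["budget", "deadline"],
    {"budget": "10k"},
    {"deadline": ("alice", "March 1")},
)
```

Here only `deadline` is missing, so the agent contacts `alice` alone, illustrating the "judiciously determining whom to contact" step.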
LM agents can perform 'deep research' to generate full reports from accessible content—but in most organizations, critical info is siloed across people. Many decision-making, content creation & info-gathering tasks therefore require collecting information from multiple people.
🚀 Excited to share our latest work! 1. Steering Large Language Models between Code Execution and Textual Reasoning (ICLR’25) ( https://t.co/8WKFYLir8t) 2. CodeSteer: Symbolic-Augmented LLMs via Code/Text Guidance ( https://t.co/cu9VLnFnJ6) 📂 Code: https://t.co/bsPSf4RFYb
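The core idea of steering between code execution and textual reasoning can be illustrated with a toy router. This heuristic sketch is not the CodeSteer method (which trains a guidance model); the hint list and `route` function are hypothetical.

```python
# Toy router: decide whether a task is better answered by generating
# and executing code or by textual reasoning. Purely illustrative;
# not the CodeSteer approach.

MATH_HINTS = ("sum", "count", "sort", "multiply", "average")

def route(task: str) -> str:
    """Return 'code' for tasks with symbolic/computational cues,
    otherwise 'text'."""
    t = task.lower()
    if any(h in t for h in MATH_HINTS) or any(ch.isdigit() for ch in t):
        return "code"
    return "text"
```

A real system would learn this decision rather than hard-code keywords, but the interface (task in, modality out) is the same.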
Excited to share our latest work, Learning to Retrieve Iteratively for In-Context Learning, at #EMNLP2024! Here’s how we’re pushing the boundaries of retrieval techniques in large language models (LLMs). 🧵👇
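Iterative retrieval can be sketched as a greedy loop that conditions each retrieval step on the exemplars chosen so far. The word-overlap scorer below is a stand-in assumption, not the paper's trained retriever.

```python
# Illustrative sketch of iterative retrieval for in-context learning:
# pick exemplars one at a time, scoring each candidate against the
# query AND the exemplars already chosen. Toy scorer; the paper
# trains this component.

def score(candidate, query, chosen):
    # Reward word overlap with the query; penalize overlap with
    # already-chosen exemplars to encourage diversity.
    c = set(candidate.split())
    overlap = len(c & set(query.split()))
    redundancy = sum(len(c & set(e.split())) for e in chosen)
    return overlap - 0.5 * redundancy

def retrieve_iteratively(query, pool, k):
    chosen, remaining = [], list(pool)
    for _ in range(k):
        best = max(remaining, key=lambda c: score(c, query, chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

chosen = retrieve_iteratively(
    "sort a list",
    ["reverse a string", "sort a list in python", "sort numbers ascending"],
    2,
)
```

The key structural point is that the scoring function sees `chosen`, so each step's retrieval depends on the prompt built so far.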
Our paper 'Learning to Retrieve Iteratively for In-Context Learning' was named one of the Outstanding Papers at EMNLP 2024! https://t.co/KUrHarqS2s
I'm at EMNLP! Come by Poster Session B (2pm-3:30pm) if you want to say hi and/or hear about this cool trick for bootstrapping paired language+code data from raw code! Paper 🔗: https://t.co/XTLPMqbKxl
How to effectively customise LLMs - https://t.co/L3sa2pbTwp
🚨 New paper alert! 🚨 We all know that LLMs can be integrated with tools and personalised through clever prompts or, more recently, "custom instructions". But what happens when we evaluate these instructions more systematically? https://t.co/AyrT3mW3h9 (1/N)
“Ontologically Faithful Generation of Non-Player Character Dialogues” @Nathaniel_Weir, Ryan Thomas, Randolph d'Amore, Kellie Hill, @ben_vandurme, @harsh_jhamtani
https://t.co/QW570gsHRE
“Learning to Retrieve Iteratively for In-Context Learning” @YunmoChen, @ctongfei, @harsh_jhamtani, @nlpaxia, Richard Shin, @adveisner, @ben_vandurme
https://t.co/Vta3zJ0mtm
PhD Summer Research Internships on topics in conversational AI at Microsoft https://t.co/T8pcpiUw71
Glad to share my internship work at Microsoft Research. Many thanks to my mentors @Chi_Wang_, @harsh_jhamtani, Srinagesh Sharma, and my PhD advisor Chuchu Fan. 'Steering Large Language Models between Code Execution and Textual Reasoning' 👉 Full paper: https://t.co/TDUKtwy6WB
"Interpreting User Requests in the Context of Natural Language Standing Instructions" to appear in #NAACL2024 Findings Paper: https://t.co/M7omPFbKbu Data: https://t.co/ClTkqv2fS0 Code: https://t.co/usHkXaMl2h
@nlpaxia @jacobandreas @adveisner @ben_vandurme @harsh_jhamtani
Chandrayaan-3 Mission: 'India🇮🇳, I reached my destination and you too!' Chandrayaan-3 has successfully soft-landed on the moon 🌖! Congratulations, India🇮🇳! #Chandrayaan_3
#Ch3
Natural Language Decomposition and Interpretation of Complex Utterances abs: https://t.co/g3FeuwDPSS paper page: https://t.co/bZUBlxQBtF
Excited to announce a new dataset–– the result of a fun summer with @harsh_jhamtani at Semantic Machines. We present KNUDGE, a dataset of _real_ dialogue trees from The Outer Worlds (@OuterWorlds) with associated biography and quest specifications. https://t.co/pLTSwcBh8Y
New @columbianlp #EMNLP2022 paper where we leverage the strong few-shot capabilities of LLMs 🤖 & evaluative strength of expert humans 🧑💻 to create a benchmark for Figurative Language Understanding with a focus on Explanations (1/n) Paper: https://t.co/O3FRVcSPZH
#NLProc #xAI
PhD students in NLP/ML/AI: apply for an in-person summer 2023 internship at Microsoft Semantic Machines! Intelligent agents helping humans, via grounded natural language dialogue and more. Read more at https://t.co/Oo0sIkOQ3G . #NLProc #ML #AI