Ji Ung Lee
@JiUngLee1
Followers: 348 · Following: 1K · Media: 6 · Statuses: 117
Postdoc @ RTG Neuroexplicit Models, Saarland University, Germany.
Joined June 2019
Job hunting at #NeurIPS2025? Passionate about Responsible AI? Come and talk to @FranklinMatija @Arianna_Manzini. We're hiring on *all levels* (senior, PhD graduate, and research students/interns) at #GoogleDeepMind -- stay tuned for the JD coming out in the new year
17 replies · 18 reposts · 208 likes
I hope to hire a fellow through these programs. I am looking at applications as they come in, so there's an advantage to applying early and pinging me to let me know your application is in the system!
.@Cornell is recruiting for multiple postdoctoral positions in AI as part of two programs: Empire AI Fellows and Foundational AI Fellows. Positions are available in NYC and Ithaca. Deadline for full consideration is Nov 20, 2025! https://t.co/Cp5710BauU
0 replies · 5 reposts · 15 likes
📢 Job Opportunity: Research Associate for Reasoning in LLMs, University of Bath, UK (deadline 05 August 2025). We are looking to hire a highly motivated researcher to work on analysing reasoning in LLMs. For more information, see: https://t.co/2bYI0RglSl
0 replies · 11 reposts · 24 likes
Beautiful @GoogleResearch paper. LLMs can learn in context from examples in the prompt and pick up new patterns while answering, yet their stored weights never change. That behavior looks impossible if learning always means gradient descent. The mechanisms through which this…
63 replies · 339 reposts · 3K likes
I am recruiting PhD students to join my lab at Harvard in Fall 2025! (deadline Dec 15) If you are interested in solving problems at the intersection of reinforcement learning, imitation learning, and NLP, please consider applying (https://t.co/kNhyGjrEbC)!
@hseas @KempnerInst
2 replies · 94 reposts · 390 likes
... and we're looking for PhD and post-docs to join @sardine_lab_it -- reach out if you're interested! 🐟
1 reply · 4 reposts · 12 likes
1/n🤖🧠 New paper alert!📢 In "Assessing Episodic Memory in LLMs with Sequence Order Recall Tasks" ( https://t.co/S8BZzkFVM6) we introduce SORT as the first method to evaluate episodic memory in large language models. Read on to find out what we discovered!🧵
arxiv.org
Current LLM benchmarks focus on evaluating models' memory of facts and semantic relations, primarily assessing semantic aspects of long-term memory. However, in humans, long-term memory also...
4 replies · 17 reposts · 61 likes
Interview: Prof. Kersting (@kerstingAIML) and Prof. @marcus_rohrbach on the cluster project RAI, which aims to develop the next generation of #KI @CS_TUDarmstadt @Hessian_AI
1 reply · 4 reposts · 27 likes
Are you interested in working with us on modularity and continual learning? Consider applying to our open full-time RE position in NYC: https://t.co/HlSrRLHLYt
job-boards.greenhouse.io
0 replies · 7 reposts · 74 likes
@xlr8harder I've been maintaining a database of base models with detailed info about their licensing here. See the screenshot for the list of OS-licensed models sorted by size. I believe BTPM-3B / Mistral-7B / MPT-30B is the "best model per VRAM" tradeoff. https://t.co/enoD29Lm0L
12 replies · 44 reposts · 267 likes
@ReviewAcl would you kindly consider sending a "reviews have been assigned notification"? 🙏
0 replies · 0 reposts · 2 likes
Just logged into my OpenReview account and noticed that the papers from the ARR October cycle have been assigned, with a first (small) task due Saturday. However, there was no notification at all 🫤
3 replies · 2 reposts · 12 likes
The ACL Survey on the Anonymity Period Policy is out: https://t.co/gjrrOfjeqr
0 replies · 38 reposts · 68 likes
📢📢 Brief reminder that the Eval4NLP submission deadline is this **Friday, 25.08.2023**. Focus: evaluation with/of LLMs; all other evaluation aspects also very welcome. 2nd call for papers: https://t.co/UELSVSPtDW Webpage: https://t.co/U6oTingFlR Venue: @aaclmeeting (Bali 🏖️)
0 replies · 7 reposts · 6 likes
📢📢📢 The Eval4NLP workshop will take place this year at AACL 2023. Special focus: evaluation of/with LLMs, including a shared task on Prompting LLMs as Explainable Metrics. 📢📢📢 Direct submission deadline: 25.08. Webpage: https://t.co/U6oTing7wj CFP:
0 replies · 13 reposts · 18 likes