
Taewhoo Lee
@taewhoolee
Followers
22
Following
24
Media
6
Statuses
12
m.s. student in #NLP at Korea University
Joined September 2023
Excited to present ETHIC at #NAACL2025! Looking forward to open discussions on long-context modeling, evaluation, or anything else :) 🗓️ Friday, May 2, 11:00-12:30 · 📍 Hall 3, Session K, Poster Session 8
This project was done in collaboration with @cw_yoon99 @TigerKyo @DonghyeonLee_KR @_MinjuSong and @hyunjae__kim. Huge thanks for their amazing support and contributions!
🤔 Modern LLMs are known to support long text, but can they fully utilize the information available in these texts? 💡 Introducing ETHIC, a new long-context benchmark designed to assess LLMs' ability to leverage the entire given context.
arxiv.org
Recent advancements in large language models (LLMs) capable of processing extremely long texts highlight the need for a dedicated evaluation benchmark to assess their long-context capabilities....
Had so much fun attending #EMNLP2024! Every conversation I had with fellow researchers was truly inspiring and insightful. Big thanks to @cw_yoon99 @HyeonHwang8 @jeongminby98858 for the amazing teamwork over the past several months. Moving on to the next one!
Happy to share that CompAct has been accepted to EMNLP 2024 Main 🎉 Congratulations to our team @cw_yoon99 @HyeonHwang8 @jeongminby98858, and see you in Miami!
🔍 Looking for an advanced compressor for multi-hop QA tasks that leverages an increased number of top-k documents effectively? Introducing ✨CompAct✨, a novel framework that employs an active strategy for compressing extensive documents. [1/5] paper:
RT @karpathy: # On the "hallucination problem". I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, …