HKUST NLP (@hkustNLP)
HKUST Natural Language Processing Research #NLProc
Joined October 2024 · 250 Followers · 54 Following · 1 Media · 26 Statuses
HKUST NLP (@hkustNLP) · 2 months
Great work! @ZhaoweiWang4
  ↳ AK (@_akhaliq) · 3 months: MMLongBench: Benchmarking Long-Context Vision-Language Models Effectively and Thoroughly [image]
0 · 0 · 2
HKUST NLP (@hkustNLP) · 5 days
RT @ZhouHE777: Excited to present our work, MMBoundary, at #ACL2025! Come chat with us at our poster session! πŸ“ Hall 4/5, Session 12: Post…
0 · 8 · 0
HKUST NLP (@hkustNLP) · 5 days
RT @yqsong: If you are at ACL, please talk to our members of @HKUSTKnowComp and @hkustNLP big family [2 images]
0 · 3 · 0
HKUST NLP (@hkustNLP) · 9 days
RT @ZhouHE777: 🀯 Multimodal LLMs can be confidently wrong. A single early mistake in perception can lead to a completely incorrect answer…
0 · 3 · 0
HKUST NLP (@hkustNLP) · 9 days
RT @SterZhang: Find us at the poster booth ✨ Wed 7/30 11am Hall 4/5 ✨
0 · 4 · 0
HKUST NLP (@hkustNLP) · 9 days
RT @May_F1_: @hkustNLP @uiuc_nlp @aclmeeting [1/n] "Matching cues for identical objects, distinct attributes for unique ones." Such cross-c…
0 · 3 · 0
HKUST NLP (@hkustNLP) · 10 days
RT @aclmentorship: πŸ“’ Join us for the ACL Mentorship Session @aclmeeting #ACL2025NLP #NLProc β€’ Session Link: β€’ Ask…
0 · 12 · 0
HKUST NLP (@hkustNLP) · 10 days
RT @May_F1_: Heading out to #ACL2025 in Vienna with six main/Findings papers to present! πŸ‡¦πŸ‡ΉβœˆοΈπŸ€© Would love to chat about research on multimo…
0 · 6 · 0
HKUST NLP (@hkustNLP) · 26 days
RT @ZeyuQin_alan: #COLM2025 Our work has been accepted to COLM 2025 😊 Looking forward to discussing Scalable Oversight and Synthetic Data w…
0 · 6 · 0
HKUST NLP (@hkustNLP) · 1 month
RT @SuZhaochen0110: Excited to share our new survey on the reasoning paradigm shift from "Think with Text" to "Think with Image"! πŸ§ πŸ–ΌοΈ Our w…
0 · 61 · 0
HKUST NLP (@hkustNLP) · 1 month
RT @yqsong: Thrilled to share a major milestone: the culmination of a 15-month project, ATLAS, a new benchmark in event graphs and conceptua…
0 · 6 · 0
HKUST NLP (@hkustNLP) · 2 months
RT @junxian_he: We studied both rule-based and model-based verifiers and found that each has unique limitations. Rule-based verifiers are o…
0 · 14 · 0
HKUST NLP (@hkustNLP) · 2 months
RT @May_F1_: Great to see the wonderful series of work that @WangCarrey has been leading at UIUC. We also had a fun collaboration recently…
0 · 7 · 0
HKUST NLP (@hkustNLP) · 2 months
RT @wyu_nd: πŸš€ We release MMLongBench: a benchmark for evaluating long-context VLMs. πŸ“Š 13,331 examples across 5 tasks: – Visual RAG – Many-sho…
0 · 30 · 0
HKUST NLP (@hkustNLP) · 2 months
RT @Tianshi_0218: πŸš€ Our new survey "From Automation to Autonomy: LLMs in Scientific Discovery" is live! πŸ“œ Paper:
0 · 5 · 0
HKUST NLP (@hkustNLP) · 3 months
RT @AlexWan40524978: Struggling with inconsistent LLM answers across languages? Meet CALM: a label-free framework to align LLM's cross-ling…
[Link card · arxiv.org] Large Language Models (LLMs) are pretrained on extensive multilingual corpora to acquire both language-specific cultural knowledge and general knowledge. Ideally, while LLMs should provide…
0 · 8 · 0
HKUST NLP (@hkustNLP) · 5 months
RT @SterZhang: πŸš€ Introducing VLMΒ²-Bench! A simple yet essential ability that we use in daily life. But when tackling vision-centric tasks…
0 · 47 · 0
HKUST NLP (@hkustNLP) · 6 months
🀯 Still waiting for a PhD offer? Seeking research opportunities? πŸ”Š Our HKUST NLP Group has a number of PhD openings for qualified students; we also offer many year-round remote internship opportunities! 🌟 Fill out: Happy Year of the Snake! 🐍🧧🍊 [image]
0 · 3 · 21
HKUST NLP (@hkustNLP) · 6 months
RT @junxian_he: We replicated the DeepSeek-R1-Zero and DeepSeek-R1 training on a 7B model with only 8K examples, the results are surprisingly…
0 · 650 · 0
HKUST NLP (@hkustNLP) · 6 months
RT @RenjiePi: πŸ”₯ Introducing Image Textualization (IT), an automatic framework for generating detailed and accurate image descriptions. We re…
0 · 33 · 0