
Jing Xiong
@_June1126
Followers: 36 · Following: 13 · Media: 16 · Statuses: 27
PhD student at HKU. Research direction: Efficient Natural Language Processing and Automated Theorem Proving
Joined March 2016
RT @_reachsumit: CTR-Sink: Attention Sink for Language Models in Click-Through Rate Prediction. Introduces behavior-level attention sinks t… (illustrative sketch below)
arxiv.org
Click-Through Rate (CTR) prediction, a core task in recommendation systems, estimates user click likelihood using historical behavioral data. Modeling user behavior sequences as text to leverage...
0
1
0
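The truncated abstract above only names the mechanism: behavior-level attention sinks. As a heavily hedged illustration of what that could mean (this is not the paper's code; `SinkedBehaviorSequence`, `d_model`, and the choice of one sink per segment are all my assumptions), here is a minimal PyTorch sketch that interleaves a learnable sink embedding between user-behavior segments:

```python
import torch
import torch.nn as nn

class SinkedBehaviorSequence(nn.Module):
    """Illustrative sketch only (not the CTR-Sink implementation): interleave
    a learnable 'sink' embedding between user-behavior segments so attention
    has a designated token to park probability mass on at segment borders."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        # One learnable sink vector, shared across all segment boundaries.
        self.sink = nn.Parameter(torch.randn(1, d_model) * 0.02)

    def forward(self, segments: list[torch.Tensor]) -> torch.Tensor:
        parts = []
        for seg in segments:                 # seg: (seg_len, d_model)
            parts.extend([seg, self.sink])   # sink token after each segment
        return torch.cat(parts, dim=0)       # (total_len + n_segments, d_model)
```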
RT @_reachsumit: UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling for Retrieval-Augmented Generation. Uses SNR-based s…
0
5
0
#ICML2025 #ParallelComp #LongContext #LengthExtrapolation #MemoryBound #EfficientInference #KVCacheCompression #128KToken
0
0
0
RT @HuiShen_umich: 📷 New Benchmark Release: PhyX - Physical Reasoning for Multimodal Models. 👉 Project Page: 👉 Gith…
0
7
0
RT @clin_tian: 🔥Thrilled to announce our Oral acceptance at #NeurIPS2024! 🚀HydraLoRA, an asymmetric LoRA architecture with a shared A matri… (illustrative sketch below)
0
14
0
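The HydraLoRA tweet truncates at its key design point: one down-projection A shared across experts, with multiple up-projection B matrices (the "hydra heads") mixed per input, per the paper's described design. A minimal sketch of that asymmetry, assuming a simple token-wise softmax router and illustrative dimensions (`rank`, `n_experts`); this is not the authors' implementation:

```python
import torch
import torch.nn as nn

class HydraLoRALayer(nn.Module):
    """Hedged sketch of an asymmetric LoRA update in the spirit of HydraLoRA:
    one shared down-projection A, several expert up-projections B_i, and a
    learned token-wise router that mixes the experts. Dimensions and the
    softmax router are illustrative choices, not the authors' code."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, n_experts: int = 3):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)         # shared across experts
        self.Bs = nn.ModuleList(
            [nn.Linear(rank, d_out, bias=False) for _ in range(n_experts)]
        )
        self.router = nn.Linear(d_in, n_experts)           # token-wise gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.A(x)                                      # (..., rank)
        gates = torch.softmax(self.router(x), dim=-1)      # (..., n_experts)
        outs = torch.stack([B(z) for B in self.Bs], dim=-1)  # (..., d_out, E)
        return (outs * gates.unsqueeze(-2)).sum(dim=-1)    # mix expert outputs
```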
RT @cerana99x: 🌟Excited to share LeCo's acceptance at #COLM2024! 🤔Fed up with LLMs' self-correction struggles and endless prompts? 🪄LeCo uses…
0
14
0
RT @ZhijiangG: 👋LLMs work quite well on modeling/understanding long context. What about generating long content? 🤔 Check our ACL paper P…
arxiv.org
Large Language Models (LLMs) have succeeded remarkably in understanding long-form content. However, exploring their capability for generating long-form content, such as reports and articles, has...
0
9
0
RT @YinhongLiu2: 🔥New paper! 📜 Struggling to align LLM evaluators with human judgements? 🤔 Introducing PairS 🌟: by exploiting transitivity, we p… (illustrative sketch below)
0
10
0
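The PairS tweet cuts off right after its central trick: if the LLM judge's pairwise preferences are (approximately) transitive, you do not need all O(n²) comparisons; a merge-sort-style procedure ranks n candidates with O(n log n) judge calls. A minimal sketch of that general idea, with `llm_prefers` as a placeholder for whatever pairwise judge is used; this is a generic transitivity-based ranker, not the paper's exact search:

```python
from typing import Callable, List

def rank_by_pairwise(items: List[str],
                     llm_prefers: Callable[[str, str], bool]) -> List[str]:
    """Merge-sort ranking that leans on transitivity: each item is compared
    against O(log n) others rather than all n - 1. `llm_prefers(a, b)` is a
    stand-in for an LLM judge returning True iff a should rank above b."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = rank_by_pairwise(items[:mid], llm_prefers)
    right = rank_by_pairwise(items[mid:], llm_prefers)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if llm_prefers(left[i], right[j]):
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```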
RT @space_discrete: Came across a paper, [ICLR'24] Understanding Addition in Transformers. It reminded me of a problem a teacher posed back when I was first learning OI (competitive programming): do big-integer addition directly in left-to-right order, without reading the whole input and reversing it first. After ten minutes I came up with an approach that buffers runs of 9s and the carry… (illustrative sketch below)
0
14
0
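The punchline of that buffered-9s trick: when emitting digits left to right, a digit sum of 9 cannot be finalized, because a carry arriving later from the right would flip it to 0 and ripple further left; any smaller sum settles everything buffered so far, and any larger sum resolves the buffer via the carry. A minimal sketch (my own reconstruction of the idea, not the tweet author's code):

```python
def add_left_to_right(a: str, b: str) -> str:
    """Big-integer addition emitting digits left to right in one pass.
    A digit sum of 9 is ambiguous (a later carry turns it into 0 and ripples),
    so runs of 9s are buffered along with the one undecided digit before them."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    out = []
    pending, nines = 0, 0          # pending = leftmost undecided digit
    for x, y in zip(a, b):
        s = int(x) + int(y)
        if s == 9:                 # still ambiguous: buffer it
            nines += 1
        elif s > 9:                # carry ripples through the buffered 9s
            out.append(str(pending + 1))
            out.append("0" * nines)
            pending, nines = s - 10, 0
        else:                      # s < 9: everything buffered is now final
            out.append(str(pending))
            out.append("9" * nines)
            pending, nines = s, 0
    out.append(str(pending))
    out.append("9" * nines)
    result = "".join(out).lstrip("0")
    return result or "0"
```

For example, add_left_to_right("95", "7") buffers the leading 9, then the sum 5 + 7 = 12 resolves it through the carry, yielding "102".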
RT @_June1126: Excited to announce our paper's acceptance at ICLR 2024! 🌟 Our algorithm leverages CoT for enhanced in-context exemplar sele… (illustrative sketch below)
0
4
0
🔗 For more exciting discoveries and in-depth analysis, please check out our paper and code! #NLP #AIResearch #LanguageModels #InContextLearning #DQLoRe 📚✨
0
0
1
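To make the DQ-LoRe announcement above concrete: as I understand it, the idea is to query the LLM once for an initial chain of thought, embed the question together with that CoT, and select in-context exemplars by similarity, with a low-rank re-ranking step on top. The sketch below is illustrative only; the two-stage split, the `rank` parameter, and the PCA-style projection are my assumptions, not the paper's exact pipeline:

```python
import numpy as np

def select_exemplars(q_emb: np.ndarray, pool_embs: np.ndarray,
                     k: int = 8, rank: int = 32) -> list[int]:
    """Illustrative sketch: `q_emb` is assumed to embed question + first-pass
    CoT (the 'dual query'); candidates are shortlisted by cosine similarity,
    then re-ranked in a PCA-style low-rank subspace of the shortlist."""
    # Stage 1: coarse retrieval in the full embedding space.
    sims = pool_embs @ q_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(q_emb) + 1e-9)
    shortlist = np.argsort(-sims)[: 4 * k]

    # Stage 2: project shortlist + query onto the top principal components
    # of the shortlist embeddings, then re-score by cosine similarity.
    X = pool_embs[shortlist]
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    P = vt[: min(rank, vt.shape[0])].T       # (d, rank) projection
    xq, xs = (q_emb - mu) @ P, (X - mu) @ P
    re_sims = xs @ xq / (
        np.linalg.norm(xs, axis=1) * np.linalg.norm(xq) + 1e-9)
    return [int(shortlist[i]) for i in np.argsort(-re_sims)[:k]]
```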