peterljq Profile Banner
Jinqi Luo Profile
Jinqi Luo

@peterljq

Followers
506
Following
2K
Media
25
Statuses
225

CS PhD Student @Penn. MSR @CMU_Robotics. Previously @amazon (Rufus LLM) / @alibabagroup (Alipay). Controllable, robust, and multimodal generative models.

Pennsylvania
Joined March 2013
Don't wanna be here? Send us removal request.
@NeurIPSConf
NeurIPS Conference
1 month
Today, the NeurIPS Foundation is proud to announce a $500,000 donation to OpenReview supporting the infrastructure that makes modern ML research possible. OpenReview has been our trusted partner for years, enabling rigorous peer review at the scale and pace our field demands.
12
58
950
@peterljq
Jinqi Luo
2 months
🚀🚀🚀
@tianyuanzhang99
Tianyuan Zhang
2 months
Will be at #NeurIPS from Tues. to Sunday ✈️ Looking forward to chatting! especially if you’re interested in infinite context, test-time training and continue learning!
0
0
2
@peterljq
Jinqi Luo
2 months
So it is about which signal the model treats as the ultimate authoritative. Normally it was the reward value. Now inoculation prompting adds a higher-level instruction that disentangles the reward maximization and what the overseer really wants.
@AnthropicAI
Anthropic
2 months
Remarkably, prompts that gave the model permission to reward hack stopped the broader misalignment. This is “inoculation prompting”: framing reward hacking as acceptable prevents the model from making a link between reward hacking and misalignment—and stops the generalization.
0
0
1
@peterljq
Jinqi Luo
3 months
The surprise modeling is truly exciting 🚀
@sainingxie
Saining Xie
3 months
looking ahead, we’re prototyping something new -- we call it predictive sensing. our paper cited tons of work from cogsci and developmental psychology. the more we read, the more amazed we became by human / animal sensing. the human visual system is super high-bandwidth, yet
0
0
2
@peterljq
Jinqi Luo
4 months
Welcome back to Pennsylvania!
@CSProfKGD
Kosta Derpanis (sabbatical in Munich)
4 months
My home till the end of the year 🤗
2
0
6
@songyoupeng
Songyou Peng
4 months
📣 Announcing MUSI: 1st Multimodal Spatial Intelligence Workshop @ICCVConference! 🎙️All-star keynotes: @sainingxie, @ManlingLi_, @RanjayKrishna, @yuewang314, and @QianqianWang5 - plus a panel on the future of the field! 🗓 Oct 20, 1pm-5:30pm HST 🔗 https://t.co/wZaWKRIcYI
2
24
232
@peterljq
Jinqi Luo
4 months
Data!
@jbhuang0604
Jia-Bin Huang
4 months
Incredibly inspiring talk by Prof. Alexei (Alyosha) Efros at @umdcs today! Got to rethink the role of large visual data for visual analysis and synthesis. 🤔
0
0
4
@peterljq
Jinqi Luo
4 months
Agentic LLMs are becoming pivotal components in modern AI ecosystems. Please take a look at the FAST workshop if you are interested 🚀
@chenchenye_ccye
Chenchen Ye
5 months
🌟 Excited to co-organize the FAST workshop at AAAI 2026 in Singapore (Jan 27)! We're calling for submissions to explore why LLM-driven agentic AI systems exhibit emergent behaviors and how we can understand and guide these complex interactions. We welcome interdisciplinary
0
0
1
@peterljq
Jinqi Luo
4 months
I hope that, in another parallel universe, influencers could post: I just read Shannon’s research paper that completely broke my brain 😳 Instead of treating text as a mystery, he said: “ok, show me more of the prior text and let me predict the next symbol step by step.”
0
2
9
@_wenlixiao
Wenli Xiao
5 months
Definitely mind-blowing! It can even handle quite OOD cases when playing with Prof. Sastry 😂
@ZhiSu22
Zhi Su
5 months
🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.
4
21
176
@peterljq
Jinqi Luo
5 months
DynamicVoyager (ICCV 2025, https://t.co/Us84Damifk) is our latest effort on dynamic, controllable, and long-horizon scene synthesis. Please take a look if you are interested 🚀
Tweet card summary image
arxiv.org
The problem of generating a perpetual dynamic scene from a single view is an important problem with widespread applications in augmented and virtual reality, and robotics. However, since dynamic...
@tianfr1999
Fengrui Tian
5 months
How can we generate endless dynamic scenes where we can explore freely along any camera path? Check out our #ICCV2025 paper "Voyaging into Perpetual Dynamic Scenes from a Single View"! Website: https://t.co/YbZ1btG7UL Arxiv: https://t.co/5QCej7KdmS
0
0
9
@Alibaba_Qwen
Qwen
5 months
🚀 Excited to introduce Qwen-Image-Edit! Built on 20B Qwen-Image, it brings precise bilingual text editing (Chinese & English) while preserving style, and supports both semantic and appearance-level editing. ✨ Key Features ✅ Accurate text editing with bilingual support ✅
143
600
4K
@peterljq
Jinqi Luo
6 months
Impact!
@Guangxuan_Xiao
Guangxuan Xiao
6 months
The release of GPT-OSS-120B & GPT-OSS-20B models today incorporates my Attention Sink work ( https://t.co/u67QTC3rzh). Exciting to see this come to life! 🎉 Looking forward to more progress in this space. 😁
0
0
3
@peterljq
Jinqi Luo
6 months
He also translated (“broadcasted”😁) the sci-fi novel Three-Body Problem to the English community. A truly great writer!
@sainingxie
Saining Xie
6 months
yes. but you should all follow Ken Liu (@kyliu99) and read his novels. he’s my favorite sci-fi writer and just an incredible person. Ken’s been hands-on with AI for years. No surprise Pantheon clicks so hard with many researchers. I remember a few years ago he fine-tuned his own
0
0
3
@cihangxie
Cihang Xie
6 months
🚀 Excited to share GPT-Image-Edit-1.5M — our new large-scale, high-quality, fully open image editing dataset for the research community! (1/n)
3
50
219
@rasbt
Sebastian Raschka
6 months
From GPT to MoE: I reviewed & compared the main LLMs of 2025 in terms of their architectural design from DeepSeek-V3 to Kimi 2. Multi-head Latent Attention, sliding window attention, new Post- & Pre-Norm placements, NoPE, shared-expert MoEs, and more... https://t.co/oEt8XzNxik
Tweet card summary image
magazine.sebastianraschka.com
From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design
45
491
2K
@peterljq
Jinqi Luo
7 months
Feel the AGI!
@QuanquanGu
Quanquan Gu
7 months
从理论转到大模型,一路走来不讨喜。 有人不适应你的变化,有人不希望你真的做成。 Losers and haters make noise. Builders build. Feel the AGI!
0
0
0
@peterljq
Jinqi Luo
7 months
Learning to reconstruct information with minimal programs! Parsimony/sparsity reflect intelligence.
@ShivamDuggal4
Shivam Duggal
7 months
Compression is the heart of intelligence From Occam to Kolmogorov—shorter programs=smarter representations Meet KARL: Kolmogorov-Approximating Representation Learning. Given an image, token budget T & target quality 𝜖 —KARL finds the smallest t≤T to reconstruct it within 𝜖🧵
0
0
2