Jinqi Luo
@peterljq
Followers
506
Following
2K
Media
25
Statuses
225
CS PhD Student @Penn. MSR @CMU_Robotics. Previously @amazon (Rufus LLM) / @alibabagroup (Alipay). Controllable, robust, and multimodal generative models.
Pennsylvania
Joined March 2013
Today, the NeurIPS Foundation is proud to announce a $500,000 donation to OpenReview supporting the infrastructure that makes modern ML research possible. OpenReview has been our trusted partner for years, enabling rigorous peer review at the scale and pace our field demands.
12
58
950
🚀🚀🚀
Will be at #NeurIPS from Tues. to Sunday ✈️ Looking forward to chatting! especially if you’re interested in infinite context, test-time training and continue learning!
0
0
2
So it is about which signal the model treats as the ultimate authoritative. Normally it was the reward value. Now inoculation prompting adds a higher-level instruction that disentangles the reward maximization and what the overseer really wants.
Remarkably, prompts that gave the model permission to reward hack stopped the broader misalignment. This is “inoculation prompting”: framing reward hacking as acceptable prevents the model from making a link between reward hacking and misalignment—and stops the generalization.
0
0
1
The surprise modeling is truly exciting 🚀
looking ahead, we’re prototyping something new -- we call it predictive sensing. our paper cited tons of work from cogsci and developmental psychology. the more we read, the more amazed we became by human / animal sensing. the human visual system is super high-bandwidth, yet
0
0
2
Welcome back to Pennsylvania!
2
0
6
📣 Announcing MUSI: 1st Multimodal Spatial Intelligence Workshop @ICCVConference! 🎙️All-star keynotes: @sainingxie, @ManlingLi_, @RanjayKrishna, @yuewang314, and @QianqianWang5 - plus a panel on the future of the field! 🗓 Oct 20, 1pm-5:30pm HST 🔗 https://t.co/wZaWKRIcYI
2
24
232
Data!
Incredibly inspiring talk by Prof. Alexei (Alyosha) Efros at @umdcs today! Got to rethink the role of large visual data for visual analysis and synthesis. 🤔
0
0
4
Agentic LLMs are becoming pivotal components in modern AI ecosystems. Please take a look at the FAST workshop if you are interested 🚀
🌟 Excited to co-organize the FAST workshop at AAAI 2026 in Singapore (Jan 27)! We're calling for submissions to explore why LLM-driven agentic AI systems exhibit emergent behaviors and how we can understand and guide these complex interactions. We welcome interdisciplinary
0
0
1
I hope that, in another parallel universe, influencers could post: I just read Shannon’s research paper that completely broke my brain 😳 Instead of treating text as a mystery, he said: “ok, show me more of the prior text and let me predict the next symbol step by step.”
0
2
9
DynamicVoyager (ICCV 2025, https://t.co/Us84Damifk) is our latest effort on dynamic, controllable, and long-horizon scene synthesis. Please take a look if you are interested 🚀
arxiv.org
The problem of generating a perpetual dynamic scene from a single view is an important problem with widespread applications in augmented and virtual reality, and robotics. However, since dynamic...
How can we generate endless dynamic scenes where we can explore freely along any camera path? Check out our #ICCV2025 paper "Voyaging into Perpetual Dynamic Scenes from a Single View"! Website: https://t.co/YbZ1btG7UL Arxiv: https://t.co/5QCej7KdmS
0
0
9
🚀 Excited to introduce Qwen-Image-Edit! Built on 20B Qwen-Image, it brings precise bilingual text editing (Chinese & English) while preserving style, and supports both semantic and appearance-level editing. ✨ Key Features ✅ Accurate text editing with bilingual support ✅
143
600
4K
Impact!
The release of GPT-OSS-120B & GPT-OSS-20B models today incorporates my Attention Sink work ( https://t.co/u67QTC3rzh). Exciting to see this come to life! 🎉 Looking forward to more progress in this space. 😁
0
0
3
He also translated (“broadcasted”😁) the sci-fi novel Three-Body Problem to the English community. A truly great writer!
yes. but you should all follow Ken Liu (@kyliu99) and read his novels. he’s my favorite sci-fi writer and just an incredible person. Ken’s been hands-on with AI for years. No surprise Pantheon clicks so hard with many researchers. I remember a few years ago he fine-tuned his own
0
0
3
🚀 Excited to share GPT-Image-Edit-1.5M — our new large-scale, high-quality, fully open image editing dataset for the research community! (1/n)
3
50
219
From GPT to MoE: I reviewed & compared the main LLMs of 2025 in terms of their architectural design from DeepSeek-V3 to Kimi 2. Multi-head Latent Attention, sliding window attention, new Post- & Pre-Norm placements, NoPE, shared-expert MoEs, and more... https://t.co/oEt8XzNxik
magazine.sebastianraschka.com
From DeepSeek-V3 to Kimi K2: A Look At Modern LLM Architecture Design
45
491
2K
Learning to reconstruct information with minimal programs! Parsimony/sparsity reflect intelligence.
Compression is the heart of intelligence From Occam to Kolmogorov—shorter programs=smarter representations Meet KARL: Kolmogorov-Approximating Representation Learning. Given an image, token budget T & target quality 𝜖 —KARL finds the smallest t≤T to reconstruct it within 𝜖🧵
0
0
2