
Junyang Lin
@JustinLin610
Followers: 34K · Following: 6K · Media: 135 · Statuses: 3K
building Qwen universe @Alibaba_Qwen ❤️ 🍵 ☕️ 🍷 🥃
Joined December 2015
RIP
Replies: 1 · Reposts: 0 · Likes: 38
off u go! a lot of updates. notably, we have strengthened the coding capabilities of qwen3-vl and it is now on qwen code. with qwen code, u can code with image inputs, handwritten designs, screenshots, etc. there's also plan mode, which is important for today's vibe coding. plan before u
🚀 Exciting updates in Qwen Code v0.0.12–v0.0.14! ✨ What’s new? • Plan Mode: AI proposes a full implementation plan—you approve before a single line changes. • Vision Intelligence: Auto-switch to vision models (Qwen3-VL-Plus with 256K input / 32K output!) when images
Replies: 29 · Reposts: 38 · Likes: 464
VL is sometimes a bit complex, and I really advise you to play with it using this cookbook as a reference!
Introducing Qwen3-VL Cookbooks! 🧑‍🍳 A curated collection of notebooks showcasing the power of Qwen3-VL—via both local deployment and API—across diverse multimodal use cases: ✅ Thinking with Images ✅ Computer-Use Agent ✅ Multimodal Coding ✅ Omni Recognition ✅ Advanced
Replies: 14 · Reposts: 22 · Likes: 340
Talk from Wenting Zhao of Qwen on their plans during COLM. Seems like the plan is still one word: scaling training up! Let’s go.
Replies: 14 · Reposts: 48 · Likes: 402
Qwen Image Edit 2509 is the new leading open weights image editing model, ranking #3 overall in the Artificial Analysis Image Editing Arena and introducing multi-image editing capabilities! The latest release from Alibaba Qwen trails only Gemini 2.5 Flash (Nano-Banana) and
Replies: 7 · Reposts: 33 · Likes: 331
in case u don't know, i set up a small team for robotics and embodied ai inside qwen. multimodal foundation models are now being transformed into foundation agents that can leverage tools and memory to perform long-horizon reasoning thanks to reinforcement learning. they should
Replies: 42 · Reposts: 74 · Likes: 966
I was really looking forward to being at #COLM2025 with Junyang, but visas take forever 😞 come ask me about Qwen: what it’s like to work here, what features you’d like to see, what bugs you’d like us to fix, or anything!
Sorry about missing COLM due to my failed visa application. @wzhao_nlp will be there to represent Qwen, give a talk, and join the panel discussion on reasoning and agents!
Replies: 3 · Reposts: 3 · Likes: 69
Want to hear some hot takes about the future of language modeling, and share your takes too? Stop by the Visions of Language Modeling workshop at COLM on Friday, October 10 in room 519A! There will be over a dozen speakers working on all kinds of problems in modeling language and
Replies: 1 · Reposts: 12 · Likes: 81
Sorry about missing COLM due to my failed visa application. @wzhao_nlp will be there to represent Qwen, give a talk, and join the panel discussion on reasoning and agents!
Replies: 14 · Reposts: 4 · Likes: 233
If you use the API, remember to turn on `vl_high_resolution_images` so that even small things can be understood.
Replies: 8 · Reposts: 15 · Likes: 248
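For illustration, a minimal sketch of what turning that flag on might look like, assuming the DashScope OpenAI-compatible endpoint and the openai Python SDK; the model id and passing the flag via `extra_body` are assumptions, so check the DashScope docs for the exact shape:

```python
from openai import OpenAI

# Assumed: DashScope's OpenAI-compatible endpoint and a Qwen-VL model id.
client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3-vl-plus",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/receipt.jpg"}},
            {"type": "text",
             "text": "Read the small print at the bottom of this receipt."},
        ],
    }],
    # Vendor-specific flag from the tweet: raises the image token budget so
    # fine detail (small text, tiny UI elements) survives preprocessing.
    extra_body={"vl_high_resolution_images": True},
)
print(response.choices[0].message.content)
```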
🕹️ Qwen Image Edit 2509 comes with built-in ControlNets. 👇🏻 Here are a few cases showing how to use them in Draw Things. 1️⃣ When the canvas is empty, and the control image is in the Moodboard:
Replies: 9 · Reposts: 33 · Likes: 269
The small VL model that you want! Smaller models are coming soon as well!
🚀 Qwen3-VL-30B-A3B-Instruct & Thinking are here! Smaller size, same powerhouse performance 💪—packed with all the capabilities of Qwen3-VL! 🔧 With just 3B active params, it’s rivaling GPT-5-Mini & Claude4-Sonnet — and often beating them across STEM, VQA, OCR, Video, Agent
Replies: 32 · Reposts: 71 · Likes: 697
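A rough sketch of local inference with the new checkpoint via Hugging Face transformers; the repo id follows the announcement, while the Auto-class wiring and chat-template flow are assumptions that need a recent transformers release:

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-30B-A3B-Instruct"  # repo id assumed from the announcement
processor = AutoProcessor.from_pretrained(model_id)
# MoE checkpoint: 30B total parameters, only ~3B active per token.
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},
        {"type": "text", "text": "Summarize what this chart shows."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```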
More on how to interpret the API models: We sometimes project Plus and Flash onto open-weight models, especially when we open-source them. However, API models can evolve much faster than open-weight models. Therefore, we use date-based snapshots to update the API: when we have a
Replies: 2 · Reposts: 2 · Likes: 33
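A small sketch of how that snapshot scheme plays out from the caller's side; the dated model id below is hypothetical, so consult the model list for the snapshots that actually exist:

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

ROLLING = "qwen-plus"            # bare alias: tracks the latest API build
PINNED = "qwen-plus-2025-01-25"  # hypothetical dated snapshot: frozen behavior

# Pin the snapshot in production so an API-side upgrade can't silently change
# outputs; move to the rolling alias only after re-running your evals.
response = client.chat.completions.create(
    model=PINNED,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```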
People often ask about the relationships between the different Qwen models and the logic behind their naming. Currently, we have several distinct model families: LLM, Coder, VL, Omni, and a newer addition, Image. While our long-term goal is to unify them into a single, truly
Replies: 33 · Reposts: 54 · Likes: 545
Nice choice by Tinker to support our models. Qwen models are good for research, as we offer models in different sizes and of diverse types (including MoE and dense). We also have multimodal models like Qwen3-VL and Qwen3-Omni, and specialized models like Qwen3-Coder.
Proud to see Qwen in the first wave of supported models — and thrilled to empower researchers and developers with flexible, accessible fine-tuning tools. We’ll continue releasing powerful open models to support research, innovation, and collaboration across the community. 💡🤝
Replies: 16 · Reposts: 26 · Likes: 435