
Sai Rajeswar
@RajeswarSai
534 Followers · 1K Following · 4 Media · 334 Statuses
Staff Research Scientist @ ServiceNow. Adjunct Professor @UMontreal and Core Member @mila_quebec. Prev research scientist intern @GoogleDeepMind.
Montreal, QC
Joined September 2019
I am now an associate member at @Mila_Quebec, and am looking to co-supervise a graduate student in 2025. Kindly apply if interested, and spread the word!
Meet our student community! Interested in joining Mila? Our annual supervision request process for admission in the fall of 2025 is starting on October 15, 2024. More information here
If you are at #CVPR, make sure to catch @joanrod_ai and hear about scalable SVG generation straight from the source! Exhibition Hall D, Poster #31.
StarVector poster happening now at CVPR! Come by poster #31 if you want to chat about vector graphics, image-to-code generation, or just say hi!
A timely and compelling read by @jxmnop: a much-needed call to focus our research efforts on deeper questions, riskier bets, and a bit of long-term impact. #AIResearch
## The case for more ambition. i wrote about how AI researchers should ask bigger and simpler questions, and publish fewer papers:
RT @NewInML: New to ML research? Never published at ICML? Don't miss this! Check out the New in ML workshop at ICML 2025: no rejections, …
RT @geoffreyhinton: Congratulations to @Yoshua_Bengio on launching @LawZero_, a research effort to advance safe-by-design AI, especially a…
RL closes the loop on inverse rendering! By letting a VLM see its own SVG renderings, we push sketch-to-vector generation to near-perfect fidelity and code compactness. Congrats to @joanrod_ai, who is rolling it out one reward at a time. Please read our latest preprint!
Thanks @_akhaliq for sharing our work! Excited to present our next generation of SVG models, now using Reinforcement Learning from Rendering Feedback (RLRF). We think we cracked SVG generalization with this one. Go read the paper! More details on
Do current large multimodal models really "understand" the structure behind a complex sketch? StarFlow converts hand-drawn workflow diagrams into executable JSON flows, testing whether VLMs truly grasp structure. #multimodalA @patricebechard @PerouzT
New paper from our team at @ServiceNowRSRCH! StarFlow: Generating Structured Workflow Outputs From Sketch Images. We use VLMs to turn hand-drawn sketches and diagrams into executable workflows.
RT @KevinQHLin: How can we teach multimodal models or agents "When to Think" like humans? Check out: Think-or-Not (TON). Selective Reas…
Congrats @TianbaoX and team on this exciting work and release! We're happy to share that Jedi-7B performs on par with the UI-Tars-72B agent on our challenging UI-Vision benchmark, with 10x fewer parameters! Incredible. Dataset:
RT @jxmnop: did you know people have been training neural networks on text since 2003? everyone talks about Attention Is All You Need, but…
RT @joanrod_ai: The UI-Vision Benchmark is out on HuggingFace. Now accepted at ICML 2025. Go test your UI Agents…
RT @PShravannayak: Excited to share that UI-Vision has been accepted at ICML 2025! We have also released the UI-Vision grounding datas…
Still wondering who came up with the idea to sneak a banana-mustache sample into Figure 1 of the paper; it cracks me up every time I see it. Iconic figure, and good old research days. cc: @aagrawalAA
RT @tscholak: Today Jensen Huang announced SLAM Lab's newest model on the @HelloKnowledge stage: Apriel-Nemotron-15B-Thinker. A lean, m…
RT @soumyesinghal: The full report for Llama-Nemotron Nano, Super, and Ultra is out, covering reasoning SFT, large-scale RL, and compreh…
RT @aagrawalAA: Check out our latest work (published at CVPR 2025) on learning language-controllable visual representations.
RT @sivareddyg: Incredibly proud of my students Ada Tur and Gaurav Kamath for winning a SAC award at #NAACL2025 for their work on assessing…
RT @Ahmed_Masry97: One of our @icmlconf papers received an incomplete, irrelevant, and dismissive review. We flagged it to the scientific i…
RT @joanrod_ai: Excited to be at ICLR 2025 in Singapore this week! Want to connect? Ping me! Main Conference Papers: BigDocs. Thu…