
Xiangru (Edward) Jian
@EdwardJian2
Followers: 186 · Following: 422 · Media: 7 · Statuses: 127
CS PhD student @UWCheritonCS. Visiting Researcher @ServiceNowRSRCH. A big fan of @ManCity. Working on multimodal learning, VLM/LLM agents and data management.
Waterloo, Ontario
Joined October 2022
Excited to release GraphOmni at full scale, the most comprehensive benchmark for LLMs on graph reasoning tasks. Paper: Eval Code & Data: Project Page: Huge thanks to Hao Xu @ggg43127748 for…
Excited to introduce GraphOmni, a comprehensive and extendable benchmark for evaluating Large Language Models (LLMs) on graph-theoretic reasoning tasks. Paper: Key highlights in the thread below.
RT @CamelAIOrg: CAMEL-AI Live Talk incoming! Join Wei Pang (Waterloo CS Master's, incoming CUHK-SZ PhD) as he presents Paper2Poster – t…
If you want to know more about our work on how to convert your paper into a poster (and are curious 🧐 why this man has so many 🐱), please consider joining!
CAMEL-AI Live Talk alert! Don't miss this talk by Wei Pang (Waterloo CS Master's, incoming CUHK-SZ PhD) on Paper2Poster – the first public benchmark for automated academic poster generation! Learn how PosterAgent turns 20+ page papers into sleek, editable posters for just…
RT @NewInML: New to ML research? Never published at ICML? Don't miss this! Check out the New in ML workshop at ICML 2025 – no rejections,…
RT @guohao_li: Great work by @real_weipang, @KevinQHLin, @EdwardJian2, Xi He, and @philiptorr! Wish I could have had this during my PhD stu…
Please check out our latest work on poster generation from papers using a multi-agent system.
Thanks @_akhaliq for sharing our work! Thrilled to introduce Paper2Poster – automatically transform your full paper into a polished academic poster! Code: Paper: Website: Our wonderful…
RT @real_weipang: Thanks @_akhaliq for sharing our work! Thrilled to introduce Paper2Poster – automatically transform your full paper i…
RT @HuggingPapers: Paper2Poster just released on Hugging Face! Automatically create posters from your scientific papers. Addresses both the poster c…
RT @TsingYoga: Guess it's the first open-source multi-turn e2e RL for GUI agents from academia, and it's based on UI-TARS-1.5-7B. If you w…
RT @KevinQHLin: Thanks @_akhaliq for sharing our work! Teach multimodal models "To think, or Not to think" -- TON. Selective Reasoning vi…
RT @_akhaliq: Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models
RT @RajeswarSai: Congrats @TianbaoX and team on this exciting work and release! We're happy to share that Jedi-7B performs on par with UI…
Please check out our benchmark on GUI grounding, UI-Vision, which was just accepted at ICML 2025. Please reach out if you want to know more!
Excited to share that UI-Vision has been accepted at ICML 2025! We have also released the UI-Vision grounding datasets. Test your agents on them now! Dataset: #ICML2025 #AI #DatasetRelease #Agents
RT @Alibaba_Qwen: One line. A full webpage. No hassle. Introducing Web Dev – the ultimate tool for building stunning frontend webpages &…
RT @_akhaliq: Alibaba just dropped ZeroSearch on Hugging Face. Incentivize the Search Capability of LLMs without Searching.
RT @MingchenZhuge: Top Secret. Agent-as-a-Judge can be a great open-source #DeepWiki by just adding 2 code files. Swap github → openwiki…