Silin Gao Profile
Silin Gao

@silin_gao

Followers: 297 · Following: 52 · Media: 31 · Statuses: 53

PhD @ICepfl NLP Lab, Advisor @ABosselut | Intern @TsinghuaCoAI @Zhou_Yu_AI | Prev @Tsinghua_Uni | Knowledge Intensive #NLProc | Dialogue Systems | #AI

NLP Lab, IC, EPFL, Switzerland
Joined September 2021
@silin_gao
Silin Gao
13 days
Thanks to my internship advisors Emmanuel Abbe and Samy Bengio at @Apple, and my PhD advisor @ABosselut at @EPFL for supervising this project! Paper:
@silin_gao
Silin Gao
13 days
On perturbation benchmarks of grade-school mathematics (GSM-Symbolic & GSM-Plus), AbstRaL nearly reverses the performance drop caused by variations of input numbers, and also significantly mitigates the interference of distracting conditions added to the perturbed test samples.
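The number-variation perturbations described above can be illustrated with a toy sketch: lift a problem's numbers into template slots, then resample them to create fresh test instances. The template and value ranges here are hypothetical stand-ins, not drawn from GSM-Symbolic or GSM-Plus.

```python
import random

# A GSM-style problem template with its numbers lifted into slots.
TEMPLATE = ("Ali has {a} apples. He buys {b} more and gives away {c}. "
            "How many apples does Ali have now?")

def answer(a, b, c):
    # Ground-truth arithmetic for this template.
    return a + b - c

def perturb(seed=None):
    # Sample fresh numbers to create a perturbed test instance,
    # keeping the underlying reasoning structure identical.
    rng = random.Random(seed)
    a, b = rng.randint(5, 50), rng.randint(1, 20)
    c = rng.randint(1, a)  # keep the answer non-negative
    return TEMPLATE.format(a=a, b=b, c=c), answer(a, b, c)

question, gold = perturb(seed=0)
```

A model that memorized the surface form of one instance may fail on resampled numbers, which is exactly the robustness gap such benchmarks probe.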
@silin_gao
Silin Gao
13 days
Results on various seed LLMs, including the Mathstral, Llama3 and Qwen2.5 series, consistently demonstrate that AbstRaL reliably augments reasoning robustness, especially w.r.t. shifts of input conditions in existing test samples that may have been leaked through data contamination.
@silin_gao
Silin Gao
13 days
To address the weaknesses of in-context learning and supervised fine-tuning, AbstRaL uses reinforcement learning (RL) with a new set of rewards that closely guide the construction of abstractions during generation, which effectively improves the faithfulness of abstract reasoning.
@silin_gao
Silin Gao
13 days
AbstRaL adopts a granularly-decomposed abstract reasoning (GranulAR) schema, which enables LLMs to gradually construct the problem abstraction within a fine-grained reasoning chain, using their pre-learned strategies of chain-of-thought and Socratic problem decomposition.
@silin_gao
Silin Gao
13 days
Instead of expensively creating more synthetic data to “instantiate” variations of problems, our approach learns to “abstract” reasoning problems. This not only helps counteract distribution shifts but also facilitates the connection to symbolic tools for deriving solutions.
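The "abstract, then solve symbolically" idea can be sketched as follows. This is a minimal conceptual illustration, not the actual AbstRaL pipeline: the abstraction format and the restricted-`eval` "solver" are hypothetical placeholders for a learned abstractor and a real symbolic tool.

```python
# Abstract representation of a word problem: numbers are replaced by
# symbols, and the solution is a symbolic expression over those symbols,
# e.g. "A train travels x km/h for y hours. How far does it go?"
abstraction = {
    "symbols": ["x", "y"],
    "solution_expr": "x * y",
}

def solve(abstraction, instantiation):
    # A symbolic tool (here, eval over a restricted namespace) derives
    # the answer for ANY instantiation of the symbols, so shifting the
    # input numbers cannot break the reasoning itself.
    return eval(abstraction["solution_expr"], {"__builtins__": {}}, instantiation)

assert solve(abstraction, {"x": 60, "y": 2}) == 120
assert solve(abstraction, {"x": 37, "y": 5}) == 185
```

Once the abstraction is correct, every numeric variant of the problem is solved by the same expression, which is why abstraction counteracts distribution shifts over input numbers.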
@silin_gao
Silin Gao
13 days
NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to augment LLMs’ reasoning robustness, by promoting their abstract thinking with granular reinforcement learning.
@silin_gao
Silin Gao
3 months
Thanks to my advisor @ABosselut for supervising this project, and collaborators @limi_rs, @smamooler, @SyrielleMontar1, Sheryl and @Sony for their support! Paper: Project Page: EPFL NLP Lab:
@silin_gao
Silin Gao
3 months
Our study of several test cases illustrates that visual narratives generated by VLMs still suffer from obvious inconsistency flaws, even with the augmentation of knowledge constraints, which calls for future work on more robust visual narrative generators.
@silin_gao
Silin Gao
3 months
We also find a positive correlation between the knowledge constraints and the output visual narrative, w.r.t. their alignment to the input textual narrative, which highlights the significance of planning intermediate constraints to promote faithful visual narrative generation.
@silin_gao
Silin Gao
3 months
Our human evaluation (on five typical aspects) supports the results of our automatic evaluation. Moreover, compared to traditional metrics based on CLIP similarity (CLIP-I and CLIP-T), our proposed alignment and consistency metrics correlate better with human evaluation.
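For context, CLIP-based metrics such as CLIP-I and CLIP-T score cosine similarity between embeddings (generated image vs. reference image, or image vs. text). A minimal sketch of that core computation, assuming the embeddings have already been produced by an encoder (the toy vectors below are illustrative, not real CLIP features):

```python
import math

def cosine_similarity(u, v):
    # Core operation behind CLIP-I / CLIP-T style metrics:
    # cos(u, v) = <u, v> / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for CLIP image/text features.
img_emb = [0.1, 0.3, 0.5]
txt_emb = [0.2, 0.6, 1.0]
score = cosine_similarity(img_emb, txt_emb)
```

Because such a score rewards any embedding overlap, it can be skewed by incidental details in the reference image, which motivates metrics tied to explicit constraints instead.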
@silin_gao
Silin Gao
3 months
Our evaluation results on three VLMs all show that learning with VinaBench constraints improves visual narrative consistency and alignment to input text. However, visual narratives generated by VLMs still fall behind the gold references, indicating large room for improvement.
@silin_gao
Silin Gao
3 months
Based on VinaBench constraints, we propose VQA-based metrics to closely evaluate the consistency of visual narratives and their alignment to the input text. Our metrics avoid skewing the evaluation toward irrelevant details in the gold references, and cover checks for inconsistency flaws.
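The VQA-based scoring idea can be sketched as follows: ask a visual question per constraint and score the fraction of constraints the generated image satisfies. The questions, answers, and stub VQA function below are hypothetical placeholders, not VinaBench's actual metrics.

```python
def vqa_alignment_score(constraints, vqa_answer):
    # For each knowledge constraint, ask a yes/no question about the
    # generated image via a VQA model; the score is the fraction of
    # constraints the image satisfies.
    hits = sum(1 for question, expected in constraints
               if vqa_answer(question) == expected)
    return hits / len(constraints)

# Hypothetical constraints derived from the textual narrative.
constraints = [
    ("Is the girl holding a red umbrella?", "yes"),
    ("Is the scene set at night?", "yes"),
    ("Is there a dog in the frame?", "no"),
]

# Stub VQA model for illustration (a real metric would query a VLM
# on the generated image).
answers = {
    "Is the girl holding a red umbrella?": "yes",
    "Is the scene set at night?": "no",
    "Is there a dog in the frame?": "no",
}
score = vqa_alignment_score(constraints, answers.get)  # 2 of 3 satisfied
```

Scoring only against explicit constraints keeps the metric focused on narrative-relevant content rather than incidental details of the gold reference images.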
@silin_gao
Silin Gao
3 months
We prompt hybrid VLMs and LLMs to annotate the VinaBench knowledge constraints. Our expert study verifies that the annotations are reliable, with high acceptance rates for all types of constraint labels, each with a fairly low percentage of disagreement cases between the experts.
@silin_gao
Silin Gao
3 months
VinaBench augments existing visual-textual narrative pairs with discourse and commonsense knowledge constraints. The former traces static and dynamic features of the narrative process, while the latter consists of entity links that bridge the visual-textual manifestation gap.
@silin_gao
Silin Gao
3 months
NEW PAPER ALERT: Generating visual narratives to illustrate textual stories remains an open challenge, due to the lack of knowledge to constrain faithful and self-consistent generations. Our #CVPR2025 paper proposes a new benchmark, VinaBench, to address this challenge.
@silin_gao
Silin Gao
4 months
RT @bkhmsi: 🚨 New Preprint!! LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this al…
@silin_gao
Silin Gao
5 months
RT @mismayilsoy: Are LLMs as linguistically productive and systematic in morphologically-rich languages as humans? No 🤨 Our new NAACL…
@silin_gao
Silin Gao
6 months
RT @smamooler: 🚀 Introducing PICLe: a framework for in-context named-entity detection (NED) using pseudo-annotated demonstrations. 🎯 No hum…
@silin_gao
Silin Gao
6 months
RT @bkhmsi: 🚨 New Paper! Can neuroscience localizers uncover brain-like functional specializations in LLMs? 🧠🤖 Yes! We analyzed 18 LLMs a…