Xingwei Tan Profile
Xingwei Tan

@Xingwei__Tan

Followers 23 · Following 36 · Media 1 · Statuses 18

Joined October 2024
Xingwei Tan (@Xingwei__Tan) · 1 month
📃 Enhancing Logical Reasoning in Language Models via Symbolically-Guided Monte Carlo Process Supervision. 🌐 With @akhter_elahi, @xrysoflhs, and @nikaletras.
Xingwei Tan (@Xingwei__Tan) · 1 month
- Symbolic ReAct-guided trajectories also improve multi-step reasoning LLMs over unguided trajectories (1%-4%) on out-of-domain evaluation (claim verification datasets that require deductive and abductive reasoning).
Xingwei Tan (@Xingwei__Tan) · 1 month
We found that:
- Symbolic ReAct-guided trajectories improve multi-step reasoning LLMs over unguided trajectories (2%-6%) on FOLIO and LogicAsker.
Xingwei Tan (@Xingwei__Tan) · 1 month
We investigate guided trajectory sampling in a Symbolic ReAct format, both for Monte Carlo estimation and for collecting the trajectories used to fine-tune multi-step reasoning LLMs.
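As a rough sketch of what "guided" sampling could look like (the Thought/Action step schema and the rule names below are illustrative assumptions, not the paper's exact format), sampled trajectories can be validated against a symbolic step grammar before they are kept:

```python
import re

# Hypothetical "Symbolic ReAct" step schema: a natural-language thought
# paired with a symbolic action that can be checked mechanically.
STEP_PATTERN = re.compile(
    r"Thought: (?P<thought>.+)\nAction: (?P<rule>\w+)\((?P<args>[^)]*)\)"
)
KNOWN_RULES = frozenset({"ModusPonens", "ModusTollens", "Conjunction"})

def is_guided(trajectory: str) -> bool:
    """Keep a sampled trajectory only if every step parses into the
    Thought/Action format and names a recognized inference rule."""
    steps = trajectory.strip().split("\n\n")
    matches = [STEP_PATTERN.fullmatch(s) for s in steps]
    return all(m is not None and m["rule"] in KNOWN_RULES for m in matches)

guided = "Thought: P and P -> Q together give Q.\nAction: ModusPonens(P, P -> Q)"
unguided = "Thought: I will just guess the answer.\nAction: Guess(Q)"
```

Under this sketch, `is_guided(guided)` accepts the well-formed step while `is_guided(unguided)` rejects the unrecognized action, so only structurally sound trajectories would enter the fine-tuning pool.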
Xingwei Tan (@Xingwei__Tan) · 1 month
Can LLMs learn to reason in a sound and accurate way by fine-tuning on automatically sampled symbolic reasoning trajectories?
Xingwei Tan (@Xingwei__Tan) · 1 month
However, the trajectories used to fine-tune the multi-step reasoning #LLMs and PRMs are unguided, leading the #LLMs to generate verbose reasoning traces in informal language.
Xingwei Tan (@Xingwei__Tan) · 1 month
Multi-step reasoning #LLMs are often fine-tuned on trajectories that are filtered with process reward models (PRMs) automatically trained on Monte Carlo estimation-generated pseudo labels.
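The Monte Carlo pseudo-labeling described above can be sketched roughly as follows (function names and the toy rollout model are assumptions for illustration, not the paper's code): a step is scored by the fraction of rollouts from that point that reach a correct final answer.

```python
import random

def mc_step_label(prefix_steps, rollout_fn, check_fn, n_rollouts=8):
    """Monte Carlo pseudo-label for a reasoning step: complete the
    partial trajectory n times and score the step by the fraction of
    completions that reach a correct final answer."""
    hits = sum(int(check_fn(rollout_fn(prefix_steps))) for _ in range(n_rollouts))
    return hits / n_rollouts

# Toy stand-in for an LLM: rollouts succeed 70% of the time after a
# "good step" and only 10% of the time after a "bad step".
def toy_rollout(prefix_steps):
    p = 0.7 if prefix_steps[-1] == "good step" else 0.1
    return "correct" if random.random() < p else "wrong"

random.seed(0)
good = mc_step_label(["premise", "good step"], toy_rollout, lambda c: c == "correct")
bad = mc_step_label(["premise", "bad step"], toy_rollout, lambda c: c == "correct")
```

A PRM trained to predict these step labels can then filter candidate trajectories, e.g. keeping only those whose steps score above a threshold, before fine-tuning.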
Xingwei Tan (@Xingwei__Tan) · 3 months
Additionally, the system features conversation summarization capabilities to distill critical information from lengthy exchanges, as well as persona analysis to characterize the speakers involved.
Xingwei Tan (@Xingwei__Tan) · 3 months
We also implemented source identification methodologies to pinpoint key segments within the conversation that contribute to the LLM's analytical output.
Xingwei Tan (@Xingwei__Tan) · 3 months
The system leverages Large Language Models (LLMs) for detailed conversational analysis and provides explanatory justifications for identified instances of harmful speech.
Xingwei Tan (@Xingwei__Tan) · 3 months
In partnership with the UK Forensic Network, we have developed a system designed to analyze conversational data and identify instances of harmful speech, with a specific focus on Violence Against Women and Girls (VAWG) content.
Xingwei Tan (@Xingwei__Tan) · 3 months
Join us for our oral talk on "Cascading Large Language Models for Salient Event Graph Generation", 14:00 on Thursday, Ruidoso room, Albuquerque Convention Center. #NAACL2025
🔗 arxiv.org — Generating event graphs from long documents is challenging due to the inherent complexity of multiple tasks involved such as detecting events, identifying their relationships, and reconciling...
Xingwei Tan (@Xingwei__Tan) · 4 months
We implement source identification methods to pinpoint key segments within the conversation that contribute to the LLM's analytical output. This platform also serves as a research tool, enabling users to choose datasets and integrate diverse models for performance evaluation.
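One generic way to implement such source identification is leave-one-out attribution (a sketch under assumed interfaces, not necessarily the system's actual method): rank each conversation segment by how much the model's harm score drops when that segment is removed.

```python
def attribute_segments(segments, score_fn):
    """Leave-one-out attribution: rank each conversation segment by how
    much the harm score drops when that segment is ablated."""
    full_score = score_fn(segments)
    drops = []
    for i, seg in enumerate(segments):
        ablated = segments[:i] + segments[i + 1:]
        drops.append((full_score - score_fn(ablated), seg))
    return sorted(drops, reverse=True)

# Stub scorer counting flagged keywords, standing in for the LLM's
# harm judgment (the keyword list is purely illustrative).
FLAGGED = {"threat"}
def toy_score(segments):
    return sum(any(w in s for w in FLAGGED) for s in segments) / max(len(segments), 1)

ranked = attribute_segments(
    ["hi there", "that is a threat", "see you later"], toy_score)
```

The top-ranked segment is the one whose removal reduces the score most, i.e. the segment contributing most to the analytical output.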
Xingwei Tan (@Xingwei__Tan) · 4 months
We leverage Large Language Models (LLMs) for detailed conversational analysis; the system provides explanatory justifications for identified instances of harmful speech.
Xingwei Tan (@Xingwei__Tan) · 4 months
In partnership with the UK Forensic Network, we have developed a system designed to analyze conversational data and identify instances of harmful speech, with a specific focus on Violence Against Women and Girls (VAWG) content. Check out our #naacl2025 demo.
🔗 arxiv.org — Detecting toxic language including sexism, harassment and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus...