Tao Meng Profile
Tao Meng

@TaoMeng10

Followers: 179 · Following: 40 · Media: 2 · Statuses: 22

PhD @UCLANLP. NLP & ML research focus: injecting constraints into NLP/ML models. https://t.co/mjHcc1ouiV

Los Angeles
Joined May 2019
@TaoMeng10
Tao Meng
3 years
Happy to share our work on Controllable Text Generation with NeurAlly-Decomposed Oracle (NADO), accepted at #NeurIPS2022! Looks like our model is as excited about NeurIPS as we are. See more in the thread below.
[Tweet includes two images]
3 replies · 22 retweets · 102 likes
@TaoMeng10
Tao Meng
2 years
RT @HonghuaZhang2: Reliable control of large language models is a crucial problem. We propose GeLaTo (Generating Language with Tractable Co….
arxiv.org
Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional...
0 replies · 38 retweets · 0 likes
@TaoMeng10
Tao Meng
3 years
RT @kaiwei_chang: ACL/ICML highlights threats of LMs like ChatGPT generating paper content. However, I'm more concerned about reviews. Exam….
0 replies · 22 retweets · 0 likes
@TaoMeng10
Tao Meng
3 years
Welcome to our controllable text generation poster this afternoon!
@VioletNPeng
Violet Peng
3 years
Dear friends at #NeurIPS2022, #PlusLab_UCLANLP will be presenting three papers at the main conference and one at the ENLSP workshop. This is a thread of some details about our presentations. My students and I are looking forward to seeing you at NOLA! (1/5).
0 replies · 0 retweets · 3 likes
@TaoMeng10
Tao Meng
3 years
RT @sidilu_pluslab: It's an honor of mine to share our work InsNet! We propose a unified, efficient, and powerful framework for both sequen….
0 replies · 5 retweets · 0 likes
@TaoMeng10
Tao Meng
3 years
RT @VioletNPeng: I’m super excited to share several fresh off the press works on controllable (creative) generation this Thursday at Stanfo….
0 replies · 12 retweets · 0 likes
@TaoMeng10
Tao Meng
3 years
The code for reproducing the results is released at the link below. Special thanks to my collaborators Sidi Lu, Nanyun Peng, and Kai-Wei Chang! @sidilu_pluslab @VioletNPeng @kaiwei_chang
github.com
Constrained Decoding Project — MtSomeThree/constrDecoding.
0 replies · 1 retweet · 0 likes
@TaoMeng10
Tao Meng
3 years
We conduct experiments on two applications: (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrating that our framework guides the base model toward the given oracles (both rule-based and classifier-based) while maintaining high generation quality.
1 reply · 1 retweet · 0 likes
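For concreteness, a rule-based oracle for the lexical-constraint setting can be as simple as a keyword check; the function below is an illustrative stand-in, not code from the paper.

    # Illustrative rule-based oracle: C(x) = 1 iff every required keyword
    # appears in the generated text (a stand-in, not the paper's code).
    def lexical_oracle(text: str, keywords: list[str]) -> int:
        tokens = set(text.lower().split())
        return int(all(k.lower() in tokens for k in keywords))

    print(lexical_oracle("the cat sat on the mat", ["cat", "mat"]))  # 1
    print(lexical_oracle("the dog barked", ["cat", "mat"]))          # 0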
@TaoMeng10
Tao Meng
3 years
The token-level guidance is approximated by a neural model trained on examples sampled from the base model. We present the closed-form optimal solution for incorporating the token-level guidance into the base model, and further provide a theoretical analysis of the approximation.
1 reply · 1 retweet · 1 like
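In symbols (notation assumed here; the thread itself does not define it): write p for the base model and C for the sequence-level Boolean oracle. The token-level guidance R^C is the probability that a completion of a prefix, sampled from p, satisfies C, and the closed-form solution reweights each next-token probability by how much appending that token raises this success probability:

    R^C(x_{<i}) \;=\; \mathbb{E}_{x \sim p(\,\cdot \mid x_{<i})}\big[\,C(x)\,\big],
    \qquad
    q(x_i \mid x_{<i}) \;\propto\; p(x_i \mid x_{<i}) \cdot \frac{R^C(x_{\le i})}{R^C(x_{<i})}.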
@TaoMeng10
Tao Meng
3 years
How to control models so they satisfy pre-defined sentence-level attributes is an open challenge. Given a pre-trained language model and a sequence-level Boolean oracle function, we propose to decompose the oracle into token-level guidance that steers the base model during generation.
1 reply · 1 retweet · 0 likes
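A runnable toy sketch of this decomposition at decoding time; the base model and guidance network below are random stand-ins, not the released implementation.

    # Toy sketch of oracle-decomposed decoding: reweight the base model's
    # next-token distribution by an (approximate) token-level guidance model.
    import numpy as np

    VOCAB = 5  # toy vocabulary

    def base_next_token_probs(prefix):
        # Stand-in for p(x_i | x_<i) from a pretrained LM.
        rng = np.random.default_rng(len(prefix))
        p = rng.random(VOCAB) + 1e-3
        return p / p.sum()

    def guidance_next(prefix):
        # Stand-in for R^C(x_<=i): estimated probability that a completion
        # satisfies the oracle after appending each candidate token.
        rng = np.random.default_rng(hash(tuple(prefix)) % (2**32))
        return rng.random(VOCAB) * 0.9 + 0.05

    def steered_next_token_probs(prefix):
        # q(x_i | x_<i) ∝ p(x_i | x_<i) * R^C(x_<=i); the shared factor
        # R^C(x_<i) cancels under normalization.
        q = base_next_token_probs(prefix) * guidance_next(prefix)
        return q / q.sum()

    print(steered_next_token_probs([1, 3]))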
@TaoMeng10
Tao Meng
3 years
RT @LiLiunian: Happy to share our work on Object Detection in the Wild through Grounded Language Image Pre-training (GLIP) (Oral at #CVPR20….
0 replies · 41 retweets · 0 likes
@TaoMeng10
Tao Meng
3 years
RT @HonghuaZhang2: Can language models learn to reason by end-to-end training? We show that near-perfect test accuracy is deceiving: instea….
arxiv.org
Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained end-to-end to solve logical reasoning problems presented in natural language? We attempt to answer this...
0 replies · 58 retweets · 0 likes
@TaoMeng10
Tao Meng
4 years
RT @uclanlp: UCLA Chang's (@kaiwei_chang) and Plus lab (@VioletNPeng) will present papers and a tutorial on topics including Fairness & Ro….
0 replies · 29 retweets · 0 likes
@TaoMeng10
Tao Meng
4 years
RT @natarajan_prem: Thrilled to participate in the launch event of the Science Hub at UCLA. Looking forward to the many advances that this….
0 replies · 9 retweets · 0 likes
@TaoMeng10
Tao Meng
4 years
We also provide a theoretical analysis of the tightness of the polytopes and the reliability of the mined constraints. Paper/Code: linked below. 4/4
web.cs.ucla.edu
An Integer Linear Programming Framework for Mining Constraints from Data. Tao Meng and Kai-Wei Chang, in ICML, …
0 replies · 0 retweets · 1 like
@TaoMeng10
Tao Meng
4 years
We show that our approach learns to solve 9x9 Sudoku and minimal spanning tree problems from examples, without being given the underlying rules. It can also be integrated with a neural net to learn the hierarchical label structure of a multi-label classification task. 3/4
1 reply · 0 retweets · 2 likes
@TaoMeng10
Tao Meng
4 years
We present an integer linear programming (ILP) framework for constraint mining. We formulate inference of structured labels as an ILP; given the coefficients of the objective function and the corresponding solutions, we mine constraints by estimating the outer and inner polytopes of the feasible set. 2/4
[Tweet includes one image]
1 reply · 0 retweets · 2 likes
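Spelled out (symbols assumed here; the tweet compresses them): inference of a structured label y is the ILP

    \max_{y \in \{0,1\}^n} \; c^\top y \quad \text{subject to} \quad A y \le b,

and mining constraints means recovering (A, b) from example pairs (c^{(k)}, y^{(k)}) of objective coefficients and optimal solutions: the convex hull of the observed y^{(k)} gives an inner estimate of the feasible polytope (those points are certainly feasible), while any point that scores strictly better than y^{(k)} under c^{(k)} must be infeasible, which yields an outer estimate.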
@TaoMeng10
Tao Meng
4 years
Can a model learn problem structure/constraints from data? E.g., given pairs of an adjacency matrix and the corresponding minimal spanning tree, can a model learn to solve MST? Check out our #ICML2021 paper on constraint mining with ILP, w/ @kaiwei_chang. 1/4
web.cs.ucla.edu
An Integer Linear Programming Framework for Mining Constraints from Data. Tao Meng and Kai-Wei Chang, in ICML, …
1 reply · 5 retweets · 24 likes
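A runnable toy sketch of the outer-polytope idea (illustrative, not the paper's algorithm): for each candidate direction a, the tightest bound consistent with the data is b = max_k a·y^{(k)}, since every observed optimum must remain feasible under a·y ≤ b.

    # Toy constraint mining: keep, for each candidate direction, the tightest
    # linear bound that every observed optimal solution still satisfies.
    # The candidate pool and toy data are illustrative assumptions.
    import itertools
    import numpy as np

    def mine_tightest_constraints(solutions, directions):
        # For direction a, b = max over observed optima of a·y gives the
        # tightest constraint a·y <= b that keeps all observations feasible.
        return [(a, max(int(a @ y) for y in solutions)) for a in directions]

    # Hidden rule in this toy setting: exactly one coordinate equals 1.
    solutions = [np.array(v) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]

    # Candidate directions with 0/1 coefficients; the all-ones direction
    # recovers the informative constraint y1 + y2 + y3 <= 1.
    directions = [np.array(a)
                  for a in itertools.product([0, 1], repeat=3) if any(a)]

    for a, b in mine_tightest_constraints(solutions, directions):
        print(f"{a} · y <= {b}")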
@TaoMeng10
Tao Meng
5 years
RT @kaiwei_chang: UCLA-NLP will present at #acl2020nlp. 👇Check out our ACL/TACL papers and pre-recorded talks at… Lo….
0 replies · 9 retweets · 0 likes
@TaoMeng10
Tao Meng
5 years
RT @kaiwei_chang: The problem is much more than data bias. ML systems can be poorly calibrated (…) and bias in the….
0 replies · 48 retweets · 0 likes