Zhe Zeng

@zhezeng0908

Followers: 867 · Following: 727 · Media: 3 · Statuses: 47

Assist. Prof. @CS_UVA | Faculty fellow @NYU_Courant | CS Ph.D @UCLA | Neurosymbolic AI, Probabilistic ML, Constraints, AI4Science | https://t.co/pZJZxyzrio

Joined August 2016
@zhezeng0908
Zhe Zeng
3 months
RT @tetraduzione: 🗓️ Deadline extended: 💥2nd June 2025!💥 We are looking forward to your works on: 🔌 #circuits and #tensor #networks…
0 replies · 7 retweets · 0 likes
@zhezeng0908
Zhe Zeng
9 months
📢 I’m recruiting PhD students @CS_UVA for Fall 2025! 🎯 Neurosymbolic AI, probabilistic ML, trustworthiness, AI for science. See my website for more details. 📬 If you're interested, apply and mention my name in your application:
zzeng.me
Assistant Professor
4 replies · 73 retweets · 226 likes
@zhezeng0908
Zhe Zeng
1 year
RT @hanzhao_ml: 🚨🚨 We are hiring! RT appreciated! Prof. Rui Song and I will recruit post-doc scientists through…
amazon.science
The program offers recent PhD graduates an opportunity to advance research while working alongside experienced scientists with backgrounds in industry and academia.
0 replies · 101 retweets · 0 likes
@zhezeng0908
Zhe Zeng
1 year
RT @HonghuaZhang2: Proposing Ctrl-G, a neurosymbolic framework that enables arbitrary LLMs to follow logical constraints (length control, i…
0 replies · 101 retweets · 0 likes
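Ctrl-G couples the LLM with a tractable sequence model to enforce such constraints during generation. As a much smaller illustration of the general idea, the hedged sketch below enforces one constraint named in the tweet, exact length control, by masking the end-of-sequence token during greedy decoding. The toy model, vocabulary size, and function names are hypothetical stand-ins, not Ctrl-G's API.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EOS = 10, 0  # hypothetical toy vocabulary; token 0 plays the role of EOS

def toy_next_token_logits(prefix):
    """Stand-in for an LLM's next-token logits (a made-up toy model)."""
    return rng.normal(size=VOCAB)

def decode_exact_length(n):
    """Greedy decoding under the logical constraint len(output) == n:
    the EOS token is masked out until exactly n tokens have been emitted."""
    prefix = []
    while len(prefix) < n:
        logits = toy_next_token_logits(prefix)
        logits[EOS] = -np.inf  # constraint mask: stopping early is forbidden
        prefix.append(int(np.argmax(logits)))
    return prefix

print(decode_exact_length(5))  # always exactly 5 tokens, none of them EOS
```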
@zhezeng0908
Zhe Zeng
1 year
RT @danielmisrael: Very excited about this work! If you are an LLM researcher frustrated by long wait times on generations, I highly recomm…
0 replies · 5 retweets · 0 likes
@zhezeng0908
Zhe Zeng
2 years
Using probabilistically sound objectives improves #WeaklySupervisedLearning 🤩 We’ll present this work at #NeurIPS in person and will be happy to chat!
@vinay_l_shukla
VINAY SHUKLA
2 years
Many approaches to weakly supervised learning are ad hoc, inexact, and limited in scope 😞 We propose Count Loss 🎉, a simple ✅, exact ✅, differentiable ✅, and tractable ✅ means of unifying count-based weakly supervised settings! See us at NeurIPS 2023!
0 replies · 1 retweet · 8 likes
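The core computation behind a count-based objective like this can be made concrete: for independent per-instance probabilities, the probability that exactly k instances are positive is a Poisson-binomial mass, computable exactly by dynamic programming and differentiable end to end. The sketch below is a minimal PyTorch illustration of that recipe, not the authors' released implementation.

```python
import torch

def count_log_prob(probs, k):
    """Exact, differentiable log P(sum_i X_i = k) for independent
    X_i ~ Bernoulli(probs[i]), via the Poisson-binomial DP.
    dp[j] holds P(j positives among the instances processed so far)."""
    dp = torch.zeros(len(probs) + 1, dtype=probs.dtype)
    dp[0] = 1.0
    for p in probs:
        shifted = torch.cat([torch.zeros(1, dtype=probs.dtype), dp[:-1]])
        dp = dp * (1 - p) + shifted * p
    return torch.log(dp[k])

# Usage: supervise a bag of 4 instances with only the weak label "2 positives".
logits = torch.randn(4, requires_grad=True)
loss = -count_log_prob(torch.sigmoid(logits), k=2)
loss.backward()  # exact gradients flow through the DP to the logits
print(loss.item(), logits.grad)
```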
@zhezeng0908
Zhe Zeng
2 years
RT @tetraduzione: I will be hiring through #ELLIS this year: 3 fully-funded PhD positions for troublemakers in #ML #AI who want to design t…
0 replies · 30 retweets · 0 likes
@zhezeng0908
Zhe Zeng
2 years
RT @zengola: LAFI@POPL 2024 Call for papers is out! Submit your probabilistic and/or differentiable programming extended abstracts (deadlin…
0 replies · 2 retweets · 0 likes
@zhezeng0908
Zhe Zeng
2 years
We run 👾CIBER on both regression and image classification benchmarks, where it achieves better accuracy and calibration! Work with @guyvdb. The implementation is available at:
github.com
Collapsed Inference for Bayesian Deep Learning. Contribute to UCLA-StarAI/CIBER development by creating an account on GitHub.
0 replies · 0 retweets · 4 likes
@zhezeng0908
Zhe Zeng
2 years
The way our collapsed sampler works is by limiting sampling to a subset of the parameters, _to guarantee efficiency_, and further pairing each sample with a conditional distribution whose marginalization can be closely approximated as mentioned above, _to improve estimation_.
1 reply · 0 retweets · 4 likes
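In Monte Carlo terms this is collapsed (Rao-Blackwellized) sampling: draw only some variables and handle the rest in closed form. Below is a hedged toy sketch of that pattern on a two-variable Gaussian, purely illustrative and far simpler than CIBER's actual sampler over network weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: E[Y^2] with X ~ N(0, 1) and Y | X=x ~ N(0.8 x, 0.6^2).
# Collapsed (Rao-Blackwellized) estimate: sample only X and replace
# f(X, Y) = Y^2 by its exact conditional expectation E[Y^2 | X],
# trading sampling noise for a closed-form marginalization.

def naive_estimate(n):
    x = rng.normal(size=n)
    y = 0.8 * x + 0.6 * rng.normal(size=n)
    return (y ** 2).mean()

def collapsed_estimate(n):
    x = rng.normal(size=n)
    return (0.6 ** 2 + (0.8 * x) ** 2).mean()  # E[Y^2 | X=x], exact

print(naive_estimate(1000), collapsed_estimate(1000))  # both approach 1.0
```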
@zhezeng0908
Zhe Zeng
2 years
This is inspired by an observation: BMA reduces to weighted volume computation, whose solvers can provide close approximations to marginalization. How close? The left integral is from BMA and has no closed-form solution, while the right one, its approximation, can be solved exactly.
[Tweet image: the BMA integral alongside its exactly solvable approximation]
1 reply · 0 retweets · 5 likes
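The flavor of "replace an intractable marginal with an integral you can solve exactly" can be shown in one dimension: swap a factor for a piecewise-constant surrogate, and the surrogate integral becomes a sum of exactly computed interval probabilities, i.e. weighted volumes. A cartoon sketch under those assumptions, not CIBER's actual solver:

```python
import numpy as np
from scipy.stats import norm

# Intractable marginal: E_w[sigmoid(w)] with w ~ N(1, 1) has no closed form.
# Surrogate: replace sigmoid by a piecewise-constant function on a grid.
# Each piece contributes (constant weight) x (exact interval probability),
# so the surrogate integral is a sum of exactly solved weighted volumes.

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

edges = np.linspace(-6.0, 8.0, 57)   # grid covering the mass of N(1, 1)
mids = 0.5 * (edges[:-1] + edges[1:])
piece_probs = np.diff(norm.cdf(edges, loc=1.0, scale=1.0))  # exact volumes
surrogate = float(np.sum(sigmoid(mids) * piece_probs))

mc = sigmoid(np.random.default_rng(0).normal(1.0, 1.0, 100_000)).mean()
print(surrogate, mc)  # both ~0.70; the surrogate integral itself is exact
```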
@zhezeng0908
Zhe Zeng
2 years
Uncertainty quantification for neural networks via Bayesian model averaging is compelling, but uses just a few samples in practice 😥 We propose 👾CIBER, a collapsed sampler to aggregate infinite NNs via volume computations, w/ better accuracy & calibration!
arxiv.org
Bayesian neural networks (BNNs) provide a formalism to quantify and calibrate uncertainty in deep learning. Current inference approaches for BNNs often resort to few-sample estimation for...
3 replies · 7 retweets · 59 likes
@zhezeng0908
Zhe Zeng
2 years
RT @tetraduzione: I have fully-funded PhD positions (3.5 yrs) for troublemakers in #ML #AI who want to design the next gen of #probabilisti…
0 replies · 72 retweets · 0 likes
@zhezeng0908
Zhe Zeng
2 years
Can we enforce a k-subset constraint in neural networks? 🤔 Our #ICLR work answers with SIMPLE 😎, a gradient estimator that allows k-subset distributions to be learned differentiably! Sadly we won't be joining #ICLR23 in person, but feel free to check out our work; any thoughts are welcome!
@KareemYousrii
Kareem Ahmed
2 years
We want an NN's output to depend on a sparse set of features, for explainability and regularization. Sampling? Non-differentiable 😞 We propose SIMPLE, a gradient estimator for the k-subset distribution w/ lower bias and variance than SoTA 😉 At ICLR 2023 🥳
0 replies · 4 retweets · 30 likes
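A straight-through sketch in the spirit of SIMPLE (not the authors' estimator): the forward pass emits a hard k-hot vector, while the backward pass routes gradients through the exact marginals of the Bernoullis conditioned on the k-subset constraint, computed with a Poisson-binomial DP. All function names are mine, and the hard forward vector below is a top-k of the marginals rather than SIMPLE's exact sample.

```python
import torch

def pb_table(probs):
    """Poisson-binomial DP: table[j] = P(sum of Bernoulli(probs) = j)."""
    dp = torch.zeros(len(probs) + 1, dtype=probs.dtype)
    dp[0] = 1.0
    for p in probs:
        shifted = torch.cat([torch.zeros(1, dtype=probs.dtype), dp[:-1]])
        dp = dp * (1 - p) + shifted * p
    return dp

def ksubset_marginals(probs, k):
    """Exact mu_i = P(x_i = 1 | sum_j x_j = k): Bernoulli marginals
    conditioned on the k-subset constraint (O(n^2 k), kept simple)."""
    z = pb_table(probs)[k]
    mu = [probs[i] * pb_table(torch.cat([probs[:i], probs[i + 1:]]))[k - 1] / z
          for i in range(len(probs))]
    return torch.stack(mu)

logits = torch.randn(6, requires_grad=True)
mu = ksubset_marginals(torch.sigmoid(logits), k=2)
hard = torch.zeros_like(mu)
hard[torch.topk(mu, 2).indices] = 1.0  # hard k-hot forward output
sample = hard + mu - mu.detach()       # backward flows through the marginals
sample.sum().backward()                # gradients reach `logits`
print(hard, logits.grad)
```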
@zhezeng0908
Zhe Zeng
3 years
RT @tetraduzione: Get in touch if you are interested in working with me on reliable and efficient probabilistic #modeling and #reasoning of…
0 replies · 10 retweets · 0 likes
@zhezeng0908
Zhe Zeng
3 years
“Inference for hybrid programs has changed dramatically with the introduction of Weighted Model Integration.” 💥🤩💥
@tetraduzione
antonio vergari ⚔️ not at #ICML2025
3 years
Awesome! There's gonna be a 2nd edition of the #probabilistic #logic #programming book by @rzf! A boon for the #NeSy and #PPL communities 💥 👉 A whole new chapter about reasoning in hybrid systems, with even a primer on #weighted #model #integration!
0 replies · 1 retweet · 5 likes
@zhezeng0908
Zhe Zeng
3 years
RT @paolo_morettin: Let's go beyond the usual inference tasks in probabilistic models. 🧐 How to compute queries like Pr(E > MC² | C)? 🤨 W…
0 replies · 7 retweets · 0 likes
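Queries like this are the territory of weighted model integration: the probability of an algebraic constraint is the integral of the weight (density) over the region the constraint carves out. A minimal hedged sketch with a uniform density and a made-up query Pr(X > Y²), solved exactly by symbolic integration; real WMI solvers handle arbitrary SMT(LRA) constraints and piecewise-polynomial weights.

```python
import sympy as sp

# WMI in miniature: the probability of an algebraic constraint is the
# integral of the weight (density) over the region it carves out.
# Made-up query: Pr(X > Y**2) for independent X, Y ~ Uniform(0, 1),
# so the weight is the constant 1 on the unit square.
x, y = sp.symbols('x y')
prob = sp.integrate(1, (x, y**2, 1), (y, 0, 1))  # exact symbolic volume
print(prob)  # 2/3
```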
@zhezeng0908
Zhe Zeng
3 years
"instead of learning to emulate the correct reasoning function, the BERT model has in fact learned to make predictions leveraging statistical features in logical reasoning problems." 💥💥💥. very interesting work!👇.
@HonghuaZhang2
Honghua Zhang
3 years
Can language models learn to reason by end-to-end training? We show that near-perfect test accuracy is deceiving: instead, they tend to learn statistical features inherent to reasoning problems. See more in @LiLiunian @TaoMeng10 @kaiwei_chang @guyvdb.
0 replies · 0 retweets · 4 likes
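The claim is easy to demonstrate in miniature: if a surface statistic correlates with the gold label in the data-generating process, a classifier can score well above chance without implementing any reasoning at all. A toy, hypothetical illustration, not the paper's experiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: suppose that, in the generated data, provable examples
# simply tend to contain more premises. A classifier reading only that
# surface statistic scores well above chance while doing no reasoning.
n_premises = np.concatenate([rng.poisson(6, 500),    # label 1: provable
                             rng.poisson(4, 500)])   # label 0: not provable
labels = np.array([1] * 500 + [0] * 500)
clf = LogisticRegression().fit(n_premises.reshape(-1, 1), labels)
print(clf.score(n_premises.reshape(-1, 1), labels))  # well above 0.5
```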
@zhezeng0908
Zhe Zeng
3 years
RT @e_mackevicius: Our ICML workshop (Beyond Bayes: Paths Towards Universal Reasoning Systems) is still accepting submissions until May 25…
beyond-bayes.github.io
ICML Workshop, July 22, 2022, Baltimore Convention Center, Ballroom 2 (Level 400)
0 replies · 5 retweets · 0 likes
@zhezeng0908
Zhe Zeng
3 years
RT @tetraduzione: I have fully-funded PhD positions for troublemakers in #ML #AI at @ancAtEd @InfAtEd @EdinburghUni who want to design th…
0 replies · 68 retweets · 0 likes