
Benjie Wang (@benjiewang_cs)
Postdoc at UCLA StarAI Lab @UCLAComSci
Joined November 2019 · 97 Followers · 199 Following · 9 Media · 27 Statuses
Also check out the awesome paper "Sum of Squares Circuits" by @loreloc_, Stefan Mengel, and @tetraduzione, which concurrently showed the separation between monotone and squared circuits. It is also at AAAI 2025 today, poster #840!
Inception PCs subsume both monotone and squared PCs and are strictly more expressive than either. We show this leads to improved downstream modeling performance when normalizing for FLOPs.
To overcome these limitations, we propose Inception PCs, a novel tractable probabilistic model representing a deep *sum-of-square-of-sums*. Inception PCs explicitly introduce two types of latent variables into the circuit for the mixtures encoded at sum nodes.
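A schematic of the functional form, in my own notation (the exact parameterization of the two latent types is in the paper): with discrete latents $z$ and $w$ and a circuit $c$ with real weights,

$$p(\mathbf{x}) \;=\; \frac{1}{Z}\sum_{z}\Big(\sum_{w} c(\mathbf{x}, z, w)\Big)^{2}, \qquad Z \;=\; \sum_{\mathbf{x}'}\sum_{z}\Big(\sum_{w} c(\mathbf{x}', z, w)\Big)^{2},$$

a deep sum-of-square-of-sums: squaring makes each inner term non-negative, and the outer (monotone) sum over $z$ preserves non-negativity while going strictly beyond a single square.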
We show that the reverse also holds (!!) - some tractable distributions representable as monotone circuits cannot be compactly expressed as a square.
On the other hand, squared circuits allow the use of arbitrary real parameters by *squaring* the circuit output. It was previously proven that squared circuits can be exponentially more expressive than monotone circuits!
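In symbols (my notation, for intuition): a circuit $c$ with arbitrary real weights still induces a valid density after squaring and renormalizing, assuming the normalizer is finite:

$$p(\mathbf{x}) \;=\; \frac{c(\mathbf{x})^{2}}{\sum_{\mathbf{x}'} c(\mathbf{x}')^{2}},$$

so negative weights can create cancellations inside $c$ that a monotone circuit cannot express directly.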
Probabilistic circuits are deep *tractable* probabilistic models that allow efficient and exact computation of marginals. Traditionally, monotone circuits enforce non-negativity by using non-negative weights. Paper:
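Here is a minimal sketch of the idea (mine, not code from the paper): a tiny monotone circuit over two binary variables, where non-negative mixture weights guarantee a non-negative output, and exact marginals come from a single feedforward pass.

```python
# A minimal sketch of a monotone probabilistic circuit over two binary
# variables X1, X2: a sum (mixture) of two products of Bernoulli leaves.
# Non-negative weights guarantee a non-negative output.

def leaf(p, x):
    # Bernoulli leaf: returns P(X = x); marginalizing X means passing x = None.
    if x is None:
        return 1.0          # sums over both states of the leaf
    return p if x == 1 else 1.0 - p

def circuit(x1, x2):
    # Two product nodes combined by a sum node with non-negative weights.
    prod_a = leaf(0.9, x1) * leaf(0.2, x2)
    prod_b = leaf(0.1, x1) * leaf(0.7, x2)
    return 0.6 * prod_a + 0.4 * prod_b   # weights sum to 1 => normalized

print(circuit(1, 0))       # joint probability P(X1=1, X2=0)
print(circuit(1, None))    # exact marginal P(X1=1) in one feedforward pass
```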
Circuits use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of the output? Check out our poster "On the Relationship between Monotone and Squared Probabilistic Circuits" at AAAI 2025 **today**, 12:30pm-2:30pm, poster #841.
RT @danielmisrael: “That’s one small [MASK] for [MASK], a giant [MASK] for mankind.” – [MASK] Armstrong. Can autoregressive models predict…
Thanks to my amazing co-authors Denis Mauá, @guyvdb, and YooJung Choi. Hope to see you at the poster session!
Along the way we also show a bunch of other cool results, like:
- More efficient algorithms for causal inference on circuits
- New circuit properties
- Separation/hardness results
Building upon the prior PC atlas, our algebraic atlas provides a comprehensive approach for deriving **efficient algorithms** and **tractability conditions** for arbitrary compositional queries. Try our atlas the next time you come across a new query!
Just as circuits serve as a unifying representation of models, we show how you can express many queries as compositions of just a few basic operations: aggregation (marginalization, max, etc.), product, and elementwise mappings.
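As a concrete illustration (my example, not notation from the paper): the KL divergence between two circuit distributions uses exactly these three primitives,

$$\mathrm{KL}(p \,\|\, q) \;=\; \sum_{\mathbf{x}} p(\mathbf{x}) \cdot \log\frac{p(\mathbf{x})}{q(\mathbf{x})},$$

i.e., an elementwise mapping (the log-ratio), a product (with $p$), and a sum-aggregation over $\mathbf{x}$.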
Circuits are a unifying representation of probability distributions as a computation graph of sums and products. Here we consider the more general algebraic circuits, where sum/product is replaced with a semiring operation (think e.g. OR and AND for Boolean circuits).
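To make the semiring idea concrete, here is a toy sketch (mine, not code from the paper): the same circuit structure evaluated under three different semirings, just by swapping which operations play the roles of "sum" and "product".

```python
# Toy illustration: one circuit structure, three semirings.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Semiring:
    add: Callable
    mul: Callable

sum_product = Semiring(add=lambda a, b: a + b,  mul=lambda a, b: a * b)
max_product = Semiring(add=max,                 mul=lambda a, b: a * b)
boolean     = Semiring(add=lambda a, b: a or b, mul=lambda a, b: a and b)

def circuit(s, leaves):
    # Fixed structure ((l0 * l1) + (l2 * l3)), interpreted in semiring s.
    return s.add(s.mul(leaves[0], leaves[1]), s.mul(leaves[2], leaves[3]))

print(circuit(sum_product, [0.5, 0.2, 0.5, 0.8]))       # marginal-style sum of products
print(circuit(max_product, [0.5, 0.2, 0.5, 0.8]))       # MAP-style best branch
print(circuit(boolean,     [True, False, True, True]))  # SAT-style OR of ANDs
```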
You have some model/knowledge (e.g. Bayes Net, Probabilistic/Logic Program, DB) and some query (e.g. MAP, Causal Adjustment) you want to ask. When can you compute this efficiently? Find out @ NeurIPS today in Poster Session 6 East, #3801. Paper:
RT @HonghuaZhang2: So excited to present Ctrl-G **Adaptable Logical Control for Large Language Models** TODAY at #NeurIPS2024 West Ballroom…
RT @zhezeng0908: 📢 I’m recruiting PhD students @CS_UVA for Fall 2025! 🎯 Neurosymbolic AI, probabilistic ML, trustworthiness, AI for science…
RT @e_giunchiglia: 🚨 Exciting Opportunity! 🚨 I’m looking for PhD students to join my team @ImperialEEE and @ImperialX_AI! 🌍🔍 Research Top…
Excited to share our work on LLM tokenization, led by the awesome @renatogeh. We find significant boosts in downstream performance by probabilistically interpreting the space of tokenizations of a text. A bit of probabilistic reasoning goes a long way!
Where is the signal in LLM tokenization space? Does it only come from the canonical (default) tokenization? The answer is no! By looking at other ways to tokenize the same text, we get a consistent boost to LLM performance! 1/5
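To illustrate the idea (a toy sketch with a hypothetical vocabulary and scoring function, not the paper's method): the same string typically admits many valid tokenizations, and one can aggregate the model's scores over all of them rather than trusting only the canonical split.

```python
# Toy sketch: enumerate alternative tokenizations of a string and
# marginalize over them. Vocabulary and scores here are hypothetical.
import math

vocab = {"un", "lock", "able", "unlock", "lockable"}

def tokenizations(s):
    # Enumerate every way to split s into vocabulary tokens.
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        if s[:i] in vocab:
            for rest in tokenizations(s[i:]):
                yield [s[:i]] + rest

def log_score(tokens):
    # Stand-in for an LLM's log-probability of a token sequence
    # (hypothetical: here, shorter tokenizations score higher).
    return -float(len(tokens))

toks = list(tokenizations("unlockable"))
# Marginalize: log-sum-exp over all tokenizations, not just the canonical one.
total = math.log(sum(math.exp(log_score(t)) for t in toks))
print(toks)   # [['un','lock','able'], ['un','lockable'], ['unlock','able']]
print(total)
```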
Super cool work on discretizing probability distributions with *exponential* gains in succinctness! Recommended reading for probabilistic inference folks.
Are you looking for an inference algorithm that supports your discrete-continuous probabilistic program? Look no further! We have developed a new probabilistic programming language (PPL) called HyBit that provides scalable support for discrete-continuous probabilistic programs.
RT @ZhijingJin: We will organize a "Causality for LLMs" Tutorial #NeurIPS2024 @NeurIPSConf. Happy to contribute to our community an intro o…