Benjie Wang Profile
Benjie Wang
@benjiewang_cs

Followers: 96 · Following: 185 · Media: 9 · Statuses: 27

Postdoc at UCLA StarAI Lab @UCLAComSci

Joined November 2019
@benjiewang_cs
Benjie Wang
4 months
Also check out the awesome paper "Sum of Squares Circuits" by @loreloc_, Stefan Mengel, and @tetraduzione, which concurrently showed the separation between monotone and squared circuits. It is also at AAAI 2025 today, poster #840!
0 replies · 0 retweets · 2 likes
@benjiewang_cs
Benjie Wang
4 months
Inception PCs subsume monotone and squared PCs, and are strictly more expressive than both. We show this leads to improved downstream modeling performance when normalizing for FLOPs:
[image attached]
1 reply · 0 retweets · 1 like
@benjiewang_cs
Benjie Wang
4 months
To overcome these limitations, we propose Inception PCs, a novel tractable probabilistic model representing a deep *sum-of-square-of-sums*. Inception PCs explicitly introduce two types of latent variables into the circuit for the mixtures encoded at sum nodes.
[image attached]
1 reply · 0 retweets · 1 like
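A schematic reading of the *sum-of-square-of-sums* structure (illustrative notation; the paper's construction is more general), with the two latent variables Z and W:

    p(x) \propto \sum_z \Big( \sum_w c(x, z, w) \Big)^2

The outer sum over z is a monotone mixture of squares, and each square encloses an inner sum over w whose terms may be negative.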
@benjiewang_cs
Benjie Wang
4 months
We show that the reverse also holds (!!): some tractable distributions expressible as monotone circuits cannot be compactly expressed as squared circuits.
[image attached]
1 reply · 0 retweets · 1 like
@benjiewang_cs
Benjie Wang
4 months
On the other hand, squared circuits allow the use of arbitrary real parameters by *squaring* the circuit output. It was previously proven that squared circuits can be exponentially more expressive than monotone circuits!
1 reply · 0 retweets · 1 like
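In symbols, the construction is roughly (a sketch; the precise structural conditions are in the papers): take a circuit c with arbitrary real weights and define

    p(x) = c(x)^2 / Z,  where  Z = \sum_x c(x)^2

Z stays tractable because, for suitably structured c, the squared circuit c^2 can itself be represented as a polynomial-size circuit.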
@benjiewang_cs
Benjie Wang
4 months
Probabilistic circuits are deep *tractable* probabilistic models that allow efficient and exact computation of marginals. Traditionally, monotone circuits enforce non-negativity by using non-negative weights. Paper:
1 reply · 0 retweets · 1 like
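To make "efficient and exact marginals" concrete, here is a toy monotone PC in Python (an illustrative sketch, not code from the paper): a mixture of two factorized components over binary X1, X2, where marginalizing X2 amounts to setting its leaves to 1.

def leaf(x, value):
    # Indicator leaf: 1.0 if x matches `value`; x=None marginalizes the
    # variable out, saturating the leaf to 1.0.
    return 1.0 if x is None or x == value else 0.0

def pc(x1, x2):
    # Mixture of two fully factorized components; all weights are
    # non-negative, so the output is guaranteed non-negative.
    p1 = 0.8 * leaf(x1, 0) + 0.2 * leaf(x1, 1)
    p2 = 0.5 * leaf(x2, 0) + 0.5 * leaf(x2, 1)
    q1 = 0.1 * leaf(x1, 0) + 0.9 * leaf(x1, 1)
    q2 = 0.4 * leaf(x2, 0) + 0.6 * leaf(x2, 1)
    return 0.3 * p1 * p2 + 0.7 * q1 * q2

print(pc(1, None))          # exact marginal p(X1=1) = 0.69, one feedforward pass
print(pc(1, 0) + pc(1, 1))  # 0.69 again, by brute-force enumeration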
@benjiewang_cs
Benjie Wang
4 months
Circuits use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of the output? Check out our poster "On the Relationship between Monotone and Squared Probabilistic Circuits" at AAAI 2025 **today**, 12:30-14:30, poster #841.
[image attached]
1 reply · 1 retweet · 8 likes
@benjiewang_cs
Benjie Wang
5 months
RT @danielmisrael: “That’s one small [MASK] for [MASK], a giant [MASK] for mankind.” – [MASK] Armstrong. Can autoregressive models predict…
0 replies · 8 retweets · 0 likes
@benjiewang_cs
Benjie Wang
7 months
Thanks to my amazing co-authors Denis Mauá, @guyvdb, and YooJung Choi. Hope to see you at the poster session!
0 replies · 0 retweets · 1 like
@benjiewang_cs
Benjie Wang
7 months
Along the way we also show a bunch of other cool results, like:
- More efficient algorithms for causal inference on circuits
- New circuit properties
- Separation/hardness results
[image attached]
1 reply · 0 retweets · 2 likes
@benjiewang_cs
Benjie Wang
7 months
Building upon the prior PC atlas, our algebraic atlas provides a comprehensive approach for deriving **efficient algorithms** and **tractability conditions** for arbitrary compositional queries. Try our atlas the next time you come across a new query!
[image attached]
1 reply · 0 retweets · 3 likes
@benjiewang_cs
Benjie Wang
7 months
Just as circuits serve as a unifying representation of models, we show how you can express many queries as compositions of just a few basic operations: aggregation (marginalization, max, etc.), product, and elementwise mappings.
[image attached]
1 reply · 0 retweets · 2 likes
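As a toy illustration of this compositional structure (dense arrays stand in for circuits here, purely to show the shape of the queries; circuit algorithms apply the same operations to the graphs directly):

import numpy as np

p = np.array([[0.1, 0.3],
              [0.2, 0.4]])    # p(X1, X2); rows index X1
q = np.array([[0.25, 0.25],
              [0.25, 0.25]])  # q(X1, X2), a second model

marginal  = p.sum(axis=1)                        # sum-aggregation over X2
map_value = p.max()                              # max-aggregation (MAP value)
cross_ent = -(p * np.log(q)).sum()               # product + elementwise log + aggregate
kl        = (p * (np.log(p) - np.log(q))).sum()  # all three operations composed
print(marginal, map_value, cross_ent, kl)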
@benjiewang_cs
Benjie Wang
7 months
Circuits are a unifying representation of probability distributions as a computation graph of sums and products. Here we consider the more general algebraic circuits, where sum/product is replaced with a semiring operation (think e.g. OR and AND for Boolean circuits).
[image attached]
1 reply · 0 retweets · 2 likes
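A minimal sketch of the semiring view in Python (illustrative only): the same circuit shape, evaluated under different (plus, times) pairs.

def evaluate(a, b, c, d, plus, times):
    # Fixed circuit: (a ⊗ b) ⊕ (c ⊗ d); only the semiring changes.
    return plus(times(a, b), times(c, d))

# Probability semiring (+, *): marginal-style quantities.
print(evaluate(0.8, 0.5, 0.2, 0.5, lambda x, y: x + y, lambda x, y: x * y))  # 0.5

# Boolean semiring (OR, AND): satisfiability-style queries.
print(evaluate(True, False, True, True, lambda x, y: x or y, lambda x, y: x and y))  # True

# Max-times semiring: most-likely-state (MAP-style) queries.
print(evaluate(0.8, 0.5, 0.2, 0.5, max, lambda x, y: x * y))  # 0.4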
@benjiewang_cs
Benjie Wang
7 months
You have some model/knowledge (e.g. Bayes Net, Probabilistic/Logic Program, DB) and some query (e.g. MAP, Causal Adjustment) you want to ask. When can you compute this efficiently? Find out @ NeurIPS today in Poster Session 6 East, #3801. Paper:
1 reply · 3 retweets · 14 likes
@benjiewang_cs
Benjie Wang
7 months
RT @HonghuaZhang2: So excited to present Ctrl-G **Adaptable Logical Control for Large Language Models** TODAY at #NeurIPS2024 West Ballroom…
0 replies · 2 retweets · 0 likes
@benjiewang_cs
Benjie Wang
7 months
RT @zhezeng0908: 📢 I’m recruiting PhD students @CS_UVA for Fall 2025! 🎯 Neurosymbolic AI, probabilistic ML, trustworthiness, AI for science…
0 replies · 72 retweets · 0 likes
@benjiewang_cs
Benjie Wang
9 months
RT @e_giunchiglia: 🚨 Exciting Opportunity! 🚨 I’m looking for PhD students to join my team @ImperialEEE and @ImperialX_AI! 🌍🔍 Research Top…
0 replies · 36 retweets · 0 likes
@benjiewang_cs
Benjie Wang
10 months
Excited to share our work on LLM tokenization, led by the awesome @renatogeh. We find significant boosts in downstream performance by probabilistically interpreting the space of tokenizations of a text. A bit of probabilistic reasoning goes a long way!
@renatogeh
Renato Lui Geh
10 months
Where is the signal in LLM tokenization space? Does it only come from the canonical (default) tokenization? The answer is no! By looking at other ways to tokenize the same text, we get a consistent boost to LLM performance! 1/5
[images attached]
0 replies · 4 retweets · 8 likes
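The core trick, sketched in Python (hypothetical helpers, not the paper's code: `log_prob` scores a token sequence under the LLM, and `tokenizations` is any set of token sequences that decode to the same text):

import math

def logsumexp(values):
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def score_text(tokenizations, log_prob):
    # p(text) is the sum of p(tokens) over every token sequence that decodes
    # to the text, so aggregating over several tokenizations lower-bounds
    # p(text) more tightly than the canonical tokenization alone.
    return logsumexp([log_prob(tokens) for tokens in tokenizations])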
@benjiewang_cs
Benjie Wang
10 months
Super cool work on discretizing probability distributions with *exponential* gains in succinctness! Recommended reading for probabilistic inference folks.
@PoorvaGarg11
Poorva Garg
10 months
Are you looking for an inference algorithm that supports your discrete-continuous probabilistic program? Look no further! We have developed a new probabilistic programming language (PPL) called HyBit that provides scalable support for discrete-continuous probabilistic programs.
0 replies · 2 retweets · 13 likes
@benjiewang_cs
Benjie Wang
11 months
RT @ZhijingJin: We will organize a "Causality for LLMs" Tutorial #NeurIPS2024 @NeurIPSConf. Happy to contribute to our community an intro o…
0 replies · 55 retweets · 0 likes