Nicola Branchini

@Branchini_Nic

756 Followers · 6K Following · 54 Media · 1K Statuses

🇮🇹 4th yr Stats PhD @EdinUniMaths 🏴󠁧󠁢󠁳󠁣󠁴󠁿 🤔💭 about reliable uncertainty quantification. Interested in sampling and measure transport methodologies.

Edinburgh, UK
Joined May 2015
@Branchini_Nic
Nicola Branchini
2 months
🚨 New paper: “Towards Adaptive Self-Normalized IS”. TL;DR: to estimate µ = E_p[f(θ)] when p(θ) has an intractable partition function, instead of doing MCMC on p(θ) or learning a parametric q(θ), we try MCMC directly on the variance-minimizing proposal p(θ)|f(θ) − µ|.
1
2
15
@Branchini_Nic
Nicola Branchini
3 days
RT @_vaishnavh: Wrote my first blog post! I wanted to share a powerful yet under-recognized way to develop emotional maturity as a research….
0
13
0
@Branchini_Nic
Nicola Branchini
6 days
Obsession with statistics or not, better make sure evaluations make sense and we're not tuning the random seed. You don't need to know all the fanciest statistical tests, but that doesn't seem to be the typical issue (to me) 🤷‍♂️. Heard of some statistical precipices…
0
1
1
@Branchini_Nic
Nicola Branchini
7 days
.@Finnair it seems impossible to contact customer service by chat. I've waited two afternoons already. I don't want to have to change my phone plan just to call from abroad.
2
0
0
@Branchini_Nic
Nicola Branchini
11 days
RT @andrewgwils: You don't _need_ a PhD (or any qualification) to do almost anything. A PhD is a rare opportunity to grow as an independent….
0
103
0
@Branchini_Nic
Nicola Branchini
13 days
RT @FelineAutomaton: A great pleasure to crash two Bayesian statistics conferences with a dose of diffusion wisdom — last week in Singapore….
0
3
0
@Branchini_Nic
Nicola Branchini
24 days
RT @kfountou: That’s from 2018, a provocative title. The “problem” has only gotten worse since then. I tend to agree with his arguments. O….
0
5
0
@Branchini_Nic
Nicola Branchini
1 month
RT @chhaviyadav_: Upon graduation, I paused to reflect on what my PhD had truly taught me. Was it just how to write papers, respond to brut….
0
40
0
@Branchini_Nic
Nicola Branchini
2 months
In my reading experience, TMLR (largely) is NeurIPS/ICML etc without all the BS.
0
0
5
@Branchini_Nic
Nicola Branchini
2 months
RT @Ji_Ha_Kim: I got recommended Terence Tao's YouTube channel created in 2010, where he uploaded his first video just yesterday! He showca….
0
46
0
@Branchini_Nic
Nicola Branchini
2 months
RT @Yoshua_Bengio: Two years ago, I've reoriented my research to try to make AI safe by design. In this @TIME op-ed, I present my team's di….
0
74
0
@Branchini_Nic
Nicola Branchini
2 months
RT @fchollet: BayesFlow 2.0, a Python package for amortized Bayesian inference, is now powered by Keras 3, with support for JAX, PyTorch, a….
0
115
0
@Branchini_Nic
Nicola Branchini
2 months
RT @NandoDF:
0
34
0
@Branchini_Nic
Nicola Branchini
2 months
(Indeed, when f(θ) ≥ 0 almost everywhere, you can view the problem as just estimating a ratio of normalizing constants; quick numeric check below.)
0
0
0
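A quick numeric check of this remark, on a toy example of my own (the Gaussian p̃ and the choice f(θ) = θ² are illustrative assumptions, not from the thread): when f(θ) ≥ 0, µ = E_p[f(θ)] = Z₁/Z₀, where Z₀ normalizes p̃(θ) and Z₁ normalizes p̃(θ)f(θ).

import numpy as np

grid = np.linspace(-10.0, 10.0, 100_001)
dx = grid[1] - grid[0]
p_tilde = np.exp(-0.5 * grid**2)   # unnormalized N(0, 1) target
f = grid**2                        # nonnegative integrand; true E_p[f] = 1

Z0 = np.sum(p_tilde) * dx          # normalizing constant of p̃(θ)
Z1 = np.sum(p_tilde * f) * dx      # normalizing constant of p̃(θ)·f(θ)
print(Z1 / Z0)                     # ≈ 1.0 = E_p[f(θ)]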
@Branchini_Nic
Nicola Branchini
2 months
A natural comparison is with bridge sampling (I promise I will do it in a subsequent version; this was for the workshop :D). To be kept in mind though: (1) bridge sampling has an asymptotic MSE higher than ratio IS (as proved in the paper above), and (2) bridge sampling only works for f(θ) ≥ 0.
1
0
0
@Branchini_Nic
Nicola Branchini
2 months
P.S. for the sampling nerds: the algo itself *can* be seen (although it is just one possible perspective) as an adaptive version of the largely forgotten "ratio IS" ( ) (for which adaptive versions don't exist, AFAIK).
1
0
0
@Branchini_Nic
Nicola Branchini
2 months
In summary, if you wanna estimate expectations with MCMC, I'd say at least try this: it can improve a lot on plain MCMC on p(θ), or even on p(θ)|f(θ)|, with not many changes in the code. I'm quite excited to extend the work substantially in the near future.
1
0
0
@Branchini_Nic
Nicola Branchini
2 months
A 🐔&🥚 problem. But 🐔&🥚 can be okay, really. We show you can just initialise the algo with a preliminary estimate of µ, say µ₀, then run a chain on the *approximation* p(θ)|f(θ) − µ₀|; then, with θₙ ~ p(θ)|f(θ) − µ₀| (approximately), estimate µ again, and keep iterating! (Sketch below.)
1
0
0
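A rough sketch of how such an iteration could look. This is my own toy illustration under stated assumptions, not the paper's code: random-walk Metropolis on the unnormalized p̃(θ)|f(θ) − µₖ|, then self-normalized weights ∝ 1/|f(θₙ) − µₖ| to re-estimate µ. The mh_chain helper, the step size, and the 1e-12 guard are all my choices.

import numpy as np

rng = np.random.default_rng(1)

log_p_tilde = lambda t: -0.5 * t**2   # unnormalized target: N(0, 1)
f = lambda t: t**2                    # integrand; true µ = 1

def mh_chain(log_target, n_steps=20_000, step=1.0, init=0.0):
    # Random-walk Metropolis for an unnormalized log-density.
    theta, lt = init, log_target(init)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lt_prop = log_target(prop)
        if np.log(rng.uniform()) < lt_prop - lt:   # accept/reject
            theta, lt = prop, lt_prop
        samples[i] = theta
    return samples[n_steps // 2:]                  # drop burn-in

mu = 0.5                                           # preliminary estimate µ₀
for k in range(5):
    # Chain targeting the approximation p̃(θ)|f(θ) - µₖ| (log scale).
    log_q = lambda t, m=mu: log_p_tilde(t) + np.log(np.abs(f(t) - m) + 1e-12)
    theta = mh_chain(log_q)
    w = 1.0 / (np.abs(f(theta) - mu) + 1e-12)      # w ∝ p̃/q̃, up to a constant
    mu = np.sum(w * f(theta)) / np.sum(w)          # self-normalized update of µ
    print(f"iteration {k}: mu ≈ {mu:.3f}")         # → approaches the true µ = 1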
@Branchini_Nic
Nicola Branchini
2 months
You don't know its normalizing constant, but that's OK, because you normalize the weights. What's more problematic is that you cannot even *evaluate* it pointwise for any θₙ, because it involves µ, the very thing you are trying to estimate!
1
0
0
@Branchini_Nic
Nicola Branchini
2 months
If you wanna minimize the (asymptotic) Var(∑ₙ w̅ₙ f(θₙ)), the optimal choice of proposal q is not p(θ), not p(θ)|f(θ)|, but p(θ)|f(θ) − µ| (see the quadrature check below). So great, let's do MCMC on that, right?
1
0
0
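For context: the asymptotic variance of the self-normalized estimator under proposal q is ∫ p(θ)²(f(θ) − µ)²/q(θ) dθ, which q*(θ) ∝ p(θ)|f(θ) − µ| minimizes. A quadrature sketch checking the ordering numerically, on a toy example of my own (standard normal p and f(θ) = θ² + 1 are illustrative assumptions):

import numpy as np

grid = np.linspace(-10.0, 10.0, 200_000)   # grid chosen so f(θ) - µ has no exact zeros on it
dx = grid[1] - grid[0]

p = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)   # normalized target: N(0, 1)
f = grid**2 + 1.0                                   # integrand; E_p[f] = 2
mu = np.sum(p * f) * dx                             # true µ by quadrature

def asy_var(q_unnorm):
    # Asymptotic SNIS variance ∫ p²(f - µ)²/q dθ, proposal known up to a constant.
    q = q_unnorm / (np.sum(q_unnorm) * dx)
    return np.sum(p**2 * (f - mu) ** 2 / q) * dx

print(asy_var(p))                     # q = p(θ)
print(asy_var(p * np.abs(f)))         # q ∝ p(θ)|f(θ)|
print(asy_var(p * np.abs(f - mu)))    # q* ∝ p(θ)|f(θ) - µ|: the smallest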
@Branchini_Nic
Nicola Branchini
2 months
More details. You want to estimate expectations E_p[f(θ)]. You probably have samples θₙ ~ q(θ), and if you want consistent estimates, you'll use importance sampling. That involves weights wₙ = p(θₙ)/q(θₙ), normalized to w̅ₙ, and finally you report ∑ₙ w̅ₙ f(θₙ) (minimal sketch below).
1
0
0
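A minimal sketch of this estimator, with a toy unnormalized target, proposal, and integrand of my own choosing (illustrative assumptions, not from the paper):

import numpy as np

rng = np.random.default_rng(0)

def p_tilde(theta):
    # Unnormalized target p̃(θ): a standard normal missing its constant.
    return np.exp(-0.5 * theta**2)

def q_pdf(theta):
    # Proposal density q(θ) = N(0, 2²), deliberately wider than the target.
    return np.exp(-0.5 * (theta / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

f = lambda theta: theta**2                  # integrand; true E_p[f] = 1

theta = rng.normal(0.0, 2.0, size=10_000)   # θₙ ~ q(θ)
w = p_tilde(theta) / q_pdf(theta)           # weights wₙ = p̃(θₙ)/q(θₙ)
w_bar = w / np.sum(w)                       # normalized weights w̅ₙ
print(np.sum(w_bar * f(theta)))             # ∑ₙ w̅ₙ f(θₙ) ≈ 1.0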