Lorenzo Loconte Profile
Lorenzo Loconte

@loreloc_

Followers: 522
Following: 4K
Media: 29
Statuses: 150

PhD Student @ University of Edinburgh

Edinburgh, Scotland
Joined March 2017
@loreloc_
Lorenzo Loconte
9 months
We learn more expressive mixture models that can subtract probability density by squaring them. 🚨We show squaring can reduce expressiveness. To tackle this we build sum of squares circuits🆘. 🚀We explain why complex parameters help, and show an expressiveness hierarchy around🆘
3
32
164
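A minimal sketch of the idea in this thread, assuming two Gaussian components and weights I made up (illustrative only, not the paper's circuits): squaring a linear combination of densities lets a weight go negative, subtracting probability mass, while the squared result stays non-negative and can be renormalized into a valid density.

import numpy as np
from scipy.stats import norm

# Illustrative sketch only, not the paper's implementation:
# a squared mixture p(x) ∝ (w1*f1(x) + w2*f2(x))**2 admits a negative
# weight -- i.e. it subtracts density -- yet never dips below zero.
f1, f2 = norm(-1.0, 1.0), norm(1.0, 1.0)  # two Gaussian components
w1, w2 = 1.0, -0.6                        # hypothetical weights; w2 subtracts

xs = np.linspace(-8.0, 8.0, 4001)
unnorm = (w1 * f1.pdf(xs) + w2 * f2.pdf(xs)) ** 2
Z = unnorm.sum() * (xs[1] - xs[0])        # crude numerical normalizer
p = unnorm / Z                            # a valid (non-negative) density

assert (p >= 0.0).all()                   # guaranteed by squaring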
@loreloc_
Lorenzo Loconte
22 days
RT @ema_marconato: 🧵Why are linear properties so ubiquitous in LLM representations? We explore this question through the lens of identifia…
0
61
0
@loreloc_
Lorenzo Loconte
2 months
RT @EmilevanKrieken: We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, co…
0
105
0
@loreloc_
Lorenzo Loconte
2 months
RT @LennertDS_: Just under 10 days left to submit your latest endeavours in ⚡#tractable⚡ probabilistic models❗ Join us at TPM @auai.org #U…
0
4
0
@loreloc_
Lorenzo Loconte
2 months
RT @diegocalanzone: In LoCo-LMs, we propose a neuro-symbolic loss function to fine-tune a LM to acquire logically consistent knowledge fro…
0
4
0
@loreloc_
Lorenzo Loconte
2 months
RT @jjcmoon: We developed a library to make logical reasoning embarrassingly parallel on the GPU. For those at ICLR 🇸🇬: you can get the…
0
6
0
@loreloc_
Lorenzo Loconte
2 months
RT @e_giunchiglia: 🚨New at #ICLR: we introduce the first ever layer that makes any neural network compliant by design with constraints expr…
0
8
0
@loreloc_
Lorenzo Loconte
4 months
RT @bkailkhu: Our paper "Low-rank finetuning for LLMs is inherently unfair" won a best paper award at the @RealAAAI CoLoRAI workshop! #AAAI…
0
4
0
@loreloc_
Lorenzo Loconte
4 months
You can find the speakers' bios and the abstracts of the presentations here: Check them out!
0
0
1
@loreloc_
Lorenzo Loconte
4 months
The last speaker of the workshop is Alexandros Georgiou, who is giving an introduction to polynomial networks and equivariant tensor network architectures, as well as how to implement them.
1
0
1
@loreloc_
Lorenzo Loconte
4 months
After the lunch break, Andrew G. Wilson (@andrewgwils) is now giving his presentation on the importance of linear algebra structures in ML, and on how to navigate such structures in practice.
1
0
3
@loreloc_
Lorenzo Loconte
4 months
After Nadav, it is now the turn of Guillaume Rabusseau (@grwip), who is joining us online. Guillaume guides us through interesting expressiveness relationships among families of RNNs parameterized via tensor factorization techniques.
1
0
2
@loreloc_
Lorenzo Loconte
4 months
Live from the CoLoRAI workshop at AAAI: @nadavcohen is now giving his talk on "What Makes Data Suitable for Deep Learning?" Tools from quantum physics are shown to be useful in building more expressive deep learning models by changing the data distribution.
1
1
6
@loreloc_
Lorenzo Loconte
4 months
RT @jjcmoon: We all know backpropagation can calculate gradients, but it can do much more than that! Come to my #AAAI2025 oral tomorrow (1…
0
3
0
@loreloc_
Lorenzo Loconte
4 months
RT @benjiewang_cs: Circuits use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of t…
0
1
0
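For contrast with the squaring approach above, here is a minimal sketch of the usual monotone answer to that non-negativity question: keep the sum-unit weights non-negative over proper leaf densities. A toy shallow circuit, with all names and numbers hypothetical:

import numpy as np
from scipy.special import logsumexp

def gauss_logpdf(x, mu, sigma):
    # Log-density of univariate Gaussian leaf units.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def monotone_circuit_logdensity(x, log_w, mus, sigmas):
    # A shallow sum-product circuit: one sum unit over K product units,
    # each the product of D Gaussian leaves (a mixture of factorized densities).
    # Non-negativity holds because weights and leaves are themselves non-negative.
    leaves = gauss_logpdf(x[None, :], mus, sigmas)  # (K, D) leaf log-densities
    products = leaves.sum(axis=1)                   # product units, in log space
    return logsumexp(log_w + products)              # sum unit, in log space

# Tiny usage example: K=2 components over D=3 variables.
rng = np.random.default_rng(0)
mus, sigmas = rng.normal(size=(2, 3)), np.ones((2, 3))
log_w = np.log(np.array([0.3, 0.7]))                # non-negative, normalized weights
print(monotone_circuit_logdensity(np.zeros(3), log_w, mus, sigmas))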
@loreloc_
Lorenzo Loconte
4 months
We are going to present our poster "Sum of Squares Circuits" at AAAI in Philadelphia today, Hall E, 12:30pm-2:00pm, poster #840. We trace expressiveness connections between different types of additive and subtractive deep mixture models and tensor networks. 📜
[Quoted tweet: the sum of squares circuits announcement thread from 9 months ago, shown above.]
0
6
25
@loreloc_
Lorenzo Loconte
4 months
RT @GabVenturato: 🔥 Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic M…
0
4
0
@loreloc_
Lorenzo Loconte
5 months
RT @piermonn: These last three weeks, I had the chance to be back at Università degli Studi di Bari as an invited researcher, to continue w…
0
1
0
@loreloc_
Lorenzo Loconte
5 months
RT @kerstingAIML: 🚀 Meet NeST, the first neuro-symbolic transpiler! It converts SPLL, a novel probabilistic language, into code across #AI…
0
4
0