
Lorenzo Loconte
@loreloc_
Followers: 522 · Following: 4K · Media: 29 · Statuses: 150
PhD Student @ University of Edinburgh
Edinburgh, Scotland
Joined March 2017
RT @ema_marconato: Why are linear properties so ubiquitous in LLM representations? We explore this question through the lens of identifia…
RT @EmilevanKrieken: We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, co…
RT @LennertDS_: Just under 10 days left to submit your latest endeavours in #tractable probabilistic models! Join us at TPM @auai.org #U…
RT @diegocalanzone: In LoCo-LMs, we propose a neuro-symbolic loss function to fine-tune a LM to acquire logically consistent knowledge fro…
RT @e_giunchiglia: New at #ICLR: we introduce the first ever layer that makes any neural network compliant by design with constraints expr…
RT @PMinervini: @chaitjo @benfinkelshtein @ffabffrasca @mmbronstein @phanein @michael_galkin @chrsmrrs hi, we found problematic benchmarks…
After the lunch break, Andrew G. Wilson (@andrewgwils) is now giving his presentation on the importance of linear-algebraic structure in ML, and on how to navigate such structures in practice.
Live from the CoLoRAI workshop at AAAI: @nadavcohen is now giving his talk on "What Makes Data Suitable for Deep Learning?". Tools from quantum physics are shown to be useful in building more expressive deep learning models by changing the data distribution.
RT @benjiewang_cs: Circuits use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of t…
We are going to present our poster "Sum of Squares Circuits" at AAAI in Philadelphia today, Hall E, 12:30pm-2:00pm, poster #840. We trace expressiveness connections between different types of additive and subtractive deep mixture models and tensor networks.
We learn more expressive mixture models that can subtract probability density by squaring them. We show squaring can reduce expressiveness; to tackle this, we build sum of squares circuits. We explain why complex parameters help, and show an expressiveness hierarchy around…
RT @GabVenturato: Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic M…
RT @kerstingAIML: Meet NeST, the first neuro-symbolic transpiler! It converts SPLL, a novel probabilistic language, into code across #AI…