Andrea Boscutti Profile
Andrea Boscutti

@ABoscutti

Followers
66
Following
2K
Media
2
Statuses
48

Joined May 2019
@PTenigma
Paul Thompson
8 days
@FitFounder Sorry, I found the original quote, which I had lost: “No one imagines that a symphony is supposed to improve in quality as it goes along, or that the whole object of playing it is to reach the finale. The point of music is discovered in every moment of playing and listening to it.”
0
2
4
@ItaiYanai
Itai Yanai
4 months
Science doesn’t need to go according to plan; it just needs to lead to a discovery. It doesn’t have to be done alone or together with a buddy; there just needs to be a discovery. It doesn’t need to happen fast or slow; just as long as there’s a discovery, then everybody is happy.
15
263
1K
@videodrome
Robbie Barrat
7 years
I'm laughing so hard at this slide a friend sent me from one of Geoff Hinton's courses; "To deal with hyper-planes in a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it."
21
675
2K
@kyleichan
Kyle Chan
8 months
All Americans should think about this chart
475
1K
11K
@t_andy_keller
Andy Keller
9 months
In the physical world, almost all information is transmitted through traveling waves -- why should it be any different in your neural network? Super excited to share recent work with the brilliant @mozesjacobs: "Traveling Waves Integrate Spatial Information Through Time" 1/14
145
911
7K
@getjonwithit
Jonathan Gorard
1 year
Moths are attracted to lights because of the same mathematics that underlies twistor theory and compactification in theoretical physics: projective geometry. It all starts from a simple observation: translations are just rotations whose center is located "at infinity". (1/11)
78
677
5K
@_TheTransmitter
The Transmitter
1 year
With neuroscience datasets and scientific collaborations growing in size, Gaelle Chapuis and Olivier Winter explain why neuroscience needs to create a career path for software engineers. https://t.co/y8VpwO5R4p
thetransmitter.org
Few institutions have mechanisms for the type of long-term positions that would best benefit the science.
2
61
180
@divyansha1115
Divyansha
1 year
Excited to share our Graph Foundation Model, 🌐 GraphFM, trained on 152 datasets with over 7.4 million nodes and 189 million edges spanning diverse domains. 🚨 Check out our preprint for GraphFM where we test how our model scales with data and model size, and show efficient
13
114
528
@jaysonjeg
Jayson Jeganathan
1 year
Do you use surface fMRI? We found spurious correlations in surface fMRI, with potentially serious implications for test-retest reliability, fingerprinting, functional parcellations and brain-behaviour associations (1/n) https://t.co/Sv98S53P7x
3
79
166
@fleetwood___
Fleetwood
1 year
Best tiled matmul animation I've found on the internet. Thanks @wentasah
12
203
2K
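The animation itself isn't reproduced here, but the idea it illustrates can be shown in a minimal sketch: split the matrices into small tiles and accumulate one tile-sized block of work at a time, so each block fits in fast memory. The tile size and matrices below are illustrative assumptions, not anything from the linked animation.

```python
def tiled_matmul(A, B, tile=2):
    """Blocked matrix multiply: C = A @ B, computed tile by tile."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    # Loop over tile origins; the inner three loops touch only one
    # tile of A, B, and C at a time (the part that would sit in cache).
    for i0 in range(0, n, tile):
        for j0 in range(0, p, tile):
            for k0 in range(0, m, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, p)):
                        for k in range(k0, min(k0 + tile, m)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(tiled_matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The result is identical to a plain triple loop; only the iteration order changes, which is exactly what the animation visualizes.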
@karpathy
Andrej Karpathy
2 years
The killer app of LLMs is Scarlett Johansson. You all thought it was math or something
314
964
11K
@ABoscutti
Andrea Boscutti
2 years
AlphaFold 3 predicts the structure and interactions of all of life’s molecules @google
0
0
0
@danintheory
Dan Roberts
2 years
Do LLMs really need to be so L? That's a rejected title for a new paper w/ @Andr3yGR, @kushal_tirumala, @Hasan_Shap, @PaoloGlorioso1 on pruning open-weight LLMs: we can remove up to *half* the layers of Llama-2 70B w/ essentially no impact on performance on QA benchmarks. 1/
16
57
345
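A toy variant of the idea in that paper (not the authors' code, and a simplified scoring rule): treat the model as a stack of residual layers, score each contiguous block by how little removing it changes the output on a probe input, and drop the least important block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "model": a stack of small residual layers h -> h + tanh(h @ W).
Ws = [rng.normal(scale=0.05, size=(8, 8)) for _ in range(10)]

def run(h, layers):
    for W in layers:
        h = h + np.tanh(h @ W)  # small residual update per layer
    return h

def prune_block(layers, n_drop, probe):
    """Drop the contiguous block of n_drop layers whose removal
    moves the final output the least (cosine distance on a probe)."""
    def cos_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    full = run(probe, layers)
    best_start, best_d = 0, np.inf
    for s in range(len(layers) - n_drop + 1):
        pruned = layers[:s] + layers[s + n_drop:]
        d = cos_dist(full, run(probe, pruned))
        if d < best_d:
            best_start, best_d = s, d
    return layers[:best_start] + layers[best_start + n_drop:]

probe = rng.normal(size=8)
pruned = prune_block(Ws, 5, probe)
print(len(pruned))  # 5 layers remain after dropping half the stack
```

Residual connections are what make this plausible at all: deleting a block leaves the identity path intact, so the output degrades gracefully rather than collapsing.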
@cognition
Cognition
2 years
Today we're excited to introduce Devin, the first AI software engineer. Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork. Devin is
4K
10K
43K
@ylecun
Yann LeCun
2 years
* Language is low bandwidth: less than 12 bytes/second. A person can read 270 words/minutes, or 4.5 words/second, which is 12 bytes/s (assuming 2 bytes per token and 0.75 words per token). A modern LLM is typically trained with 1x10^13 two-byte tokens, which is 2x10^13 bytes.
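The arithmetic in that tweet checks out under its stated assumptions (2 bytes per token, 0.75 words per token):

```python
# Reproducing the tweet's bandwidth arithmetic under its own assumptions.
words_per_min = 270
words_per_sec = words_per_min / 60            # 4.5 words/s
tokens_per_sec = words_per_sec / 0.75         # 6 tokens/s (0.75 words/token)
bytes_per_sec = tokens_per_sec * 2            # 12 bytes/s (2 bytes/token)
print(bytes_per_sec)                          # 12.0

training_tokens = 1e13
training_bytes = training_tokens * 2          # 2e13 bytes of training data
print(training_bytes)
```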
@prmshra
Parmita Mishra
2 years
This is an essential point people seem to misrepresent.
559
2K
9K
@paulg
Paul Graham
2 years
I just moved the ChatGPT tab over to the left end of my main browser window, where I keep the tabs of things I use all the time, like GMail and Google Calendar.
378
91
4K
@ylecun
Yann LeCun
2 years
@Ciqax Convolution is equivariant to translations. Self-attention is equivariant to permutations. They both have a role to play. Conv is efficient for signals with strong local correlations and motifs that can appear anywhere. SelfAtt is good for "object-based" representations where
10
31
242
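Both equivariances in that reply can be checked numerically in a few lines. This is a minimal sketch with toy, unparameterized ops (circular 1-D convolution and plain dot-product self-attention), not anything from the thread:

```python
import numpy as np

def conv1d(x, k):
    # Circular 1-D convolution, so shifting the input shifts the output exactly.
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def self_attention(X):
    # Unparameterized dot-product self-attention with row-wise softmax.
    scores = X @ X.T
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

# Translation equivariance: shift-then-convolve == convolve-then-shift.
x = np.arange(8.0)
k = np.array([1.0, -2.0, 1.0])
assert np.allclose(conv1d(np.roll(x, 3), k), np.roll(conv1d(x, k), 3))

# Permutation equivariance: permute-then-attend == attend-then-permute.
X = np.random.default_rng(0).normal(size=(5, 4))
P = np.eye(5)[[2, 0, 4, 1, 3]]  # permutation matrix
assert np.allclose(self_attention(P @ X), P @ self_attention(X))

print("both equivariances hold")
```

Note the convolution here is circular purely so the shift is exact; with zero padding, equivariance holds only away from the boundaries.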
@karpathy
Andrej Karpathy
2 years
# on shortification of "learning" There are a lot of videos on YouTube/TikTok etc. that give the appearance of education, but if you look closely they are really just entertainment. This is very convenient for everyone involved: the people watching enjoy thinking they are
689
3K
17K
@ylecun
Yann LeCun
2 years
Meta has always tried to do the Right Thing. Meta has always practiced open research in AI. Meta has been promoting open source AI platforms. After numerous discussions over the last year (sometimes contentious) a consensus is emerging that open source AI platforms are
@toouufii
Toufi Saliba 🌐
2 years
@ylecun @wef @AndrewYNg @DaphneKoller @aidangomez My respect for Meta went a lot higher, especially after watching @ylecun live in Davos at this panel chaired by Max Tegmark, with fabulous panelists including Stuart Russell
183
133
2K