martin palazzo

@boardsofdata

Followers: 1K · Following: 19K · Media: 585 · Statuses: 6K

science & startup worker. AI, biotech, manufacturing. @stammbio @udesa

high dimensional space
Joined November 2011
@boardsofdata
martin palazzo
10 months
Molecular biology 🧬 wet-lab iteration for cell therapies demands intense R&D resources. By leveraging AI, computational biology algorithms, and multi-omics assay data, @StammBio presents MoNA: a cell representation atlas designed to accelerate bio-innovation cycles. Take a look👇
@StammBio
Stämm
10 months
1/ 🚨 Uncertainty in cell & gene therapy development? Enter MoNA: our Multi-Omic Network Atlas technology!
0 replies · 3 reposts · 9 likes
@boardsofdata
martin palazzo
5 days
Is this a circular RNA reference? 😅
@zzznah
Alex Mordvintsev
5 days
wait for it
0 replies · 0 reposts · 0 likes
@jboysen0
Jacob Boysen
9 days
AI ppl: we basically solved bio with the thinking sands
Bio ppl: after decades of lab toil, we solved how to stably express GFP in stem cells
@GeneticsMike7
Mike Gallagher
10 days
1/ Happy to share important work done with my co-author Andrew Khalil in the labs of Rudolf Jaenisch @WhiteheadInst @MITBiology @MIT and Dave Mooney @Harvard @wyssinstitute, trying to assess and fix the major problem of transgene silencing in human ESC/iPSC-based work
3 replies · 15 reposts · 267 likes
@ron_alfa
Ron Alfa
18 days
We’re moving biology from the wet lab to the GPU cluster.
36 replies · 29 reposts · 363 likes
@konstmish
Konstantin Mishchenko
22 days
I find it fascinating that momentum in standard convex optimization is just about making convergence faster, but in nonconvex problems, it's sometimes the only way a method can work at all. Just saw a new example of this phenomenon in the case of difference-of-convex functions.
3 replies · 15 reposts · 133 likes
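A minimal sketch of that phenomenon in plain Python (the objective, step size, and momentum coefficient are illustrative choices, not from the tweet or the paper it mentions): on a toy difference-of-convex function, plain gradient descent settles in the nearest shallow minimum, while heavy-ball momentum can carry the iterate past it.

```python
def heavy_ball(grad, x0, lr=0.01, beta=0.9, steps=2000):
    """Gradient descent with heavy-ball momentum on a 1-D function."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # velocity accumulates past gradients
        x = x + v
    return x

# Toy difference-of-convex objective: f(x) = (x^4 + x) - 3x^2,
# i.e. convex (x^4 + x) minus convex (3x^2). Gradient: 4x^3 - 6x + 1.
grad = lambda x: 4 * x**3 - 6 * x + 1

print("heavy ball:", heavy_ball(grad, x0=2.0))            # may escape the shallow minimum
print("plain GD:  ", heavy_ball(grad, x0=2.0, beta=0.0))  # stops at the nearest one
```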
@probnstat
Probability and Statistics
26 days
Graphons are mathematical objects that model the structure of massive networks. In machine learning, they provide a powerful framework for analyzing and generating large graphs. They are used to estimate the underlying structure of a network, predict missing links, and understand
6 replies · 87 reposts · 574 likes
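Graphon sampling follows a standard two-step recipe that is easy to sketch: draw a latent position u_i uniformly on [0, 1] for each node, then connect nodes i and j independently with probability W(u_i, u_j). A hedged NumPy sketch (the example graphon and graph size are arbitrary illustrations, not from the thread):

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng=None):
    """Sample an n-node simple graph from a graphon W: [0,1]^2 -> [0,1]."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=n)                # latent position per node
    P = W(u[:, None], u[None, :])          # edge-probability matrix
    A = rng.uniform(size=(n, n)) < P       # independent Bernoulli edges
    A = np.triu(A, k=1)                    # drop self-loops and lower triangle
    return (A | A.T).astype(int)           # symmetrize into an undirected graph

# Example graphon: nodes with small latent positions connect more densely.
A = sample_graph_from_graphon(lambda x, y: np.exp(-(x + y)), n=200)
print("edges:", A.sum() // 2)
```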
@kellerjordan0
Keller Jordan
26 days
Theorem: The maximum possible duration of the computational singularity is 470 years. Proof: The FLOPs capacity of all computers which existed in the year 1986 is estimated to be at most 4.5e14 (Hilbert et al. 2011). Based on public Nvidia revenue and GPU specs, this capacity
64 replies · 51 reposts · 622 likes
@mweber_PU
Melanie Weber
26 days
How does neural feature geometry evolve during training? Modeling feature spaces as geometric graphs, we show that nonlinear activations drive transformations resembling discrete Ricci flow - revealing how class structure emerges and suggesting geometry-informed training
0 replies · 37 reposts · 286 likes
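The paper's exact construction isn't quoted here, but the idea of "geometry on a feature graph" can be illustrated with a hedged stand-in: build a k-NN graph over feature vectors and score each edge with the simplified combinatorial Forman-Ricci curvature F(u, v) = 4 − deg(u) − deg(v). This is only a cheap proxy for the discrete-Ricci-flow machinery the tweet describes, and the random features and k below are placeholders for real layer activations:

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

def forman_curvature(G):
    """Simplified Forman-Ricci curvature of each edge in an unweighted graph."""
    return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges}

# Stand-in for a layer's activations: random 16-d feature vectors.
X = np.random.default_rng(0).normal(size=(100, 16))
G = nx.from_scipy_sparse_array(kneighbors_graph(X, n_neighbors=5))
curv = forman_curvature(G)
print("mean edge curvature:", np.mean(list(curv.values())))
```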
@QuantaMagazine
Quanta Magazine
27 days
The simplex method is an algorithm that turns an optimization problem, like setting up an investment portfolio, into a geometry problem. Recently, the scientists Sophie Huiberts and Eleon Bach reduced the runtime of the simplex method. https://t.co/52maQuGbA4
11 replies · 143 reposts · 1K likes
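As a reminder of the kind of problem the simplex method solves (this is a toy LP solved with SciPy, not the new algorithm from the article), here is a two-asset portfolio set up exactly as the tweet suggests: a linear objective optimized over a polytope of feasible allocations.

```python
from scipy.optimize import linprog

# Maximize expected return 0.08*x1 + 0.12*x2 subject to a budget and a
# risk cap. linprog minimizes, so we negate the objective.
res = linprog(
    c=[-0.08, -0.12],
    A_ub=[[0.05, 0.15]],   # per-asset risk exposure
    b_ub=[0.10],           # total risk budget
    A_eq=[[1.0, 1.0]],     # invest the whole budget
    b_eq=[1.0],
    bounds=[(0, 1), (0, 1)],
    method="highs",        # HiGHS backend, which includes a dual-simplex solver
)
print("allocation:", res.x, "expected return:", -res.fun)
```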
@ProfFeynman
Prof. Feynman
1 month
Do more math.
95 replies · 433 reposts · 3K likes
@druv_pai
Druv Pai
1 month
🚨 We wrote a new AI textbook "Learning Deep Representations of Data Distributions"! TL;DR: We develop principles for representation learning in large scale deep neural networks, show that they underpin existing methods, and build new principled methods.
5 replies · 44 reposts · 173 likes
@NonEuclideanDr1
Non-Euclidean Dreamer
1 month
Two sources, two frequencies #mathart
6 replies · 37 reposts · 234 likes
@VraserX
VraserX e/acc
1 month
A 7 million parameter model from Samsung just outperformed DeepSeek-R1, Gemini 2.5 Pro, and o3-mini on reasoning benchmarks like ARC-AGI. Let that sink in. It’s 10,000x smaller yet smarter. The secret is recursion. Instead of brute-forcing answers like giant LLMs, it drafts a
57 replies · 277 reposts · 2K likes
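The thread is cut off, but the "recursion" it gestures at — a small network repeatedly refining its own draft answer rather than producing it in one giant forward pass — can be sketched in a few lines of PyTorch. This is an illustrative toy under my own assumptions (dimensions, step count, architecture), not Samsung's actual model:

```python
import torch
import torch.nn as nn

class TinyRecursiveNet(nn.Module):
    """Toy draft-then-refine model: one small block applied repeatedly."""
    def __init__(self, dim=64, n_steps=8):
        super().__init__()
        self.step = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))
        self.n_steps = n_steps

    def forward(self, x):
        y = torch.zeros_like(x)                       # initial draft answer
        for _ in range(self.n_steps):                 # recursive refinement
            y = y + self.step(torch.cat([x, y], dim=-1))
        return y

model = TinyRecursiveNet()
print(sum(p.numel() for p in model.parameters()), "parameters")
print(model(torch.randn(1, 64)).shape)
```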
@s_scardapane
Simone Scardapane
2 months
*The Origins of Representation Manifolds in LLMs* by Modell et al. They study the presence of "interpretable features" in LLMs embedded as manifolds and how their geometry connects to the internal representations of the models. https://t.co/htskNJCllc
15 replies · 158 reposts · 1K likes
@askalphaxiv
alphaXiv
2 months
This new research showed empirically that KL divergence is the most accurate predictor of catastrophic forgetting in LLMs! And since forgetting tracks KL drift from the base model, methods that keep KL small tend to forget less too.
7 replies · 53 reposts · 518 likes
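A hedged sketch of the measurement this implies: run the base and fine-tuned checkpoints on the same evaluation tokens and average the per-position KL between their next-token distributions. The KL direction and the random logits below are my own illustrative assumptions, not necessarily the paper's protocol:

```python
import torch
import torch.nn.functional as F

def mean_kl_to_base(base_logits, tuned_logits):
    """KL(base || tuned), averaged over positions.
    Both inputs: (seq_len, vocab_size) logits on the same token sequence."""
    log_p = F.log_softmax(base_logits, dim=-1)
    log_q = F.log_softmax(tuned_logits, dim=-1)
    return (log_p.exp() * (log_p - log_q)).sum(-1).mean()

# Stand-in logits; in practice these come from the two model checkpoints.
base = torch.randn(128, 50257)
tuned = base + 0.1 * torch.randn_like(base)
print("mean KL drift:", float(mean_kl_to_base(base, tuned)))
```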
@repligate
j⧉nus
2 months
HOW INFORMATION FLOWS THROUGH TRANSFORMERS

Because I've looked at those "transformers explained" pages and they really suck at explaining.

There are two distinct information highways in the transformer architecture:
- The residual stream (black arrows): flows vertically through
@repligate
j⧉nus
2 months
KV caching overcomes statelessness in a very meaningful sense and provides a very nice mechanism for introspection (specifically of computations at earlier token positions): the Value representations can encode information from residual streams of past positions without
66 replies · 342 reposts · 3K likes
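A minimal single-head sketch of that mechanism: at each new position only the current token's query, key, and value are computed, while keys and values from earlier positions are read back from the cache instead of being recomputed (dimensions and random weights are placeholders):

```python
import torch

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    w = torch.softmax(q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    return w @ V

d = 16
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
K_cache, V_cache = [], []

for t in range(5):                     # decode one token per step
    x = torch.randn(d)                 # residual-stream state at position t
    K_cache.append(x @ Wk)             # computed once per position...
    V_cache.append(x @ Wv)             # ...then reused on every later step
    out = attend(x @ Wq, torch.stack(K_cache), torch.stack(V_cache))
print(out.shape)  # torch.Size([16])
```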
@ch4rleston
Carlos Sarraute ⚡️
2 months
💡 I loved the poster sessions at the AI and Applications Scientific Symposium #SCIAA2025! There were many interesting works. In particular, I want to highlight two projects that caught my attention: 🔹 Impact of Prompt Engineering on LLM performance: a study
2 replies · 4 reposts · 11 likes
@boardsofdata
martin palazzo
2 months
What a legend
@QuantaMagazine
Quanta Magazine
2 months
At 26, during the Reign of Terror in France, Jean-Baptiste Joseph Fourier narrowly avoided the guillotine. A decade later, he made a discovery that changed mathematics forever. @shalmawegs reports:
0 replies · 0 reposts · 1 like
@DouglasYaoDY
Douglas Yao
2 months
The most complex biological system that we meaningfully understand is a single virion. Everything more complex than that (including bacteria, single human cells, animals, and of course, humans) is totally beyond our comprehension.
19 replies · 33 reposts · 204 likes
@boardsofdata
martin palazzo
3 months
The day has come to decide between:
- Bosch
- Makita
- DeWalt
- Black & Decker
1 reply · 0 reposts · 1 like
@fchollet
François Chollet
3 months
Saying that deep learning is "just a bunch of matrix multiplications" is about as informative as saying that computers are "just a bunch of transistors" or that a library is "just a lot of paper and ink." It's true, but the encoding substrate is the least important part here.
131 replies · 295 reposts · 3K likes