fabianbaumann (@ftw_baumann)

Followers: 12 · Following: 7 · Media: 0 · Statuses: 28
Joined January 2020
@Scott_E_Page
Scott Page
1 year
University of Michigan has an open assistant professor position in Complex Systems. Amazing opportunity to join an interdisciplinary community of faculty and students. Looking for disciplinary thinkers who transcend disciplines.
3
64
164
@MohammadAtari90
Mohammad Atari
1 year
🔔 New paper out in @PNASNews 🔔 “Large Language Models based on historical text could offer informative tools for behavioral science” W/ Michael Varnum, Nicolas Baumard, & @kurtjgray
5
84
314
@criedl
Christoph Riedl
1 year
Large study shows humans can learn from AI feedback, but access to AI also amplifies existing inequalities by increasing the skill gap and reduces intellectual diversity: everyone learns to specialize in the same areas https://t.co/igSIj33n8r
1
25
72
@karpathy
Andrej Karpathy
1 year
Haha we've all been there. I stumbled on this tweet earlier today and tried to write a little utility that auto-generates a git commit message based on the git diff of staged changes. Gist: https://t.co/CNqxyUXcqM So just typing `gcm` (short for git commit -m) auto-generates a
gist.github.com
Git Commit Message AI. GitHub Gist: instantly share code, notes, and snippets.
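For context, a minimal sketch of the same idea (not the code from Karpathy's gist): read the staged diff, turn it into a commit message, and commit. The generate_message helper is hypothetical; in the real utility that is where the LLM call would go.

    import subprocess

    def staged_diff() -> str:
        # Diff of the changes currently staged for commit.
        result = subprocess.run(
            ["git", "diff", "--cached"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def generate_message(diff: str) -> str:
        # Placeholder for the LLM call: the real utility would send the diff
        # to a model with a prompt like "write a concise one-line commit
        # message for this diff" and return the reply. The heuristic below
        # just names the touched files so the sketch runs without any model.
        files = [line[len("+++ b/"):] for line in diff.splitlines()
                 if line.startswith("+++ b/")]
        return ("Update " + ", ".join(files)) if files else "Update staged changes"

    def gcm() -> None:
        diff = staged_diff()
        if not diff.strip():
            print("Nothing staged; run `git add` first.")
            return
        subprocess.run(["git", "commit", "-m", generate_message(diff)], check=True)

    if __name__ == "__main__":
        gcm()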
@IsMrRobot
Mr. Robot
1 year
The hardest decision
187
339
5K
@ricard_sole
Ricard Solé
1 year
Our project for experimentally testing planetary regulation (Lovelock & Margulis) in the test tube, using a synthetic microbial Gaian system (+1 PhD), has been funded by @AgEInves. A great adventure that started at @sfiscience 2 yrs ago with @VictorVmaull & @jordiplam
4
38
190
@eplantec
erwan plantec
1 year
Super excited to share our new work (and the first of my PhD): "Evolving Self-Assembling Neural Networks: From Spontaneous Activity to Experience-Dependent Learning" We propose Lifelong Neural Developmental Programs for continually self-organizing artificial neural networks!
9
69
490
@alainbarrat
Alain Barrat
1 year
Deadline shifted to June 15!
@alainbarrat
Alain Barrat
2 years
Postdoc position in Marseilles in an interdisciplinary project with @BrovelliAndrea https://t.co/bncT5GZYgd Deadline for applications: June 5, 2024
0
8
10
@ThosVarley
Thomas F. Varley
1 year
I've gotten quite a few cold emails from students who found this long review I wrote useful and I'd like to do...something with it. It's way too long to submit as a stand-alone paper to, say, Entropy, though. Does anyone publish big tutorial reviews like this? https://t.co/9TKcNQ3Qmm
arxiv.org
In the 21st century, many of the crucial scientific and technical issues facing humanity can be understood as problems associated with understanding, modelling, and ultimately controlling complex...
12
78
430
@tobigerstenberg
Tobias Gerstenberg
1 year
Now out in Trends in Cognitive Sciences!
@tobigerstenberg
Tobias Gerstenberg
2 years
🚨 New preprint 🚨 In "Beyond the here and now: Counterfactual simulation in causal cognition", I discuss what role counterfactual simulation plays for how people judge causation and assign responsibility. 📰 https://t.co/1i9vkqct5I
2
60
281
@karpathy
Andrej Karpathy
1 year
Anyone else find themselves estimating the "GPT grade" of things you hear/read? When something is poorly written or generic, it's "GPT-2 grade" content. When something is lit, you can compliment it as being "GPT-7 grade" etc. This reminds me of a fun side project I had saved for
68
75
1K
@theryanliu
Ryan Liu
2 years
Honesty and helpfulness are two central goals of LLMs. But what happens when they are in conflict with one another? 😳 We investigate trade-offs LLMs make, which values they prioritize, and how RLHF and Chain-of-Thought influence these trade-offs: https://t.co/wXJ9hFs3Vc [1/3]
1
13
61
@JurgSpaak
Jurg Spaak
2 years
In our new paper in @Ecology_Letters we apply modern coexistence theory to higher order species interactions, where we compute niche and fitness differences as they change over time. Thanks to Agnieszka Majer, Anna Skoracha (not on twitter) and @L_Kuczynski https://t.co/kAs6ygg29b 1/7
1
9
26
@ylelkes
Yphtach Lelkes
2 years
the divide on perceived ideology of Target but not Walmart is pretty striking
@PRL_Tweets
Polarization Research Lab
2 years
Our latest Path to 2024 report examines attitudes toward corporate political activism. Only 27.8% of Americans support corporations taking stances on social issues, with more Dems (39%) expressing support than Reps (23.3%). Read the full report: https://t.co/MlJd0AnUXF
4
7
25
@ricard_sole
Ricard Solé
2 years
How can we model the collective behavior of complex systems with many dimensions? Is it possible to find a model reduction that captures the key components? Check this great paper in @PhysRevLett & how to use it in many different contexts (networks+Ising) https://t.co/hhUasBIuXu
1
33
147
@icouzin
Iain Couzin
2 years
It’s amazing how many empirical features of interactions and collective response can emerge from a simple imperative: to minimize surprise. Out today in PNAS @PNASNews - with @conorheins @richardpmann Karl Friston and colleagues! @CBehav @maxplanckpress https://t.co/gZcyDuG285
7
111
408
@karpathy
Andrej Karpathy
2 years
Highly amusing update, ~18 hours later: llm.c is now down to 26.2ms/iteration, exactly matching PyTorch (tf32 forward pass). We discovered a bug where we incorrectly called cuBLAS in FP32 math mode 🤦‍♂️. And ademeure contributed a more optimized softmax kernel for very long rows
@karpathy
Andrej Karpathy
2 years
A few new CUDA hacker friends joined the effort and now llm.c is only 2X slower than PyTorch (fp32, forward pass) compared to 4 days ago, when it was at 4.2X slower 📈 The biggest improvements were: - turn on TF32 (NVIDIA TensorFloat-32) instead of FP32 for matmuls. This is a
157
543
6K
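For reference, the PyTorch-side equivalent of that TF32 switch looks roughly like this (a sketch, not code from llm.c or the tweet): TF32 changes how float32 matmuls execute on Ampere-and-newer GPUs, trading a little mantissa precision for tensor-core throughput.

    import torch

    # Allow TF32 tensor-core math for float32 matmuls and cuDNN convolutions
    # (Ampere and newer GPUs); values keep float32 storage but lose some
    # mantissa precision in exchange for large speedups.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    # Equivalent newer knob for matmuls: "high" enables TF32, "highest" keeps full FP32.
    torch.set_float32_matmul_precision("high")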
@karpathy
Andrej Karpathy
2 years
THE REVENGE OF PYTORCH just kidding :) @cHHillee (from PyTorch team) was kindly able to help improve the PyTorch baseline, done by 1) upgrading to nightly, 2) using the "compound" F.sdpa (scaled dot product attention) layer directly, and 3) turning on a torch compile flag:
29
46
1K
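The two code-level changes mentioned there would look roughly like this in PyTorch (a sketch assuming a standard causal-attention module, not the actual baseline code):

    import torch
    import torch.nn.functional as F

    def attention(q, k, v):
        # Fused scaled dot-product attention: one call replaces the explicit
        # softmax(q @ k.transpose(-2, -1) / sqrt(d)) @ v and can dispatch to
        # FlashAttention-style kernels when available.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

    # model = GPT2(...)              # hypothetical module holding the baseline
    # model = torch.compile(model)   # JIT-compile and fuse the forward pass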
@karpathy
Andrej Karpathy
2 years
Have you ever wanted to train LLMs in pure C without 245MB of PyTorch and 107MB of cPython? No? Well now you can! With llm.c: https://t.co/w2wkY0Ho5m To start, it implements GPT-2 training on CPU/fp32 in only ~1,000 lines of clean code. It compiles and runs instantly, and exactly
github.com
LLM training in simple, raw C/CUDA. Contribute to karpathy/llm.c development by creating an account on GitHub.
291
2K
13K
@emollick
Ethan Mollick
2 years
Interesting paper argues that scale is all you need to explain humanity’s unique levels of intelligence. “We propose that global, genetic differences in learning and memory are sufficient to account for uniquely human reasoning across domains,” https://t.co/Kiq88lKLDb
16
97
446