Zak Hussain

@ZakASHussain

Followers: 96 · Following: 78 · Media: 2 · Statuses: 35

Computational Cognitive Scientist, Center for Cognitive and Decision Sciences, University of Basel.

Basel, Switzerland
Joined April 2020
@dirkuwulff
Dirk Wulff
4 months
🚨 New publication 🚨 Excited to see this published in Findings of #ACL2025, led by @ZakASHussain. We critically evaluate claims that #LLMs are "just next token predictors" or "just machines" and call for a more measured discussion on LLM cognition. https://t.co/4GvArPZjlo
2
1
12
@marcel_binz
Marcel Binz
5 months
Excited to see our Centaur project out in @Nature. TL;DR: Centaur is a computational model that predicts and simulates human behavior for any experiment described in natural language.
4
57
182
@dirkuwulff
Dirk Wulff
5 months
Happy to have concluded another iteration of our 5-day open LLMs course, together with @ZakASHussain. If you are interested in LLMs for behavioral and social sciences, check out our... Tutorial: https://t.co/DDQ2ydc5P0 Open materials: https://t.co/lH6qIQwdHI
1
13
36
@dirkuwulff
Dirk Wulff
8 months
How can we reduce conceptual clutter in the psychological sciences? @rui__mata and I propose a solution based on a fine-tuned 🤖 LLM ( https://t.co/i3rbR2m7Ce) and test it for 🎭 personality psychology. The paper is finally out in @NatureHumBehav: https://t.co/lvesG6v7x9
0
17
42
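For readers curious what an LLM-based approach to conceptual clutter can look like in practice, here is a minimal illustrative sketch, not the published pipeline: embed construct labels with an off-the-shelf sentence-embedding model and flag highly similar pairs as candidates for consolidation. The model id, constructs, and threshold below are assumptions for demonstration only.

```python
# Illustrative sketch (not the published method): flag potentially
# redundant psychological constructs via embedding similarity.
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed model

constructs = ["grit", "perseverance", "conscientiousness", "openness to experience"]
embeddings = model.encode(constructs)

for (i, a), (j, b) in combinations(enumerate(constructs), 2):
    sim = float(cos_sim(embeddings[i], embeddings[j]))
    if sim > 0.5:  # arbitrary threshold, for illustration only
        print(f"{a} ~ {b}: cosine similarity {sim:.2f}")
```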
@dirkuwulff
Dirk Wulff
9 months
Excited to share the materials for our course on open LLMs for science of science research @ZakASHussain and I offered at the recent meeting of @euroscisci at @LMU_Muenchen. https://t.co/7XqejupoBs
github.com
Materials "LLMs for science of science research" training, LMU, 2025 - Zak-Hussain/LLM4SciSci
0
12
33
@dirkuwulff
Dirk Wulff
9 months
Do LLMs think? Excited to share our updated preprint critically discussing two "Justaic" stances claiming that LLMs lack cognition because they are "just" next-token predictors or "just" machines. Led by @ZakASHussain and with @rui__mata. https://t.co/K3KukjzMlw
2
6
32
@dirkuwulff
Dirk Wulff
10 months
The potential of LLMs in social & behavioral science is enormous—but how can we leverage them? @ZakASHussain & I just taught a 5-day course at #GSERM Ljubljana on this. Check out our open materials (cc-by-sa) on using open LLMs with @huggingface: https://t.co/4dHX9OtkJZ
github.com
The course introduces the use of open-source large language models (LLMs) from the Hugging Face ecosystem for research in the behavioral and social sciences. - Zak-Hussain/LLM4BeSci_Ljubljana2025
0
11
47
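As a flavor of the kind of workflow such course materials cover (this sketch is not taken from them), open models from the Hugging Face Hub can be applied to behavioral-science text with a few lines of transformers code; the model id, labels, and example responses below are illustrative assumptions.

```python
# Minimal sketch: zero-shot classification of open-ended survey responses
# with an open model from the Hugging Face Hub (illustrative only).
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # any open NLI model would do
)

responses = [
    "I prefer a guaranteed payoff over a risky gamble.",
    "I usually go with my gut rather than deliberating.",
]
labels = ["risk averse", "risk seeking", "intuitive", "deliberative"]

for text in responses:
    result = classifier(text, candidate_labels=labels)
    print(f"{result['labels'][0]} ({result['scores'][0]:.2f}): {text}")
```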
@dirkuwulff
Dirk Wulff
11 months
🚨 New preprint 🚨 Excited to share this preprint led by @ZakASHussain (with @rui__mata and Ben Newell), comparing the semantic contents of embedding models from text, brain, and behavior data. We find that behavior captures key psychological dimensions better than text…
1
10
32
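To make the comparison concrete, here is a toy sketch of one way text-derived embedding similarities can be checked against behavioral similarity judgements; it is not the preprint's pipeline, and the model id, word pairs, and ratings are invented for illustration.

```python
# Toy sketch: correlate text-embedding similarities with (hypothetical)
# human similarity judgements for a few word pairs.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed model

pairs = [("anger", "rage"), ("anger", "joy"), ("fear", "anxiety"), ("fear", "calm")]
human_ratings = [0.9, 0.1, 0.8, 0.1]  # hypothetical behavioral judgements

embeddings = {w: model.encode(w) for pair in pairs for w in pair}
model_sims = [float(cos_sim(embeddings[a], embeddings[b])) for a, b in pairs]

rho, _ = spearmanr(model_sims, human_ratings)
print(f"Spearman rho (text embeddings vs. behavior): {rho:.2f}")
```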
@ZakASHussain
Zak Hussain
1 year
https://t.co/ukCDQQyPDV This was fun! Looking forward to seeing more research on continual learning (in addition to the "transient" status quo)
0
0
2
@marcel_binz
Marcel Binz
1 year
Excited to announce Centaur -- the first foundation model of human cognition. Centaur can predict and simulate human behavior in any experiment expressible in natural language. You can readily download the model from @huggingface and test it yourself:
huggingface.co
41
244
1K
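For anyone wanting to try the "download and test it yourself" step, a minimal sketch with the transformers library follows; the repository id and prompt format are assumptions (check the authors' Hugging Face page for the current name), and a model of this size needs substantial GPU memory or quantization to run.

```python
# Minimal sketch: load a causal LM from the Hugging Face Hub and prompt it
# with a natural-language description of a choice experiment.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcelbinz/Llama-3.1-Centaur-70B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "You will choose between two options.\n"
    "Option A: 50% chance of 100 points, otherwise 0 points.\n"
    "Option B: 45 points for sure.\n"
    "You choose Option"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```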
@samuelaeschbach
Samuel Aeschbach
1 year
Preprint: Individual-level semantic networks are a critical ingredient for building realistic cognitive models. In our simulation, @rui__mata, @dirkuwulff, and I show when measurements of individual semantic networks are accurate and when they are not. https://t.co/MrWWmuR7vc
0
15
34
@ZakASHussain
Zak Hussain
1 year
https://t.co/hH4t0CwuIp Cool to see the conceptual tools of cognitive science applied directly to challenges in artificial neural network interpretability - especially the distinction between *semantic* and *algorithmic* interpretation.
0
0
1
@marcel_binz
Marcel Binz
1 year
New preprint in which we use methods from mech interp to reveal that Llama-3-70B implements TD-learning in-context.
@TankredSaanum
Tankred Saanum
1 year
Can LLMs do reinforcement learning in-context - and if so, how do they do it? Using Sparse Autoencoders, we find that Llama 3 relies on representations resembling TD errors, Q-values and even the SR to learn in three RL tasks in-context! Co-lead with the inimitable @can_demircann
1
9
37
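For context on the quantities named in the thread, here is a reference sketch of a tabular Q-learning update, in which the TD error is the learning signal the SAE features are reported to resemble; this is not the paper's sparse-autoencoder analysis, and the transitions and parameters are illustrative.

```python
# Reference sketch: tabular Q-learning, where the TD error
#   delta = r + gamma * max_a' Q(s', a') - Q(s, a)
# drives the update (illustrative values only).
import numpy as np

n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def td_update(s, a, r, s_next):
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error

# A few illustrative transitions: (state, action, reward, next state).
for s, a, r, s_next in [(0, 1, 1.0, 1), (1, 0, 0.0, 0), (0, 1, 1.0, 1)]:
    print("TD error:", round(td_update(s, a, r, s_next), 3))
print("Q-values:\n", Q)
```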
@ZakASHussain
Zak Hussain
1 year
https://t.co/B65A28tRdf Really impressed by the clarity of Solms's explanations here. Also cool to hear a new (at least to me), affect-centered perspective on consciousness.
0
0
2
@dirkuwulff
Dirk Wulff
1 year
Great to see this perspective led by @jason_w_burton finally out in @NatureHumBehav. Article: https://t.co/7hNoNKqqBh
@mpib_berlin
Max Planck Institute for Human Development
1 year
How can we make the best possible use of large language models (LLMs) for a smarter and more inclusive society? New article @NatureHumBehav outlines the ways LLMs can help and hurt collective intelligence and proposes recommendations for action. https://t.co/mmH9lVh8Z2
0
15
116
@MelMitchell1
Melanie Mitchell
1 year
Important antitrust lawsuit against academic publishers. We don't have to accept the status quo, in which these companies make huge profits on the backs of scholars, and damage science in the process. https://t.co/DKKsLToULY
7
80
234
@dirkuwulff
Dirk Wulff
1 year
🚨 New preprint 🚨 I am excited to share a piece with @rui__mata and @ZakASHussain in which we advocate for a greater reliance on open LLMs in the behavioral and social sciences. Despite the availability of powerful, more reproducible open alternatives (e.g., via @huggingface)…
5
21
79
@dirkuwulff
Dirk Wulff
1 year
🚨 New article 🚨 Open LLMs are powerful and reproducible alternatives to closed models like GPT or Gemini. In this tutorial, we (@ZakASHussain, @marcel_binz, @rui__mata) explain how LLMs work and show how to apply open LLMs to behavioral science questions using the…
0
37
124
@dirkuwulff
Dirk Wulff
1 year
🚨Call for help🚨 For our piece 'Are LLMs "just" next-token predictors?' ( https://t.co/cT0mRPNI50), we (@ZakASHussain, @rui__mata) are looking for quotes of unsubstantiated deflationary claims about LLM cognition in science or media. See https://t.co/KuqfU7UXud. 🙏Thank you!!
github.com
Contribute to Zak-Hussain/againstJustaism development by creating an account on GitHub.
1
5
13
@memovocab
memovocab
1 year
[Review] "Modern language models and vector-symbolic architectures show that vector-based models are capable of handling the compositional, structured, and symbolic properties required for human concepts" https://t.co/6ZzjyzPVOF
0
17
76