Luca Schulze Buschoff

@lucaschubu

Followers: 51 · Following: 28 · Media: 6 · Statuses: 15

PhD student at @cpilab

Joined November 2023
@lucaschubu
Luca Schulze Buschoff
13 days
RT @marcel_binz: Excited to see our Centaur project out in @Nature. TL;DR: Centaur is a computational model that predicts and simulates hum…
@lucaschubu
Luca Schulze Buschoff
1 month
Check out our updated pre-print here:
@lucaschubu
Luca Schulze Buschoff
1 month
Excited to say our paper got accepted to ICML! We added new findings including this: models fine-tuned on a visual counterfactual reasoning task do not generalize to the underlying factual physical reasoning task, even with test images matched to the fine-tuning data set.
[image attached]
@lucaschubu
Luca Schulze Buschoff
5 months
In previous work we found that VLMs fall short of human visual cognition. To make them better, we fine-tuned them on visual cognition tasks. We find that while this improves performance on the fine-tuning task, it does not lead to models that generalize to other related tasks:
[image attached]
@lucaschubu
Luca Schulze Buschoff
5 months
Check out our pre-print here: This is joint work with @KozzyVoudouris, @elifakata, @MatthiasBethge, Josh Tenenbaum, and @cpilab.
@lucaschubu
Luca Schulze Buschoff
5 months
Finally, we fine-tuned a model on human responses for the synthetic intuitive physics dataset. We find that this model not only shows higher agreement with human observers, but also generalizes better to the real block towers.
[image attached]
@lucaschubu
Luca Schulze Buschoff
5 months
Models fine-tuned on intuitive physics also do not robustly generalize to an almost identical but visually different dataset (Lerer columns below). They are fine-tuned on synthetic block towers, while the dataset by @adamlerer features pictures of real block towers.
[image attached]
@lucaschubu
Luca Schulze Buschoff
5 months
We fine-tuned models on tasks from intuitive physics and causal reasoning. Models fine-tuned on intuitive physics (first two rows) do not perform well on causal reasoning and vice versa. Models fine-tuned on both perform well in either domain, showing models can learn both.
[image attached]
@lucaschubu
Luca Schulze Buschoff
6 months
Happy to say we made the cover too! (@elifakata, @cpilab)
[image attached]
@lucaschubu
Luca Schulze Buschoff
6 months
Our paper (with @elifakata, @MatthiasBethge, @cpilab) on visual cognition in multimodal large language models is now out in @NatMachIntell. We find that VLMs fall short of human capabilities in intuitive physics, causal reasoning, and intuitive psychology.
@lucaschubu
Luca Schulze Buschoff
9 months
RT @marcel_binz: Excited to announce Centaur -- the first foundation model of human cognition. Centaur can predict and simulate human behav…
@lucaschubu
Luca Schulze Buschoff
9 months
RT @TankredSaanum: Object slots are great for compositional generalization, but can models without these inductive biases learn composition…
@lucaschubu
Luca Schulze Buschoff
1 year
Come work with @can_demircann and me on semantic label smoothing (it's cool, I promise)!
@marcel_binz
Marcel Binz
1 year
We currently have several openings for Bachelor and Master students. Topics range from mostly experimental to mostly computational, and everything in between. Please reach out if you are interested. Physical presence in Munich is required. More details:
@lucaschubu
Luca Schulze Buschoff
2 years
RT @cpilab: 🚨Pre-print alert:🚨 Have we built machines that think like people? In new work, led by @lucaschubu and @elifakata and together w…