Sriram Padmanabhan Profile
Sriram Padmanabhan (@SriramPad05)

Followers: 9 · Following: 0 · Media: 6 · Statuses: 10

Undergraduate Student at UT Austin majoring in CS & Math

Joined June 2024
Sriram Padmanabhan (@SriramPad05) · 1 month
RT @kanishkamisra: News🗞️. I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of….
Sriram Padmanabhan (@SriramPad05) · 3 months
For more results (e.g., experiments on models’ parametric knowledge about various hypotheses, as well as results on the city game), check out the paper here:
Sriram Padmanabhan (@SriramPad05) · 3 months
LMs’ zero-shot behavior shows little to no sensitivity to suspicious coincidences. But the results change when knowledge of the hypothesis space is activated either implicitly (Chain-of-Thought) or explicitly (Knowledge), sometimes even qualitatively consistent with human behavior.
Sriram Padmanabhan (@SriramPad05) · 3 months
We test sensitivity in three environments: zero-shot, Chain-of-Thought, and a “Knowledge” prompt that provides the model with explicit access to the possible hypotheses the input and target could be sampled from.
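Concretely, the three conditions might look like the following sketch; the prompt wording here is an assumption for illustration, not the paper's exact phrasing:

def build_prompt(examples, target, condition, hypotheses=None):
    # Base membership question shared by all three conditions.
    base = (f"Numbers sampled from a hidden concept: "
            f"{', '.join(map(str, examples))}. "
            f"Is {target} from the same concept? Answer yes or no.")
    if condition == "zero-shot":
        return base
    if condition == "cot":
        # Implicit activation: elicit step-by-step reasoning.
        return base + " Let's think step by step."
    if condition == "knowledge":
        # Explicit activation: enumerate the candidate hypotheses.
        return "The concept is one of: " + "; ".join(hypotheses) + ". " + base
    raise ValueError(f"unknown condition: {condition}")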
Sriram Padmanabhan (@SriramPad05) · 3 months
We focus on two domains: the number game from Tenenbaum (1999), with human judgments collected by Eric Bigelow and @spiantado, and a world-cities domain (with no human judgments).
Sriram Padmanabhan (@SriramPad05) · 3 months
Given the LM’s yes/no responses, we calculate F1 scores for the members of each hypothesis that fits both the input and the target, and determine whether the model favors the smallest such hypothesis.
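A minimal sketch of this scoring step, assuming a hypothetical helper model_says_yes that returns the model's yes/no answer for a probe item:

from sklearn.metrics import f1_score

def favors_smallest(hypotheses, probe_set, model_says_yes):
    # hypotheses: dict mapping name -> set of members, each consistent
    # with the input and target. Collect the model's answers once.
    answers = [model_says_yes(x) for x in probe_set]
    # Score the answers against each hypothesis's membership labels.
    scores = {
        name: f1_score([x in members for x in probe_set], answers)
        for name, members in hypotheses.items()
    }
    smallest = min(hypotheses, key=lambda name: len(hypotheses[name]))
    # True if the smallest consistent hypothesis gets the highest F1.
    return max(scores, key=scores.get) == smallest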
Sriram Padmanabhan (@SriramPad05) · 3 months
To test model sensitivity to suspicious coincidences, we provide the model with an input that could be sampled from multiple hypotheses (e.g., “16, 8, 2, 64”) and ask it whether a given target value (e.g., “32”) is compatible with the input.
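A minimal sketch of such a membership query, assuming an OpenAI-style chat API; the model name and prompt wording are illustrative, not the paper's setup:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_compatible(examples, target, model="gpt-4o-mini"):
    # Ask whether `target` fits the concept behind `examples`.
    prompt = (
        "Here are some numbers sampled from a hidden concept: "
        + ", ".join(map(str, examples))
        + f". Does the number {target} belong to the same concept? "
        "Answer yes or no."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# is_compatible([16, 8, 2, 64], 32) probes whether the model favors
# "powers of two" over broader hypotheses such as "even numbers".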
Sriram Padmanabhan (@SriramPad05) · 3 months
This is known as the suspicious coincidence effect: if you had meant to convey “odd numbers,” it would be highly suspicious that every example you picked happens to end in 3. Humans show this sensitivity across a wide range of contexts: here, smaller hypotheses are favored over more general ones.
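This preference falls out of the Bayesian size principle (Tenenbaum, 1999): under strong sampling each example is drawn uniformly from the hypothesis, so P(data | h) = (1/|h|)^n for any consistent h, and smaller hypotheses gain with every extra example. A toy computation over a 1-100 universe (the universe and hypothesis sets are assumptions for illustration):

universe = range(1, 101)
hypotheses = {
    "odd numbers": {x for x in universe if x % 2 == 1},   # |h| = 50
    "ends in 3":   {x for x in universe if x % 10 == 3},  # |h| = 10
}
data = [93, 43, 83, 53]

for name, h in hypotheses.items():
    # Strong sampling: each example drawn uniformly from h.
    likelihood = (1 / len(h)) ** len(data) if all(x in h for x in data) else 0.0
    print(f"{name}: |h| = {len(h)}, P(data|h) = {likelihood:.2e}")

# "ends in 3" beats "odd numbers" by (50/10)**4 = 625x: all four examples
# ending in 3 is a suspicious coincidence under the broader hypothesis.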
Sriram Padmanabhan (@SriramPad05) · 3 months
Humans readily show sensitivity to the way data is generated when reasoning inductively. E.g., if some program generated “93, 43, 83, 53”, it’s likely producing numbers ending in 3, even though that’s not the only applicable hypothesis (e.g., they’re all odd numbers).
Sriram Padmanabhan (@SriramPad05) · 3 months
Are LMs sensitive to suspicious coincidences? Our paper finds that, when given access to knowledge of the hypothesis space, LMs can show sensitivity to such coincidences, displaying parallels with human inductive reasoning. w/@kanishkamisra, @kmahowald, @eunsolc