Shibani Santurkar Profile
Shibani Santurkar

@ShibaniSan

Followers: 3K · Following: 1K · Media: 5 · Statuses: 134

@OpenAI

Joined September 2014
@ShibaniSan
Shibani Santurkar
1 year
πŸ“ excited!
@OpenAI
OpenAI
1 year
We're releasing a preview of OpenAI o1, a new series of AI models designed to spend more time thinking before they respond. These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
2 replies · 0 reposts · 15 likes
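A minimal sketch of querying the new series, assuming the official openai Python client and the preview-era model name "o1-preview" (the prompt is illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# o1-series models do their extended "thinking" server-side, so the
# request itself is an ordinary chat completion call.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)
print(response.choices[0].message.content)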
@gdb
Greg Brockman
2 years
we are so back
2K replies · 4K reposts · 49K likes
@ShibaniSan
Shibani Santurkar
2 years
💙💙💙💙💙💙💙
@OpenAI
OpenAI
2 years
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.
0 replies · 2 reposts · 55 likes
@ShibaniSan
Shibani Santurkar
2 years
🚢 🚢 🚢
@OpenAI
OpenAI
2 years
ChatGPT with voice is now available to all free users. Download the app on your phone and tap the headphones icon to start a conversation. Sound on 🔊
0 replies · 1 repost · 14 likes
@ShibaniSan
Shibani Santurkar
2 years
💛
@ilyasut
Ilya Sutskever
2 years
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
1 reply · 0 reposts · 20 likes
@ShibaniSan
Shibani Santurkar
2 years
OpenAI is nothing without its people
21 replies · 43 reposts · 789 likes
@ShibaniSan
Shibani Santurkar
2 years
❀️
@sama
Sam Altman
2 years
i love the openai team so much
1 reply · 1 repost · 68 likes
@adi7sant
aditya sant
3 years
Dear Embassy team, I am an Indian citizen studying in San Diego. I misplaced my passport while travelling from US to Greece via Canada. I am in contact with the consulate in Vancouver but desperate for help. @IndiainToronto @IndiaPassportDC @IndianDiplomacy @DrSJaishankar
6 replies · 11 reposts · 16 likes
@OpenAI
OpenAI
3 years
We're launching ten $100,000 grants for building prototypes of a democratic process for steering AI. Our goal is to fund experimentation with methods for gathering nuanced feedback from everyone on how AI should behave. Apply by June 24, 2023:
Link card · openai.com: Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow,...
531 replies · 1K reposts · 4K likes
@msbernst
Michael Bernstein
3 years
Very cool work by @tatsu_hashimoto and colleagues: ask LLMs questions from Pew surveys in order to measure whose opinions the model's outputs most closely reflect.
@tatsu_hashimoto
Tatsunori Hashimoto
3 years
We know that language models (LMs) reflect opinions - from internet pre-training, to developers and crowdworkers, and even user feedback. But whose opinions actually appear in the outputs? We make LMs answer public opinion polls to find out: https://t.co/wv3F6TOnwe
0 replies · 4 reposts · 27 likes
@percyliang
Percy Liang
3 years
I would not say that LMs *have* opinions, but they certainly *reflect* opinions represented in their training data. OpinionsQA is an LM benchmark with no right or wrong answers. It's rather the *distribution* of answers (and divergence from humans) that's interesting to study.
@tatsu_hashimoto
Tatsunori Hashimoto
3 years
We know that language models (LMs) reflect opinions - from internet pre-training, to developers and crowdworkers, and even user feedback. But whose opinions actually appear in the outputs? We make LMs answer public opinion polls to find out: https://t.co/wv3F6TOnwe
0 replies · 19 reposts · 98 likes
@tatsu_hashimoto
Tatsunori Hashimoto
3 years
We know that language models (LMs) reflect opinions - from internet pre-training, to developers and crowdworkers, and even user feedback. But whose opinions actually appear in the outputs? We make LMs answer public opinion polls to find out: https://t.co/wv3F6TOnwe
4 replies · 97 reposts · 410 likes
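A minimal sketch of the OpinionsQA-style measurement, with a hypothetical stand-in for the LM query and plain total variation in place of the paper's exact divergence metric:

import numpy as np

def model_choice_probs(question, options):
    # Hypothetical stand-in: in the real setup this reads an LM's
    # probabilities over the answer options for a Pew-style question.
    # Fixed values keep the sketch runnable end to end.
    return np.array([0.05, 0.25, 0.50, 0.20])

def total_variation(p, q):
    # Total variation distance between two discrete distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

human = np.array([0.10, 0.35, 0.40, 0.15])  # illustrative human shares
model = model_choice_probs(
    "How much do you worry about climate change?",
    ["A great deal", "Some", "Not much", "Not at all"],
)
print("divergence from humans:", round(float(total_variation(model, human)), 3))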
@aleks_madry
Aleksander Madry
3 years
As ML models/datasets get bigger + more opaque, we need a *scalable* way to ask: where in the *data* did a prediction come from? Presenting TRAK: data attribution with (significantly) better speed/efficacy tradeoffs: w/ @smsampark @kris_georgiev1 @andrew_ilyas @gpoleclerc 1/6
4 replies · 68 reposts · 307 likes
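A rough sketch of the projected-gradient idea behind this style of data attribution, with random vectors standing in for real per-example gradients, and TRAK's kernel correction and ensembling omitted:

import numpy as np

rng = np.random.default_rng(0)
n_train, n_params, k = 1000, 5000, 64

# Stand-ins for per-example gradients (in practice: autograd on the model).
train_grads = rng.normal(size=(n_train, n_params))
test_grad = rng.normal(size=n_params)

P = rng.normal(size=(n_params, k)) / np.sqrt(k)  # JL-style random projection
train_feats = train_grads @ P                    # (n_train, k)
test_feat = test_grad @ P                        # (k,)

scores = train_feats @ test_feat                 # attribution scores
print("most influential training examples:", np.argsort(scores)[::-1][:5])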
@ShibaniSan
Shibani Santurkar
3 years
Auto data selection is comparable to expert-curated data for pretraining LMs! The key signal: n-gram overlap between pretraining and downstream data predicts downstream accuracy well (r=0.89). But it's not the whole story; there's lots to uncover on how pretraining data affects downstream tasks.
@sangmichaelxie
Sang Michael Xie
3 years
Data selection typically involves filtering a large source of raw data towards some desired target distribution, whether it's high-quality/formal text (e.g., Wikipedia + books) for general-domain LMs like GPT-3 or domain-specific data for specialized LMs like Codex.
0 replies · 7 reposts · 36 likes
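A toy sketch of the overlap signal, using raw bigram counts in place of the hashed n-gram features used at scale (corpora and heuristics are illustrative):

from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

# Target distribution: a tiny stand-in for the downstream corpus.
target = bigrams("the quick brown fox jumps over the lazy dog")

def overlap_score(doc):
    # Fraction of the candidate document's bigrams that also occur downstream.
    grams = bigrams(doc)
    hits = sum(c for g, c in grams.items() if g in target)
    return hits / max(1, sum(grams.values()))

for doc in ["the quick brown fox runs", "stock prices fell sharply today"]:
    print(round(overlap_score(doc), 2), "|", doc)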
@percyliang
Percy Liang
3 years
I have 6 fantastic students and post-docs who are on the academic job market this year. Here is a short thread summarizing their work along with one representative paper:
11 replies · 59 reposts · 503 likes
@tsiprasd
Dimitris Tsipras
3 years
Our #NeurIPS2022 poster on in-context learning will be tomorrow (Thursday) at 4pm! Come talk to @shivamg_13 and me at poster #928 🔥
@tsiprasd
Dimitris Tsipras
3 years
LLMs can do in-context learning, but are they "learning" new tasks or just retrieving ones seen during training? w/ @shivamg_13, @percyliang, & Greg Valiant we study a simpler Q: Can we train Transformers to learn simple function classes in-context? 🧵 https://t.co/3aQ0XWWPV9
0 replies · 6 reposts · 37 likes
@RishiBommasani
rishi@NeurIPS
3 years
In August 2021, we launched CRFM with our report on foundation models. 15 months to the day, we have now launched HELM for the holistic evaluation of language models. Blog: https://t.co/ShKztgMMQ4 Website: https://t.co/N65Lb0Fj9N Paper: https://t.co/RiYXWLU1qV 1/n 🧵
3 replies · 20 reposts · 68 likes
@percyliang
Percy Liang
3 years
Language models are becoming the foundation of language technologies, but when do they work and when do they fail? In a new CRFM paper, we propose Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of LMs. Holistic evaluation includes three elements:
13 replies · 199 reposts · 760 likes
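A toy sketch of multi-metric scoring in that spirit, with a hard-coded predict function standing in for a real LM and illustrative metrics in place of HELM's full set (accuracy, calibration, robustness, fairness, bias, toxicity, efficiency):

def predict(prompt):
    # Hypothetical stand-in for a language model call.
    answers = {"what is 2+2?": "4", "capital of france?": "paris"}
    return answers.get(prompt, "")

scenario = [  # (prompt, reference) pairs for one evaluation scenario
    ("what is 2+2?", "4"),
    ("capital of france?", "paris"),
    ("largest planet?", "jupiter"),
]

preds = [predict(p) for p, _ in scenario]
n = len(scenario)
metrics = {  # every model gets the same dashboard, not one headline number
    "accuracy": sum(p == r for p, (_, r) in zip(preds, scenario)) / n,
    "abstention_rate": sum(p == "" for p in preds) / n,
    "mean_answer_length": sum(len(p) for p in preds) / n,
}
print(metrics)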
@tsiprasd
Dimitris Tsipras
3 years
LLMs can do in-context learning, but are they "learning" new tasks or just retrieving ones seen during training? w/ @shivamg_13, @percyliang, & Greg Valiant we study a simpler Q: Can we train Transformers to learn simple function classes in-context? 🧵 https://t.co/3aQ0XWWPV9
Link card · arxiv.org: In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query...
8 replies · 104 reposts · 493 likes
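A minimal sketch of the setup: sample one in-context prompt for the class of linear functions and compute the least-squares baseline a trained transformer is compared against (the transformer itself is omitted):

import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 10                      # input dimension, in-context examples

w = rng.normal(size=d)            # a fresh task f(x) = w @ x
xs = rng.normal(size=(k, d))      # in-context inputs x1..xk
ys = xs @ w                       # their labels f(x1)..f(xk)
x_query = rng.normal(size=d)      # the query the model must label

# The prompt fed to the transformer would be (x1, y1, ..., xk, yk, x_query).
w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)  # least-squares baseline
print("true f(x_query):", round(float(x_query @ w), 3))
print("lstsq estimate:", round(float(x_query @ w_hat), 3))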
@ShibaniSan
Shibani Santurkar
3 years
Based on our findings, we design simple interventions to improve CLIP's ability to leverage web-scraped captions: filtering them, and using GPT-J to paraphrase them as a form of text data augmentation.
0 replies · 0 reposts · 10 likes
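A toy sketch of the two interventions, with illustrative filter heuristics and a hypothetical stand-in for the GPT-J paraphrasing call:

def keep_caption(caption):
    # Illustrative quality filter: drop very short/long or shouty captions.
    toks = caption.split()
    return 3 <= len(toks) <= 30 and not caption.isupper()

def paraphrase(caption):
    # Stand-in for prompting GPT-J, e.g. "Rephrase this caption: {caption}".
    return "a photo of " + caption.lower()

raw = ["IMG_2041.JPG", "A dog catching a frisbee on the beach", "SALE SALE SALE"]
kept = [c for c in raw if keep_caption(c)]
augmented = kept + [paraphrase(c) for c in kept]   # original + paraphrased
print(augmented)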