Alice Oh Profile
Alice Oh (@aliceoh)

Followers: 4K · Following: 6K · Media: 114 · Statuses: 2K

Professor of Computer Science, KAIST

Daejeon, Korea
Joined September 2008
Alice Oh (@aliceoh) · 14 hours
RT @mrl_workshop: If you would like to sign up to be a reviewer, please fill in this form:
[link card: docs.google.com]
0 · 1 · 0
Alice Oh (@aliceoh) · 14 hours
RT @mrl_workshop: The submission deadline for the 5th Workshop on Multilingual Representation Learning is coming up! See details below! htt….
0 · 1 · 0
Alice Oh (@aliceoh) · 5 days
RT @ActInterp: Join us at 09:10 for Been’s talk on Agentic Interpretability and Neologism: what LLMs can offer us.
0 · 2 · 0
Alice Oh (@aliceoh) · 5 days
RT @NeurIPSConf: We're excited to announce a second physical location for NeurIPS 2025, in Mexico City. By expanding our physical locations….
0 · 77 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan Q&A: one Q about how to incorporate the values of marginalized people, and @ancadianadragan answered “tackling value pluralism with value archetypes to represent everyone and lean on deliberation (lots of research on social change)”. Great talk!
[two images attached]
0 · 0 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan How do we incorporate the plurality of human values? This is a rough sketch; curious to know how RL and robotics would actually implement this idea.
[image attached]
1 · 0 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan There are still challenges and some potential ideas to overcome those challenges.
[image attached]
1 · 0 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan But Gemini actually knows what humans want!
[two images attached]
1 · 0 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan Plus human feedback is not actually what humans want. RLHF leading to worse performance #ICML2025
[image attached]
1 · 0 · 0
Alice Oh (@aliceoh) · 5 days
@ancadianadragan It’s just difficult to learn the reward model that reflects what humans want.
[image attached]
1 · 0 · 1
Alice Oh (@aliceoh) · 5 days
If it’s not in your training data that overshooting the person’s mouth is bad, then it’ll overshoot! #ICML2025
[image attached]
1 · 0 · 1
Alice Oh (@aliceoh) · 5 days
Behind the scenes of a coffee pouring robot (which wasn’t able to do it), humans helping to fake it for a photo #ICML2025
[image attached]
1 · 0 · 0
Alice Oh (@aliceoh) · 20 days
RT @NeurIPSConf: NeurIPS is seeking additional ethics reviewers this year. If you are able and willing to participate in the review process….
0 · 12 · 0
Alice Oh (@aliceoh) · 3 months
Great job @official_naver for making these models publicly available. Here's a graph showing a larger version of their model doing quite well on our BLEnD dataset.
[image attached]
Quoted tweet: Kyunghyun Cho (@kchonyc) · 3 months
.@official_naver has been working on large-scale language models since 2019, constantly training large models and even using them commercially. it has been a great pleasure for me to see their work closely over the past years. i am so glad to see their hyperclovax models are
[image attached]
0 · 1 · 20
Alice Oh (@aliceoh) · 3 months
RT @Cohere_Labs: Honoured to have 2 of our datasets recognized by the @StanfordHAI AI Index Report as some of the most significant releases….
0 · 11 · 0
Alice Oh (@aliceoh) · 3 months
RT @CohereForAI: Are you working on a peer-reviewed paper, dataset, code, or another research output and want to leverage Cohere’s models?….
0 · 3 · 0
Alice Oh (@aliceoh) · 3 months
RT @CohereForAI: 🚀 We are excited to introduce Kaleidoscope, the largest culturally-authentic exam benchmark. 📌 Most VLM benchmarks are En….
0 · 23 · 0
Alice Oh (@aliceoh) · 3 months
Starting my sabbatical at Google!
[two images attached]
10 · 2 · 309
Alice Oh (@aliceoh) · 4 months
SKorea upholding democracy, respecting the Constitution, confirming that the voice of the citizens will not be silenced by political agenda. via @NYTimes.
[link card: nytimes.com]
The nation’s top court unanimously upheld the impeachment of Yoon Suk Yeol, clearing the way for the election of a new president.
0 · 0 · 1
Alice Oh (@aliceoh) · 4 months
RT @LChoshen: Many AI models struggle with less common languages—leaving us with awkward, broken responses. But what if we could fix it tog….
0 · 1 · 0