Joseph Chee Chang

@josephcc

Followers: 708 · Following: 876 · Media: 42 · Statuses: 4K

😉💻🔄 Research Scientist @ AI2/Semantic Scholar | Artisanal small-batch handcrafted tweets with no added llms. @[email protected] @josephc.bsky.social

Seattle, originally Taiwan
Joined November 2007
@vishakh_pk
Vishakh Padmakumar
4 days
Last year I worked at @Adobe @AdobeResearch and @allen_ai, exploring how we can help users read, organize and understand long documents. This piece covers what we learned on modelling user intent and combining LLMs with principled tools when building complex pipelines for it!
@NYUDataScience
NYU Center for Data Science
4 days
CDS PhD alum Vishakh Padmakumar (@vishakh_pk), now at @Stanford, tackled the hard part of summarization — deciding what matters. At @Adobe, he built diversity-aware summarizers; at AI2 (@allen_ai), intent-based tools for literature review tables. https://t.co/pNhEjhlUhV
2
5
50
@allen_ai
Ai2
3 months
Introducing Asta—our bold initiative to accelerate science with trustworthy, capable agents, benchmarks, & developer resources that bring clarity to the landscape of scientific AI + agents. 🧵
10
50
220
@mosh_levy
Mosh Levy
3 months
Producing reasoning texts boosts the capabilities of AI models, but do we humans correctly understand these texts? Our latest research suggests that we do not. This highlights a new angle on the "Are they transparent?" debate: they might be, but we misinterpret them. 🧵
8
28
141
@cmalaviya11
Chaitanya Malaviya
3 months
People at #ACL2025, come drop by our poster today & chat with me about how context matters for reliable language model evaluations! Jul 30, 11:00-12:30 at Hall 4X, board 424.
@cmalaviya11
Chaitanya Malaviya
1 year
Excited to share ✨ Contextualized Evaluations ✨! Benchmarks like Chatbot Arena contain underspecified queries, which can lead to arbitrary eval judgments. What happens if we provide evaluators with context (e.g., who's the user, what's their intent) when judging LM outputs? 🧵↓
1
6
23
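The tweet above proposes giving evaluators context about the user and their intent before judging responses to underspecified queries. As a rough illustration only (not the paper's released code), a pairwise judge prompt could simply prepend that context; `llm_judge` below is a hypothetical callable wrapping any chat model.

```python
# Sketch of contextualized evaluation: judge two model responses to an
# underspecified query, optionally prepending synthesized user context.
# `llm_judge` is a hypothetical callable that returns "A", "B", or "tie";
# it stands in for any chat-model API and is not part of the released code.
from typing import Callable, Optional

def build_judge_prompt(query: str, response_a: str, response_b: str,
                       context: Optional[str] = None) -> str:
    """Assemble a pairwise judging prompt, with optional user context."""
    parts = []
    if context:
        parts.append(f"Context about the user and their intent:\n{context}\n")
    parts.append(f"Query: {query}\n")
    parts.append(f"Response A:\n{response_a}\n")
    parts.append(f"Response B:\n{response_b}\n")
    parts.append("Which response better serves this user? Answer A, B, or tie.")
    return "\n".join(parts)

def contextualized_verdict(llm_judge: Callable[[str], str], query: str,
                           response_a: str, response_b: str,
                           context: Optional[str]) -> str:
    """Run the same judge with or without context to compare its verdicts."""
    return llm_judge(build_judge_prompt(query, response_a, response_b, context))
```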
@allen_ai
Ai2
4 months
Ai2 is excited to be at #ACL2025 in Vienna, Austria this week. Come say hello, meet the team, and chat about the future of NLP. See you there! 🤝📚
8
10
60
@arnaik19
Aakanksha Naik
4 months
In Vienna for #ACL2025NLP this week! @josephcc, @aps6992 and I will present the Ai2 ScholarQA scientific QA system on Wed. I’ll also be at @sdpworkshop on Thurs! Hit me up if you’d like to chat about agents for science and post-training, or explore cafes in Vienna 🥐
6
3
28
@tongshuangwu
Sherry Tongshuang Wu
4 months
We all agree that AI models/agents should augment humans instead of replace us in many cases. But how do we pick when to have AI collaborators, and how do we build them? Come check out our #ACL2025NLP tutorial on Human-AI Collaboration w/ @Diyi_Yang @josephcc, 📍 7/27 9am @ Hall N!
1
23
119
@allen_ai
Ai2
4 months
In our new paper, “Contextualized Evaluations: Judging Language Model Responses to Underspecified Queries,” we find that adding just a bit of missing context can reorder model leaderboards—and surface hidden biases. 🧵👇
5
29
160
@allen_ai
Ai2
4 months
This new ScholarQA capability works for most openly licensed papers. It’s part of our commitment to transparency in science and making it easier to verify, trace, and build trusted AI.
1
2
17
@josephcc
Joseph Chee Chang
4 months
You can now jump from Scholar QA answers to highlighted evidence in the source paper's PDF : )
@allen_ai
Ai2
4 months
We’ve upgraded ScholarQA, our agent that helps researchers conduct literature reviews efficiently by providing detailed answers. Now, when ScholarQA cites a source, it won’t just tell you which paper it came from–you’ll see the exact quote, highlighted in the original PDF. 🧵
0
1
16
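The update above links each cited quote to a highlighted span in the source PDF. One way such linking can work in principle (an illustrative sketch, not ScholarQA's actual pipeline) is to locate the quoted text in the extracted PDF text after normalizing whitespace, then hand the resulting offsets to the viewer for highlighting.

```python
# Sketch: anchor a cited quote to offsets in extracted PDF text so a viewer
# can highlight it. Whitespace is normalized before matching, since PDF
# extraction often differs from the quoted string in line breaks and spacing.
# Illustrative only; not how ScholarQA actually does it.
import re
from typing import Optional, Tuple

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase for robust substring matching."""
    return re.sub(r"\s+", " ", text).strip().lower()

def find_quote_offsets(page_text: str, quote: str) -> Optional[Tuple[int, int]]:
    """Return (start, end) offsets of the quote within the whitespace-normalized
    page text, or None if the quote cannot be found."""
    norm_page = normalize(page_text)
    norm_quote = normalize(quote)
    start = norm_page.find(norm_quote)
    if start == -1:
        return None
    return start, start + len(norm_quote)
```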
@allen_ai
Ai2
4 months
Introducing SciArena, a platform for benchmarking models across scientific literature tasks. Inspired by Chatbot Arena, SciArena applies a crowdsourced LLM evaluation approach to the scientific domain. 🧵
12
64
407
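SciArena, like Chatbot Arena, collects pairwise human preferences between model answers. A common way to turn such votes into a leaderboard is an Elo-style rating update, sketched below with made-up model names and votes; the platform's actual aggregation may differ (e.g., a Bradley-Terry fit).

```python
# Sketch: Elo-style ratings from pairwise preference votes, as used by
# arena-style leaderboards. Model names and votes are made up.
from collections import defaultdict

K = 32  # update step size

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner: str, loser: str) -> None:
    """Shift ratings toward the observed outcome of one pairwise vote."""
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - e_w)
    ratings[loser] -= K * (1.0 - e_w)

ratings = defaultdict(lambda: 1000.0)
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-c", "model-b")]
for winner, loser in votes:
    update(ratings, winner, loser)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```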
@ai2_s2research
Semantic Scholar Research @ AI2
6 months
@allen_ai @SemanticScholar is hiring an #ml #nlp #ai reasoning researcher for a Research Scientist, Agents for Science position with target start dates in 2025. Excited about developing AI systems with deep reasoning capabilities for science? Send an application our way!
1
10
21
@PhilippeLaban
Philippe Laban
6 months
🆕paper: LLMs Get Lost in Multi-Turn Conversation. In real life, people don’t speak in perfect prompts. So we simulate multi-turn conversations — less lab-like, more like real use. We find that LLMs get lost in conversation. 👀What does that mean? 🧵1/N 📄 https://t.co/xt2EfGRh7e
4
38
129
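The paper above studies what happens when a task is revealed across turns instead of in one fully specified prompt. Below is a minimal sketch of that kind of simulation setup; `assistant` is a hypothetical callable over a chat-style message list and the sharded example is invented, so this only illustrates the idea in the tweet, not the paper's exact harness.

```python
# Sketch: simulate a multi-turn conversation by revealing a task specification
# one piece per turn instead of as a single complete prompt. `assistant` is a
# hypothetical callable over a chat-style message list.
from typing import Callable, Dict, List

Message = Dict[str, str]

def run_sharded_conversation(assistant: Callable[[List[Message]], str],
                             shards: List[str]) -> List[Message]:
    """Feed instruction pieces turn by turn, collecting assistant replies."""
    history: List[Message] = []
    for shard in shards:
        history.append({"role": "user", "content": shard})
        reply = assistant(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Example: a fully specified request, split into under-specified turns.
shards = [
    "Write a SQL query over the `orders` table.",
    "Actually, only include orders from 2024.",
    "And group the totals by customer.",
]
```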
@kevpjk
Kevin Pu
7 months
I am presenting IdeaSynth in the last paper session at #CHI2025 right now! Feel free to come by G314-315 to learn about how we utilize LLMs to provide literature-grounded assistance for research idea development! The talk is happening at around 10:12 AM.
@kevpjk
Kevin Pu
9 months
🔬Research ideation is hard: After the spark of a brilliant initial idea, much work is still needed to further develop it into a well-thought-out project by iteratively expanding and refining the initial idea and grounding it in relevant literature. How can we better support this?
1
8
45
@RuotongWang1
Ruotong Wang
7 months
AI agents are entering online social spaces, but often their messages feel generic or intrusive. In our #CHI25 paper, we introduce Social-RAG, a workflow that grounds AI generations in the specific group context by retrieving from the group’s interaction history. 🧵(1/9)
2
21
85
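Social-RAG, as described above, retrieves from a group's interaction history and grounds generation in what it finds. The sketch below shows that retrieve-then-generate shape with a toy word-overlap retriever and a hypothetical `generate` callable; the paper's actual retrieval and prompting details will differ.

```python
# Sketch of a retrieve-then-generate loop over a group's interaction history.
# The word-overlap retriever and the `generate` callable are stand-ins, not
# the Social-RAG paper's actual components.
from typing import Callable, List

def retrieve(history: List[str], query: str, k: int = 3) -> List[str]:
    """Rank past group messages by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(history, key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

def social_rag_reply(generate: Callable[[str], str], history: List[str],
                     query: str) -> str:
    """Ground the generated reply in retrieved group context."""
    context = "\n".join(retrieve(history, query))
    prompt = (f"Group context:\n{context}\n\n"
              f"Write a reply to the group that fits this context.\n"
              f"Message: {query}")
    return generate(prompt)
```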
@allen_ai
Ai2
8 months
Meet Ai2 Paper Finder, an LLM-powered literature search system. Searching for relevant work is a multi-step process that requires iteration. Paper Finder mimics this workflow — and helps researchers find more papers than ever 🔍
19
217
1K
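Paper Finder is described as mimicking the iterative way researchers search: query, skim results, refine, repeat. The loop below sketches that idea under stated assumptions; `search` and `propose_refinement` are hypothetical callables, not Ai2's implementation.

```python
# Sketch of an iterative literature-search loop: issue a query, collect new
# results, let a model propose a refined query, and repeat. `search` and
# `propose_refinement` are hypothetical callables.
from typing import Callable, Dict, List

def iterative_paper_search(search: Callable[[str], List[Dict]],
                           propose_refinement: Callable[[str, List[Dict]], str],
                           query: str, max_rounds: int = 3) -> List[Dict]:
    """Accumulate unique papers across several query-refinement rounds."""
    found: List[Dict] = []
    seen_ids = set()
    for _ in range(max_rounds):
        for paper in search(query):
            if paper["id"] not in seen_ids:
                seen_ids.add(paper["id"])
                found.append(paper)
        query = propose_refinement(query, found)  # e.g., add missing keywords
    return found
```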
@rayrayfok
Raymond Fok
8 months
We are looking for CS researchers to participate in a study exploring how AI can change the way we do literature reviews. 📚🧑‍🎓 Time: ~90 min, remote. Compensation: $60 USD. Sign up here: https://t.co/5P0hDpCUMQ @dsweld @amyxzh @josephcc @marissa_rad @Siangliulue @turingmusician
docs.google.com
We are researchers from the University of Washington and AI2, and are currently recruiting participants for a user study that explores how AI tools can support scientific literature review. During...
0
12
28
@allen_ai
Ai2
8 months
We’re excited to share some updates to Ai2 ScholarQA:
🗂️ You can now sign in via Google to save your query history across devices and browsers.
📚 We added 108M+ paper abstracts to our corpus - expect to get even better responses!
✨ The backbone model has been updated to the …
3
37
167
@kylelostat
Kyle Lo
9 months
working with PDFs for LMs, errors in extracting content can be soul-crushing. we need not just clean OCR but also correct reading order and structured info representation. our latest in tools for this, olmOCR, is incredible! here's page 5 of our tech report as plain text w/ table 🤯
@allen_ai
Ai2
9 months
Introducing olmOCR, our open-source tool to extract clean plain text from PDFs! Built for scale, olmOCR handles many document types with high throughput. Run it on your own GPU for free—at over 3,000 tokens/s, equivalent to $190 per million pages, or 1/32 the cost of GPT-4o!
1
15
84