Chao Zhang Profile

Chao Zhang
@chaozhangcs

Followers: 612 · Following: 28 · Media: 11 · Statuses: 38

Assistant Professor @ Georgia Tech CSE. LLM, Uncertainty, AI for science.

Atlanta · Joined August 2020
@GTCSE
GaTech CSE
9 months
CSE's B. Aditya Prakash, Chao Zhang, and Xiuwei Zhang are among the 1⃣6⃣ #GTComputing faculty awarded promotion and tenure this year! Be sure to check out the full list in the story πŸ‘‡, and join us in congratulating all our esteemed colleagues!πŸ₯³πŸŽ‰ https://t.co/dZOKZKNrQ3
0 replies · 5 reposts · 18 likes
@yuchen_zhuang
Yuchen Zhuang
1 year
Excited to present HYDRA πŸ‰ at #NeurIPS2024! πŸš€ Our novel model-factorization framework combines personal behavior patterns πŸ‘€ with global knowledge 🌐 for truly personalized LLM generation. Achieves 9%+ gains over SOTA across 5 tasks πŸ† using personalized RAG. Learn more:
2 replies · 16 reposts · 85 likes
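A minimal sketch of the factorization idea described above, assuming a shared global scorer plus a lightweight per-user head; all function names here are hypothetical placeholders, not the released HYDRA code:

```python
# Sketch of HYDRA-style model factorization for personalized RAG:
# a shared "base" component captures global knowledge, while a small
# per-user component captures personal behavior patterns; both vote on
# which retrieved passage to feed the LLM.

def base_score(query: str, passage: str) -> float:
    # Placeholder for a globally trained reranker (shared across users).
    return len(set(query.split()) & set(passage.split()))

def user_score(user_history: list[str], passage: str) -> float:
    # Placeholder for a lightweight user-specific head trained on the
    # user's own interaction history.
    return sum(p in passage for p in user_history)

def personalized_rerank(query, passages, user_history, alpha=0.5):
    # Factorized score: global relevance plus personal preference.
    return max(passages, key=lambda p: alpha * base_score(query, p)
                                       + (1 - alpha) * user_score(user_history, p))

passages = ["jazz clubs downtown", "classical concerts downtown"]
print(personalized_rerank("music tonight", passages, user_history=["jazz"]))
```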
@konglingkai_AI
Lingkai Kong
1 year
Time-MMD: A New Multi-Domain Multimodal Dataset for Time Series Analysis. ⏰ Friday, December 13. This is in collaboration with @HessianLiu, Shangqing Xu, @leozhao_zhiyuan, @Harsha_64, Aditya B. Sasanur, Megha Sharma, @jiamingcui1997, @qingsongedu, @chaozhangcs, @badityap
1 reply · 3 reposts · 3 likes
@konglingkai_AI
Lingkai Kong
1 year
Aligning Large Language Models with Representation Editing: A Control Perspective. ⏰ Thursday, December 12. This is in collaboration with @Haorui_Wang123*, Wenhao Mu*, @YuanqiD, @yuchen_zhuang, @YifeiZhou02, Yue Song, @rongzhi_zhang, @kaiwang_gua, @chaozhangcs.
2 replies · 4 reposts · 10 likes
@GTCSE
GaTech CSE
1 year
Day 1 of #NeurIPS2024 kicks off today! Check out the GT @ NeurIPS 2024 website πŸ”—πŸ‘‡ for a deep dive of @GeorgiaTech's 162+ researchers and their 84 papers being presented this week in Vancouver! https://t.co/JEPbCJvAHt @gtcomputing @GTResearchNews @ICatGT @gatech_scs #NeurIPS
0 replies · 4 reposts · 14 likes
@yue___yu
Yue Yu
1 year
πŸ” Reward modeling is a reasoning taskβ€”can self-generated CoT-style critiques help? πŸš€ Check out my intern work at Llama Team @AIatMeta, 3.7-7.3% gains on RewardBench vs. RM & LLM judge baselines, with better generalization & data efficiency! https://t.co/Mcv3NvS4lf #rlhf #LLM
5 replies · 50 reposts · 197 likes
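An illustrative sketch of the critique-then-score pattern as the tweet frames it; `generate` is a hypothetical stand-in for any instruction-tuned LLM call, not the Llama Team implementation:

```python
# Critique-conditioned reward modeling: the model first writes a
# chain-of-thought critique of a candidate response, then assigns a
# scalar score conditioned on that critique.

def generate(prompt: str) -> str:
    # Placeholder for an LLM call.
    return "The response is factually correct but omits caveats. Score: 7"

def critique_then_score(question: str, response: str) -> float:
    critique = generate(
        f"Critique the following answer step by step, then end with "
        f"'Score: <1-10>'.\nQ: {question}\nA: {response}"
    )
    # Parse the scalar reward out of the self-generated critique.
    return float(critique.rsplit("Score:", 1)[1].strip())

print(critique_then_score("What is RLHF?", "A fine-tuning method."))
```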
@konglingkai_AI
Lingkai Kong
1 year
Happy to share RE-Control is accepted to #NeurIPS2024!
@konglingkai_AI
Lingkai Kong
1 year
1/n Want to align your LLM with human objectives but lack the computing resources for fine-tuning?😭😭 We propose RE-Control, which aligns LLMs through representation editing (RE) from a control perspective!🀩🀩 Arxiv: https://t.co/wiLuOAMHFs Code: https://t.co/2rY6AlLNT9
0 replies · 2 reposts · 17 likes
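A toy sketch of test-time representation editing in the spirit of RE-Control's control framing, using a linear value function for illustration only; the actual method and trained value head are in the linked paper and code:

```python
# Treat the LLM's hidden state as the state of a dynamical system,
# train a value function on it, and at inference nudge the hidden
# state along the value gradient instead of fine-tuning weights.

STEP = 0.1  # intervention strength

def value_grad(hidden, w):
    # For a linear value function v(h) = w . h, the gradient wrt h is w.
    return w

def edit_hidden_state(hidden, w, steps=3):
    # Gradient ascent on the value function acts as the "control input"
    # applied to the representation at each decoding step.
    for _ in range(steps):
        g = value_grad(hidden, w)
        hidden = [h + STEP * gi for h, gi in zip(hidden, g)]
    return hidden

h = [0.2, -0.5, 1.0]            # stand-in for one layer's hidden state
w = [1.0, 0.0, -1.0]            # stand-in for a trained value head
print(edit_hidden_state(h, w))  # state steered toward higher value
```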
@_weiping
Wei Ping
1 year
Introducing RankRAG, a novel RAG framework that instruction-tunes a single LLM for the dual purposes of top-k context ranking and answer generation in RAG. For context ranking, it performs exceptionally well by incorporating a small fraction of ranking data into the training
4 replies · 40 reposts · 152 likes
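A hedged sketch of the dual-purpose inference pattern the tweet describes, with `llm` as a hypothetical placeholder for the single instruction-tuned model:

```python
# RankRAG-style inference: one LLM first scores retrieved contexts,
# then generates the answer from the top-k it selected itself.

def llm(prompt: str) -> str:
    # Placeholder for the instruction-tuned model.
    return "8" if "relevant" in prompt.lower() else "Paris."

def rank_then_generate(question, contexts, k=2):
    # Step 1: the same LLM rates each context's relevance (0-10).
    scored = [(float(llm(f"How relevant (0-10) is this passage to "
                         f"'{question}'?\n{c}")), c) for c in contexts]
    top = [c for _, c in sorted(scored, reverse=True)[:k]]
    # Step 2: the same LLM answers from the contexts it just ranked.
    return llm("Answer using only these passages:\n"
               + "\n".join(top) + f"\nQ: {question}")

print(rank_then_generate("Capital of France?", ["Paris is the capital.",
                                                "Bordeaux makes wine."]))
```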
@konglingkai_AI
Lingkai Kong
1 year
1/n Want to align your LLM with human objectives but lack the computing resources for fine-tuning?😭😭 We propose RE-Control, which aligns LLMs through representation editing (RE) from a control perspective!🀩🀩 Arxiv: https://t.co/wiLuOAMHFs Code: https://t.co/2rY6AlLNT9
1 reply · 17 reposts · 92 likes
@chaozhangcs
Chao Zhang
1 year
πŸŽ‰ Congrats, Lingkai! πŸŽ“ I am incredibly proud of your achievements and the impact you've made at @GTCSE. Wishing you all the best as you embark on your exciting new journey. Keep soaring! πŸš€ #ProudMentor
@konglingkai_AI
Lingkai Kong
1 year
I'll cherish my time at @GTCSE! Also looking forward to my new journey at @HCRCS working with @MilindTambe_AI!
1 reply · 0 reposts · 18 likes
@YuanqiD
Yuanqi Du
1 year
🧵1/n LLMs significantly improve Evolutionary Algorithms for molecular discovery! For 18 different molecular optimization tasks, we demonstrate how to achieve SOTA performance by incorporating different LLMs! Learn more in our new paper! Website: https://t.co/S0zw97Ialr (w/ Code)
2 replies · 26 reposts · 97 likes
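A minimal sketch of the general recipe, assuming the LLM serves as the variation operator inside a standard evolutionary loop over SMILES strings; `llm_mutate` and `fitness` are hypothetical placeholders:

```python
# Plug an LLM in as the mutation/crossover operator of an evolutionary
# algorithm over molecules; selection stays classical.
import random

def llm_mutate(parent: str) -> str:
    # Placeholder: an LLM would propose a chemically sensible edit here,
    # prompted with the parent SMILES and the optimization objective.
    return parent + random.choice(["C", "O", "N"])

def fitness(smiles: str) -> float:
    # Placeholder objective (e.g., a predicted property score).
    return smiles.count("O") - 0.1 * len(smiles)

def evolve(population, generations=10, keep=4):
    for _ in range(generations):
        # LLM-driven variation, then standard truncation selection.
        children = [llm_mutate(p) for p in population]
        population = sorted(population + children, key=fitness,
                            reverse=True)[:keep]
    return population[0]

print(evolve(["CCO", "CCC", "CCN", "COC"]))
```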
@victorxfung
Victor Fung
1 year
We recently explored if LLMs can be used to accelerate materials discovery. Instead of using LLMs to predict properties or generating new materials directly, we ask it to iteratively modify a starting material towards a given property target. Details here: https://t.co/kBXYHV2chE
arxiv.org: Discovering new materials can have significant scientific and technological implications but remains a challenging problem today due to the enormity of the chemical space. Recent advances in...
3 replies · 6 reposts · 47 likes
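A toy sketch of the iterative-modification loop described above, under the assumption that a surrogate property predictor gates each LLM-proposed edit; all names are illustrative, not the paper's code:

```python
# Ask an LLM to repeatedly modify one starting material toward a
# property target, keeping an edit only if a predictor says it moved
# closer to the target.

TARGET = 3.0  # hypothetical property target (e.g., band gap in eV)

def propose_edit(material: str) -> str:
    # Placeholder: an LLM would suggest a substitution or doping here.
    return material + "*"

def predict_property(material: str) -> float:
    # Placeholder surrogate model.
    return 1.0 + 0.4 * material.count("*")

def optimize(material: str, steps: int = 10) -> str:
    for _ in range(steps):
        candidate = propose_edit(material)
        # Accept only edits that move the property toward the target.
        if (abs(predict_property(candidate) - TARGET)
                < abs(predict_property(material) - TARGET)):
            material = candidate
    return material

best = optimize("SrTiO3")
print(best, predict_property(best))
```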
@haotiansun014
Haotian Sun
2 years
Having trouble with blind domain adaptation for GPTs through OpenAI or Azure 🤔? We are excited to introduce BBox-Adapter 🔌— Lightweight Adapting for Black-Box #LLMs📦. BBox-Adapter offers a transparent, privacy-conscious, and cost-effective solution for customizing
1 reply · 4 reposts · 11 likes
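One plausible reading of the black-box adaptation pattern, sketched minimally: sample candidates from the closed API and let a small local scorer pick among them. All names are hypothetical; see the BBox-Adapter paper for the actual method:

```python
# The GPT-style model stays a black box behind an API; a small local
# adapter scores sampled candidates so domain preferences are applied
# without ever touching model weights.

def blackbox_llm(prompt: str, n: int = 3) -> list[str]:
    # Placeholder for n sampled completions from a closed API.
    return [f"answer {i} to: {prompt}" for i in range(n)]

def adapter_score(text: str) -> float:
    # Placeholder for a small trainable scorer run locally on candidate
    # outputs (this is where the domain adaptation lives).
    return -abs(len(text) - 25)  # toy preference for ~25-char outputs

def adapted_generate(prompt: str) -> str:
    candidates = blackbox_llm(prompt)
    return max(candidates, key=adapter_score)

print(adapted_generate("summarize the report"))
```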
@luoyunan
Yunan Luo
2 years
Big thanks to @anthonygitter for the nice summary of our #RECOMB2024 paper!
@anthonygitter
Anthony Gitter
2 years
"Contrastive Fitness Learning: Reprogramming Protein Language Models for Low-N Learning of Protein Fitness Landscape" by @luoyunan and team. https://t.co/4fZqmxGWQD 1/
0 replies · 2 reposts · 20 likes
@chaozhangcs
Chao Zhang
2 years
Want smarter LLM agents? πŸ€– Join Haotian's @haotiansun014 poster on AdaPlanner tomorrow! πŸ“… It enables LLMs to think ahead & plan adaptively based on feedback. #NeurIPS2023 #LLMs #LLMagent https://t.co/byl5Stx2uD
@haotiansun014
Haotian Sun
2 years
Excited to introduce AdaPlanner, our LLM agent for solving embodied tasks via closed-loop planning. Key features: 1) Adaptively refines LLM-generated plan from environment feedback, with both in-plan and out-of-plan refining strategies 2) A code-style LLM prompt structure to
0 replies · 0 reposts · 16 likes
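A minimal sketch of closed-loop plan refinement in the spirit of the description above; the helpers are hypothetical placeholders rather than the released AdaPlanner prompts:

```python
# Execute a plan step by step and, when the environment signals
# failure, ask the LLM to refine the remaining plan before continuing.

def llm_plan(task: str) -> list[str]:
    return ["open drawer", "take key", "unlock door"]  # placeholder

def llm_refine(plan, failed_step, feedback):
    # Placeholder for an out-of-plan refinement prompt: propose a new
    # subgoal in response to the environment feedback.
    return ["look under mat"] + plan

def execute(step: str):
    # Placeholder environment: the key is not in the drawer.
    return (False, "drawer is empty") if step == "take key" else (True, "ok")

def run(task: str, max_refines: int = 2):
    plan = llm_plan(task)
    while plan and max_refines >= 0:
        step = plan.pop(0)
        ok, feedback = execute(step)
        if not ok:
            plan = llm_refine(plan, step, feedback)
            max_refines -= 1
    return "done" if not plan else "gave up"

print(run("unlock the door"))
```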
@chaozhangcs
Chao Zhang
2 years
#NeurIPS2023 How can #LLMs generate synthetic data for training smaller models? Join Yue's @yue___yu poster session at NeurIPS2023 (Session 6) & find out more! https://t.co/ptfyBc0tRO
@yue___yu
Yue Yu
2 years
πŸš€ Checkout our new preprint! πŸ” LLM-as-training-data-generator is a recent paradigm for zero-shot learning. We design attributed prompts to generate diverse training data from #LLMs automatically. 1/n Link: https://t.co/Ka1HIF8WB0 Code: https://t.co/4wM30DzqkY
0 replies · 1 repost · 14 likes
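A small sketch of attributed prompting as described: enumerate attribute combinations so the synthetic corpus stays diverse. The attribute set and the `generate` stub are illustrative assumptions, not the paper's configuration:

```python
# Instead of one generic prompt per class, fill a template with every
# combination of data attributes (subtopic, length, style, ...).
from itertools import product

ATTRIBUTES = {
    "subtopic": ["pricing", "shipping", "returns"],
    "length":   ["one sentence", "short paragraph"],
    "style":    ["formal", "casual"],
}

def generate(prompt: str) -> str:
    return f"[synthetic text for: {prompt}]"  # placeholder LLM call

def attributed_dataset(label: str):
    data = []
    for combo in product(*ATTRIBUTES.values()):
        attrs = dict(zip(ATTRIBUTES, combo))
        prompt = (f"Write a {attrs['length']}, {attrs['style']} customer "
                  f"review about {attrs['subtopic']} with sentiment "
                  f"'{label}'.")
        data.append((generate(prompt), label))
    return data

print(len(attributed_dataset("positive")))  # 3 * 2 * 2 = 12 examples
```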
@chaozhangcs
Chao Zhang
2 years
Yuchen will present this work at today's poster session 1 #423. Don't miss out if you are at #NeurIPS2023
@yuchen_zhuang
Yuchen Zhuang
2 years
πŸ”§Thrilled to introduce #ToolQA, a new dataset to evaluate the capabilities of #LLMs in answering challenging questions with external tools. It offers two levels (easy/hard) across eight real-life scenarios. πŸš€ More details below: 🧡(1/5)
0 replies · 4 reposts · 8 likes
@chaozhangcs
Chao Zhang
2 years
"May the force be with you" at #NeurIPS2023! Interested in ML forcefield & MD simulation? Don't miss today's poster session 1 #1919. My student Rui will share our work on unified force-centric pre-training over 3D molecular conformations! https://t.co/tCzOTIu5g6
0 replies · 2 reposts · 12 likes
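A toy force-matching objective of the kind such force-centric pre-training builds on: predict per-atom 3D forces from a conformation and penalize squared error against reference forces. The linear "model" here is purely illustrative:

```python
def predict_forces(positions, w):
    # Placeholder model: per-atom force as a scaled position vector.
    return [[w * x for x in atom] for atom in positions]

def force_mse(pred, ref):
    # Mean squared error over all atoms and xyz components.
    terms = [(p - r) ** 2 for pa, ra in zip(pred, ref)
             for p, r in zip(pa, ra)]
    return sum(terms) / len(terms)

positions = [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]]    # two atoms
ref_forces = [[0.0, 0.0, 0.0], [0.0, 0.0, -2.0]]  # reference labels

for w in (-1.0, -2.0):
    print(w, force_mse(predict_forces(positions, w), ref_forces))
```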
@chaozhangcs
Chao Zhang
2 years
#KDD2023 If you care about uncertainty quantification and trustworthy AI, don't miss our tutorial tomorrow 1pm at Room 202A
@konglingkai_AI
Lingkai Kong
2 years
We will present our #KDD tutorial on β€œUncertainty Quantification in Deep Learning” at 1-4pm, Aug. 6th. We will discuss recent progress in uncertainty-aware DNNs and their applications across various domains. Welcome to attend & engage with us! ( https://t.co/S33HjlGrn0) @kdd_news
0 replies · 1 repost · 23 likes
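As a generic illustration of one technique such a tutorial might cover (not taken from the tutorial materials), here is Monte Carlo dropout: keep dropout active at inference and read the spread of repeated stochastic predictions as predictive uncertainty:

```python
import random
import statistics

def stochastic_predict(x: float) -> float:
    # Placeholder network with dropout left on at inference:
    # each unit's contribution is dropped with probability 0.5.
    units = [0.8 * x, -0.3 * x, 0.5]
    return sum(u for u in units if random.random() > 0.5) * 2  # rescale

def mc_dropout(x: float, samples: int = 100):
    preds = [stochastic_predict(x) for _ in range(samples)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = mc_dropout(2.0)
print(f"prediction {mean:.2f} +/- {std:.2f}")
```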
@Marktechpost
Marktechpost AI Dev News ⚑
2 years
1/4 🧵 New research introduces AttrPrompt, a Language Model as Training Data Generator. This is a game-changer for Zero-Shot Learning, a paradigm that allows AI to understand tasks it's never seen before. 🚀 @yue___yu Quick Read:
marktechpost.com: A New AI Research Introduces AttrPrompt: A LLM-as-Training-Data-Generator for a New Paradigm in Zero-Shot Learning
3 replies · 33 reposts · 83 likes