Shuaicheng Zhang Profile
Shuaicheng Zhang

@zshuai8_az

Followers
54
Following
118
Media
2
Statuses
53

PhD candidate at VT #GNN #LLMonGRAPHS

Blacksburg, VA
Joined March 2015
@zshuai8_az
Shuaicheng Zhang
1 year
This is just absurd; how can this happen at NeurIPS?
@sunjiao123sun_
Jiao Sun
1 year
Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can't believe this happened at the best AI conference @NeurIPSConf. We have ethical reviews for authors, but missed it for invited speakers? 😡
0
0
0
@zshuai8_az
Shuaicheng Zhang
2 years
Best attention visualization ever
@3blue1brown
Grant Sanderson
2 years
The next chapter about transformers is up on YouTube, digging into the attention mechanism: https://t.co/TWNXiWM2az The model works with vectors representing tokens (think words), and this is the mechanism that allows those vectors to take in meaning from context.
0
0
0
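The attention mechanism the video describes can be sketched in a few lines of NumPy. This is an illustrative toy, not the video's notation: the token count, dimensions, and random weight matrices are all made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scores[i, j]: how strongly token i attends to token j
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # context-weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 token vectors, dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)           # shape (4, 8)
```

Each output row is a weighted average of the value vectors, which is exactly how a token's vector "takes in meaning from context."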
@_akhaliq
AK
2 years
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning. Despite vision-language models' (VLMs) remarkable capabilities as versatile visual assistants, two substantial challenges persist within the existing VLM frameworks: (1) lacking task diversity in pretraining
2
41
138
@jaschasd
Jascha Sohl-Dickstein
2 years
Have you ever done a dense grid search over neural network hyperparameters? Like a *really dense* grid search? It looks like this (!!). Bluish colors correspond to hyperparameters for which training converges, reddish colors to hyperparameters for which training diverges.
272
2K
10K
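The converge/diverge boundary in that grid can be reproduced in miniature with a toy 1-D objective. This sketch runs gradient descent on f(x) = x², where the update multiplies x by (1 - 2·lr), so training provably diverges once the learning rate passes 1; the intricate fractal boundary in the tweet only appears for real networks. All names and thresholds here are illustrative.

```python
import numpy as np

def converges(lr, steps=200):
    # gradient descent on f(x) = x^2: x <- x - lr * 2x
    # each step scales x by (1 - 2*lr), so |1 - 2*lr| > 1 diverges
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x
        if abs(x) > 1e6:
            return False
    return True

lrs = np.linspace(0.01, 1.5, 50)
grid = [converges(lr) for lr in lrs]  # True = "bluish", False = "reddish"
```

Scanning a 2-D grid (e.g. learning rate × initialization scale) and coloring each cell by this boolean is exactly the kind of plot the tweet shows, just without the fractal fine structure.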
@OpenAI
OpenAI
2 years
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W Prompt: "Beautiful, snowy
9K
30K
132K
@gdb
Greg Brockman
2 years
Announcing Sora — our model which creates minute-long videos from a text prompt: https://t.co/SZ3OxPnxwz
1K
3K
20K
@WenhuChen
Wenhu Chen
2 years
Wishing everyone a happy Chinese New Year!
3
21
181
@sainingxie
Saining Xie
2 years
🌎 𝕤𝕒𝕪 𝕙𝕖𝕝𝕝𝕠 𝕥𝕠 𝕧𝕚𝕣𝕝 🌍 https://t.co/eBq55NxcbJ
10
64
324
@jiank_uiuc
Jian Kang
2 years
โ—โ—โ—Workshop Call for Paperโ—โ—โ— Interested in trustworthy learning on graphs? We are inviting contributions to the 2nd Workshop on Trustworthy Learning on Graphs (TrustLOG), colocated with WWW2024. Workshop website: https://t.co/uYElfQIBtH Deadline: February 13, 2024
1
9
23
@ysu_nlp
Yu Su
2 years
Thrilled to share that we @osunlp have 5 papers accepted to #ICLR2024 (2 spotlights/3 posters), covering LLM knowledge conflicts, math LLMs, language agents, interpretable transformers, and instruction tuning. Interestingly, these are my first ICLR papers. Glad to get 5 firsts! 🧵
2
11
90
@BowenJin13
Bowen Jin
2 years
New ๐ฌ๐ฎ๐ซ๐ฏ๐ž๐ฒ paper about "๐‹๐‹๐Œ๐ฌ ๐จ๐ง ๐†๐ซ๐š๐ฉ๐ก๐ฌ"! We provide a comprehensive overview of LLMs on graphs. We systematically summarize scenarios where LLMs are utilized on graphs and discuss specific techniques. paper:
3
25
109
@xiangyue96
Xiang Yue
2 years
🚀 Introducing MMMU, a Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI. https://t.co/vPw4beOeha 🧐 Highlights of the MMMU benchmark: > 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks >
29
179
739
@NeurIPSConf
NeurIPS Conference
2 years
We published the list of papers accepted to #NeurIPS2023 here: https://t.co/TWLudQGKJE
5
89
366
@YifanJiang17
Yifan Jiang
2 years
3
1
42
@zhiyangx11
zhiyang xu
2 years
Our new work ✨The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language Models✨ is accepted to #EMNLP2023. Inspired by the human cognitive process, we propose SOCRATIC QUESTIONING, a divide-and-conquer style algorithm that mimics the 🤔recursive thinking process.
2
36
162
@zshuai8_az
Shuaicheng Zhang
2 years
Nice work by Zhiyang, please give it a try if you want to benchmark multimodal instruction tuning!
@zhiyangx11
zhiyang xu
2 years
Today we officially release ✨Vision-Flan✨, the largest human-annotated visual-instruction tuning dataset with 💥200+💥 diverse tasks. 🚩Our dataset is available on Huggingface https://t.co/XqFrpudysl 🚀 For more details, please refer to our blog
0
0
1
@zhiyangx11
zhiyang xu
2 years
Today we officially release ✨Vision-Flan✨, the largest human-annotated visual-instruction tuning dataset with 💥200+💥 diverse tasks. 🚩Our dataset is available on Huggingface https://t.co/XqFrpudysl 🚀 For more details, please refer to our blog
2
16
48
@he_xiaoxin
Xiaoxin He
2 years
Dive into ICML 2023 papers effortlessly! 🧐 Introducing a handy toy tool for paper collection and filtering. Whether you want the full list or targeted topics, this GitHub repo has got you covered:
github.com/XiaoxinHe/icml2023_learning_on_graphs: List of papers on ICML 2023.
1
12
62
@acbuller
Ziniu Hu
2 years
🤔 How can a Large Language Model (LLM) agent utilize diverse tools via Tree Search 🔍? In AVIS, we enable the LLM agent to dynamically traverse a transition graph with self-critique (when one path is not informative, backtrack to a previous state). This achieves SOTA VQA results.
@GoogleAI
Google AI
2 years
Today on the blog, read all about AVIS — Autonomous Visual Information Seeking with Large Language Models — a novel method that iteratively employs a planner and reasoner to achieve state-of-the-art results on visual information seeking tasks → https://t.co/LJuewikzJG
2
24
125
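The traverse-with-backtracking idea in the AVIS tweets can be sketched generically. This is not the actual AVIS implementation — just a depth-first search over a transition graph where a critic function prunes uninformative states, standing in for AVIS's self-critique step; the graph, state names, and predicates are all hypothetical.

```python
def search(graph, start, is_goal, is_informative):
    """Depth-first traversal with critic-driven backtracking:
    abandon any path whose current state the critic rejects."""
    stack = [(start, [start])]
    while stack:
        state, path = stack.pop()
        if not is_informative(state):
            continue  # self-critique: backtrack from a dead end
        if is_goal(state):
            return path
        for nxt in graph.get(state, []):
            if nxt not in path:  # avoid revisiting states on this path
                stack.append((nxt, path + [nxt]))
    return None  # no informative path reaches the goal

# toy transition graph: one tool leads to the answer, one is a dead end
graph = {"start": ["tool_a", "tool_b"], "tool_a": ["answer"], "tool_b": []}
path = search(graph, "start",
              is_goal=lambda s: s == "answer",
              is_informative=lambda s: s != "tool_b")
print(path)  # ['start', 'tool_a', 'answer']
```

The critic callback is where an LLM's self-assessment would plug in: it scores the current state, and a rejection makes the search fall back to the most recent unexplored alternative.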