Satya Narayan Shukla Profile
Satya Narayan Shukla

@ImSNShukla

Followers: 428
Following: 773
Media: 1
Statuses: 34

Senior Research Scientist @MetaAI | PhD @UMassAmherst | Prev @MSFTResearch, @facebookai and @Bosch_AI | BTech @IITKgp

New York
Joined November 2013
@ImSNShukla
Satya Narayan Shukla
4 months
Due to popular demand, we have open-sourced the code and data for MetaQuery.
@xichen_pan
Xichen Pan
4 months
The code and instruction-tuning data for MetaQuery are now open-sourced! Code: https://t.co/VpHrt5POSH Data: https://t.co/EvpCEPDGFN Two months ago, we released MetaQuery, a minimal training recipe for SOTA unified understanding and generation models. We showed that tuning few
0
0
4
@ImSNShukla
Satya Narayan Shukla
6 months
Check out our latest work on training unified understanding and generation models. We show that frozen MLLMs can seamlessly transfer knowledge, reasoning, and in-context learning from text to pixel output.
@xichen_pan
Xichen Pan
6 months
We find that training unified multimodal understanding and generation models is so easy that you do not need to tune the MLLM at all. The MLLM's knowledge, reasoning, and in-context learning can be transferred from multimodal understanding (text output) to generation (pixel output) even when it is FROZEN!
0
0
4
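For readers skimming the thread, here is a minimal, purely illustrative PyTorch sketch of the recipe the quoted tweet describes: keep the MLLM frozen, learn a small set of query tokens plus a connector, and hand the resulting conditions to an image generator. The class names (ToyMLLM, MetaQueryStyleBridge) and all dimensions are invented stand-ins for this sketch, not the released MetaQuery code.

```python
# Hedged sketch of a "frozen MLLM + learnable queries + trainable connector" setup.
# ToyMLLM and MetaQueryStyleBridge are illustrative stand-ins, not MetaQuery itself.
import torch
import torch.nn as nn

class ToyMLLM(nn.Module):
    """Stand-in for a pretrained multimodal LLM; kept frozen during training."""
    def __init__(self, dim=256, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):
        return self.encoder(tokens)

class MetaQueryStyleBridge(nn.Module):
    """Learnable query tokens + connector; only these parameters receive gradients."""
    def __init__(self, mllm, num_queries=64, dim=256, cond_dim=512):
        super().__init__()
        self.mllm = mllm
        for p in self.mllm.parameters():          # freeze the MLLM entirely
            p.requires_grad = False
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.connector = nn.Sequential(           # map MLLM outputs into the
            nn.Linear(dim, cond_dim),             # generator's conditioning space
            nn.GELU(),
            nn.Linear(cond_dim, cond_dim),
        )

    def forward(self, prompt_embeds):
        # Append the learnable queries after the prompt tokens, run the frozen MLLM,
        # and read out the query positions as conditions for a diffusion decoder.
        b = prompt_embeds.size(0)
        q = self.queries.expand(b, -1, -1)
        hidden = self.mllm(torch.cat([prompt_embeds, q], dim=1))
        return self.connector(hidden[:, -q.size(1):])

# Toy usage: a batch of 2 prompts, each with 16 token embeddings of width 256.
bridge = MetaQueryStyleBridge(ToyMLLM())
cond = bridge(torch.randn(2, 16, 256))
print(cond.shape)  # torch.Size([2, 64, 512]) -> would be fed to an image generator
```

Only bridge.queries and bridge.connector are trainable here, which is what makes this style of recipe cheap to train; for the actual method and data, see the open-sourced MetaQuery code linked in the tweet above.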
@sainingxie
Saining Xie
6 months
Our take on a 4o-style AR + diffusion unified model: Transferring knowledge from an AR LLM to generation is easier than expected--you don't even need to touch the LLM. The right bridge between output modalities can unlock cool capabilities like knowledge-augmented generation!
@xichen_pan
Xichen Pan
6 months
We find that training unified multimodal understanding and generation models is so easy that you do not need to tune the MLLM at all. The MLLM's knowledge, reasoning, and in-context learning can be transferred from multimodal understanding (text output) to generation (pixel output) even when it is FROZEN!
5
29
263
@LucasBandarkar
Lucas Bandarkar
1 year
We presented Belebele at ACL 2024 this week! (Thx to @LiangDavis and @ImSNShukla) A year on from its release, it’s been really cool to see the diversity of research projects that have used it. The field is in dire need of more multilingual benchmarks!
@AIatMeta
AI at Meta
1 year
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants ➡️ https://t.co/i48ufZEeYU
0
6
23
@ImSNShukla
Satya Narayan Shukla
1 year
Check out our recent CVPR paper on improving spatial reasoning in Visual-LLMs.
@kahnchana
Kanchana Ranasinghe
1 year
Check out “Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs” to be presented at @CVPR 2024. Project during @Meta internship. Drop by our poster on the morning of June 20th at CVPR. Poster Link: https://t.co/HioAF8Ehg0 Arxiv: https://t.co/ztLBlpWtuV 1/6
0
0
7
@kahnchana
Kanchana Ranasinghe
2 years
Our paper on “Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs” accepted to CVPR ‘24. Arxiv: https://t.co/ztLBlpVVFn
3
5
41
@ImSNShukla
Satya Narayan Shukla
2 years
We released Belebele today, a multilingual reading comprehension dataset that spans 122 language variants. The paper also presents full results across all languages for Llama (1 & 2), Falcon, and ChatGPT (3.5-turbo) + multilingual encoders XLMR, InfoXLM, and XLMV.
@AIatMeta
AI at Meta
2 years
Announcing Belebele, a first-of-its-kind multilingual reading comprehension dataset. This dataset is parallel for 122 language variants, enabling direct comparison of how well models understand different languages. Dataset ➡️ https://t.co/5smUz8c977
1
0
11
@ImSNShukla
Satya Narayan Shukla
3 years
Come join us today for the ‘Learning from Time Series for Health’ workshop at #NeurIPS2022, Room 392.
@tom_hartvigsen
Tom Hartvigsen
3 years
I'm so excited to finally co-host the workshop on Learning from Time Series for Health at #NeurIPS2022! We've got an exciting program (5 speakers+49 posters+panel+mentorship), so come chat about what's new in the world of time series for health! https://t.co/6GituxZi8O
0
1
3
@AIatMeta
AI at Meta
3 years
We’re pleased to introduce Make-A-Video, our latest in #GenerativeAI research! With just a few words, this state-of-the-art AI system generates high-quality videos from text prompts. Have an idea you want to see? Reply w/ your prompt using #MetaAI and we’ll share more results.
822
2K
8K
@tom_hartvigsen
Tom Hartvigsen
3 years
By popular demand, we've extended the submission deadline for the #NeurIPS2022 Workshop on Learning from Time Series for Health to September 30th!
0
3
12
@ImSNShukla
Satya Narayan Shukla
3 years
10 days to submit to our #NeurIPS workshop "Learning from Time Series for Health". Check out our website: https://t.co/WqJl7T6hW2 Can't wait to see your amazing work and meet you in person in December!
0
2
11
@ImSNShukla
Satya Narayan Shukla
3 years
Excited to share that our workshop proposal on "Learning from Time Series for Health" got accepted to @NeurIPSConf 2022.
@tom_hartvigsen
Tom Hartvigsen
3 years
Our workshop on "Learning from Time Series for Health" was accepted to @NeurIPSConf 2022! 🎉📈 Looking forward to exploring this frontier of machine learning and health applications with some amazing speakers in person!
0
0
3
@ImSNShukla
Satya Narayan Shukla
4 years
I'll be presenting this work at #KDD2021 in the research paper session on 08/17 at 1:30 pm EST and in the poster session on 08/18 at 5:30 pm EST. Please feel free to drop by if you're attending, or ping me if you have any questions!
@ImSNShukla
Satya Narayan Shukla
4 years
Happy to share that our paper "Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes" has been accepted at ACM KDD 2021. Paper: https://t.co/cftQ8HuHoT Github: https://t.co/DbQvi0nJ7L with Anit Kumar Sahu, Devin Willmott, @zicokolter.
0
0
2
@ImSNShukla
Satya Narayan Shukla
4 years
This work was partially done during my internship at @Bosch_AI.
0
0
0
@ImSNShukla
Satya Narayan Shukla
4 years
Our proposed method uses Bayesian optimization to find adversarial perturbations in a low-dimensional subspace and maps them back to the original input space. We obtain improved performance in both untargeted and targeted attack settings, and under both the L_\infty and L_2 threat models.
1
0
1
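As a rough illustration of the recipe described in the tweet above (and not the authors' released implementation, which lives in the satyanshukla/bayes_attack repo linked below), the sketch runs off-the-shelf Bayesian optimization (scikit-optimize's gp_minimize) over a small 6x6 perturbation, upsamples it to image resolution, and observes only the victim's predicted label. The toy victim network, the binary hard-label objective, and all sizes are simplified stand-ins.

```python
# Hedged sketch of a hard-label black-box attack via Bayesian optimization in a
# low-dimensional subspace. Victim model, objective, and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from skopt import gp_minimize          # generic Bayesian optimization
from skopt.space import Real

torch.manual_seed(0)

# Toy classifier standing in for the real black-box victim model.
victim = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
victim.eval()

x = torch.rand(1, 3, 32, 32)                     # clean input image
with torch.no_grad():
    true_label = victim(x).argmax(1).item()

eps, d = 0.05, 6                                 # L_inf budget, side of the low-dim grid

def hard_label_query(z):
    """One black-box query: build a full-size perturbation from the 6x6 vector z,
    add it to x, and observe only the predicted label (1.0 = unchanged, 0.0 = flipped)."""
    delta = torch.tensor(z, dtype=torch.float32).view(1, 1, d, d)
    delta = F.interpolate(delta, size=x.shape[-2:], mode="nearest")   # map back to input space
    delta = (eps * delta).repeat(1, 3, 1, 1)                          # stay inside the L_inf ball
    with torch.no_grad():
        pred = victim((x + delta).clamp(0, 1)).argmax(1).item()
    return 0.0 if pred != true_label else 1.0

# Bayesian optimization over the 36-dimensional subspace, using 40 hard-label queries.
space = [Real(-1.0, 1.0) for _ in range(d * d)]
result = gp_minimize(hard_label_query, space, n_calls=40, random_state=0)
print("attack succeeded" if result.fun == 0.0 else "no label flip within 40 queries")
```

The actual method uses a more informative hard-label objective and a GP setup tuned for this problem; the point of the sketch is only the structure of searching a low-dimensional subspace and upsampling back to the input space under a small query budget.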
@ImSNShukla
Satya Narayan Shukla
4 years
Happy to share that our paper "Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes" has been accepted at ACM KDD 2021. Paper: https://t.co/cftQ8HuHoT Github: https://t.co/DbQvi0nJ7L with Anit Kumar Sahu, Devin Willmott, @zicokolter.
github.com
Code for "Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes" - satyanshukla/bayes_attack
2
2
10
@ImSNShukla
Satya Narayan Shukla
5 years
Heartbreak for @anishgiri!! Played so well the whole tournament, only to lose a completely winning game in the Armageddon on time.
@MagnusCarlsen
Magnus Carlsen
5 years
As my father likes to say, 58 is not quite 60 yet #TataSteelChess
0
0
0
@ImSNShukla
Satya Narayan Shukla
5 years
Our approach achieves SOTA performance on interpolation and classification tasks on the PhysioNet and MIMIC-III datasets while requiring significantly less training time than competing methods. 3/4
1
0
1