
Sweta Agrawal (@swetaagrawal20)
Followers: 1K · Following: 9K · Media: 6 · Statuses: 357
Research Scientist @Google Translate | Past: Postdoc Researcher @itnewspt | Ph.D. @ClipUmd, @umdcs #nlproc
Lisbon, Portugal
Joined June 2014
RT @sundarpichai: Excited to make our best AI tools free for college students in the US + other select countries for a year - and to provid….
📢Shared task deadline extended: You now have a whole week to go (until August 6 AoE) to register and send us your submissions!!
The 2025 MT Evaluation shared task brings together the strengths of the previous Metrics and Quality Estimation tasks under a single, unified evaluation framework. The following tasks are now open (deadline July 31st but participation has never been easier 🙂).
RT @markuseful: Our Google Translate team is bringing a strong presence to #ACL2025 in Vienna this week! 🇦🇹 My group is excited to present….
RT @zouharvi: The 2025 MT Evaluation shared task brings together the strengths of the previous Metrics and Quality Estimation tasks under a….
RT @GoogleDeepMind: An advanced version of Gemini with Deep Think has officially achieved gold medal-level performance at the International….
RT @GoogleIndia: If you’re a student in India - you’ve just been granted access to a FREE Gemini upgrade worth ₹19,500 for one year 🥳✨. Cla….
RT @aclmeeting: 🎉A reminder from ACL 2025: 🗣️ #InvitedTalk by Professor Luke Zettlemoyer. He'll be presenting on "Rethinking Pretraining….
2025.aclweb.org
ACL 2025 Conference Overview.
RT @gui_penedo: We have finally released the 📝paper for 🥂FineWeb2, our large multilingual pre-training dataset. Along with general (and ex….
RT @ReviewAcl: Dear ACL community, We are seeking emergency reviewers for the May cycle. Please indicate your availability (ASAP) if you ca….
RT @ManosZaranis: 🚨Meet MF²: Movie Facts & Fibs: a new benchmark for long-movie understanding! 🤔Do you think your model understands movies?….
RT @PontiEdoardo: 🚀 By *learning* to compress the KV cache in Transformer LLMs, we can generate more tokens for the same compute budget.….
RT @xwang_lk: Humans think fluidly, navigating abstract concepts effortlessly, free from rigid linguistic boundaries. But current reasoning….
RT @MohitIyyer: GRPO + BLEU is a surprisingly good combination for improving instruction following in LLMs, yielding results on par with th….
RT @psanfernandes: MT metrics excel at evaluating sentence translations, but struggle with complex texts. We introduce *TREQA* a framework….
RT @Saul_Santos1997: 🚀 New paper alert! 🚀 Ever tried asking an AI about a 2-hour movie? Yeah… not great. Check: ∞-Video: A Training-Free….
RT @ArtidoroPagnoni: 🚀 Introducing the Byte Latent Transformer (BLT) – An LLM architecture that scales better than Llama 3 using byte-patch….
RT @tozefarinhas: If you're in Vancouver attending #NeurIPS2024, stop by our spotlight poster on Thu 12 Dec 11am-2pm PST (East Exhibit Hall….
openreview.net
To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and often used strategy is to first let the LLM...
RT @andre_t_martins: Heading to Vancouver soon to attend #NeurIPS2024! Stop by our tutorial and posters 👇.