
Avshalom Manevich
@AvshalomM
Followers: 369 · Following: 5K · Media: 67 · Statuses: 649
Inference Engineer @hcompany_ai working on Multimodal LLM serving. Previously: @biunlp, @AI21Labs, @AmazonScience, @Bosch_AI
Joined November 2011
Started a new role as Core Team Engineer at @hcompany_ai working on multimodal LLM inference. Today we're launching Runner H, Holo-1, and Tester H! Grateful for my time at @AI21Labs on model serving. Up next: moving to Paris with my family 🇫🇷.
Today, we’re thrilled to announce 3 major steps forward in bringing our vision of Agentic AI to life:
1️⃣ Runner H: Public Beta is now live! Imagine having your AI agent execute entire workflows across web apps, documents, spreadsheets, and more with a single prompt.
RT @AI21Labs: Excited to share that our paper, Jamba: Hybrid Transformer-Mamba Language Models, has been accepted to ICLR 2025! We’re hono…
RT @UrikaUri: Attending 🌴#EMNLP2024 or interested in what people are working on these days? We organized it all for you with Knowledge Nav…
RT @RoyiRassin: 🚨🚨🚨 Cool new paper alert 😎 We study the ability of text-to-image models (SD) to learn copyrighted concepts and find the Imit…
RT @ShirAshuryTahan: Excited to share our new paper on building an ontology from a coreference graph! 🎉 We leverage cross-document corefer…
RT @UrikaUri: 🐳 Introducing Knowledge Navigator 🐳: A new way to explore scientific literature! Our paper shows how LLMs can transform informa…
For more on our work (with @rtsarfaty) on Contrastive Decoding for LVLMs, including code, check out
github.com
#ACL2024: A belated highlights list:
- Presenting our work and engaging in discussions
- Bangkok with my wife and 1-year-old daughter
- Amazing food
- A much needed break from challenging times at home
Grateful for the well-organized conference and the opportunity to connect!
RT @AI21Labs: We released the #Jamba 1.5 open model family:
- 256K #contextwindow
- Up to 2.5X faster on #longcontext in its size class
- …
RT @mosh_levy: Honored to share that our work received an outstanding paper award at ACL2024! 🎉 Co-authored with my brilliant collaborator…
RT @paul_roit: Proud to present our latest paper, in which we revisit semantic argument detection (a sub-task of SRL), this time focusing o….
ACL folks: come check out our poster for LCD: Language Contrastive Decoding, today @ ACL Findings poster session II 🐻.
arxiv.org
Large Vision-Language Models (LVLMs) are an extension of Large Language Models (LLMs) that facilitate processing both image and text inputs, expanding AI capabilities. However, LVLMs struggle with...
RT @YairGolan1: The Shin Bet should open an investigation into the extent of involvement of the top police brass, MKs, and government ministers in the attempted regime coup that was carried out…
RT @ravfogel: Our paper on description-based similarity has (finally) been accepted to @COLM_conf! A joint work with @valentina__py, Amir D….
RT @ravfogel: I will be in Vienna at @icmlconf next week! DM or email me if you want to chat (: Together with @shashwat_s19, I will presen…
RT @clu_avi: 🚀 Excited to be part of this extremely cool benchmark! SEAM 🤐 is setting the new standard for evaluating LLMs on multi-documen….