Sivan Doveh

@SivanDoveh

Followers: 242
Following: 535
Media: 34
Statuses: 237

Context-Aware VLMs; CS Ph.D. Candidate @ Weizmann; Student Researcher @Google

Joined January 2018
@SivanDoveh
Sivan Doveh
5 days
RT @ramoscsv: Deadline Extended!!! Submit your paper to the LongVid-Foundations Workshop @ICCVConference and be part of the discussion! …
0
5
0
@SivanDoveh
Sivan Doveh
8 days
RT @ramoscsv: 🔔 Last 24 hours!! 🔔 Don't shelve that great idea! Submit your paper to the LongVid-Foundations Workshop @ICCVConference and …
0
4
0
@SivanDoveh
Sivan Doveh
13 days
IPLOC accepted to ICCV25 ☺️ Thanks to all the people who were part of it 🩷. The idea for this paper came by a lake during a visit to Graz for a talk. It has traveled with me through too many countries and too many wars, and it's now a complete piece of work.
@SivanDoveh
Sivan Doveh
14 days
Working on videos that are longer than 8 seconds? Want to visit Hawaii? Consider submitting to this workshop. LongVid-Foundations @ICCVConference! Proceedings track deadline: July 1, 2025. No-proceedings track deadline: Aug 30, 2025. Link: #ICCV2025
0
3
16
@SivanDoveh
Sivan Doveh
1 month
RT @ramoscsv: ⚠️ NEW DATES + NEW TRACK for LongVid-Foundations @ICCVConference! Submit work & learn from leading experts: Katerina Fragkia…
0
3
0
@SivanDoveh
Sivan Doveh
2 months
🤩🤩🤩
@ramoscsv
Vasco Ramos
2 months
📢 Announcing our 1st Workshop on Long Multi-Scene Video Foundations @ #ICCV2025 (@ICCVConference) in Honolulu, Hawaii! Co-organized by Regev Cohen, @SivanDoveh, @hila_chefer, Jehanzeb Mirza, @hbXNov, @inbar_mosseri, Joao Magalhaes, and me. Website:
0
0
1
@SivanDoveh
Sivan Doveh
3 months
Nimrod will present LiveXiv, an evolving dataset that tackles contamination, in two days at @iclr_conf. Super cool work and a super cool presenter 🤩.
@NimrodShabtay
Nimrod Shabtay
3 months
LiveXiv will be "live" at #ICLR2025: Friday, April 25th, 10:00-12:30, Poster #356. @RGiryes @felipemaiapolo @LChoshen @WeiLinCV @jmie_mirza @leokarlin @ArbelleAssaf @SivanDoveh
0
0
5
@SivanDoveh
Sivan Doveh
5 months
3rd workshop on multimodal foundation models at @CVPR 🤩.
@MMFMWorkshop
#3 MMFM Workshop
5 months
🚀 Call for Papers – 3rd Workshop on Multi-Modal Foundation Models (MMFM) @CVPR! 🚀 Topics: multi-modal learning, vision-language, audio-visual, and more! 📅 Deadline: March 14, 2025. Submit your paper: More details:
0
1
7
@SivanDoveh
Sivan Doveh
5 months
LiveXiv accepted to ICLR :) It dynamically generates an evolving benchmark from arXiv to mitigate data contamination, ensuring ML models are evaluated on truly unseen data.
@NimrodShabtay
Nimrod Shabtay
6 months
I am happy to share that LiveXiv was accepted to ICLR 2025 🥳.
0
1
6
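As context for the LiveXiv tweets above: the stated idea is to keep the benchmark evolving by drawing on newly published arXiv papers, so evaluated models are unlikely to have seen the data. The sketch below is only a rough illustration of that idea, not the LiveXiv pipeline; the arXiv category, training-cutoff date, and result count are placeholder assumptions.

```python
# Rough sketch of the "evolving benchmark" idea: collect documents newer than a
# model's training cutoff so evaluation data is unlikely to be contaminated.
# This is NOT the LiveXiv pipeline; category, cutoff, and counts are arbitrary.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}
TRAINING_CUTOFF = datetime(2025, 1, 1, tzinfo=timezone.utc)  # assumed model cutoff

def fetch_recent_abstracts(category: str = "cs.CV", max_results: int = 50):
    """Return (title, abstract, published) tuples for recent arXiv papers."""
    query = urllib.parse.urlencode({
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())

    papers = []
    for entry in feed.findall("atom:entry", ATOM_NS):
        title = entry.findtext("atom:title", default="", namespaces=ATOM_NS).strip()
        abstract = entry.findtext("atom:summary", default="", namespaces=ATOM_NS).strip()
        published = datetime.fromisoformat(
            entry.findtext("atom:published", default="", namespaces=ATOM_NS).replace("Z", "+00:00")
        )
        # Keep only papers the evaluated model could not have seen during training.
        if published > TRAINING_CUTOFF:
            papers.append((title, abstract, published))
    return papers

if __name__ == "__main__":
    fresh = fetch_recent_abstracts()
    print(f"{len(fresh)} post-cutoff papers available for generating new eval questions")
```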
@SivanDoveh
Sivan Doveh
7 months
RT @MMFMWorkshop: Excited to share that our 3rd Multimodal Workshop has been accepted to CVPR 2025 in Nashville! 🎉 Looking forward to advan…
0
1
0
@SivanDoveh
Sivan Doveh
7 months
RT @MMFMWorkshop: I'm back 😎.
0
2
0
@SivanDoveh
Sivan Doveh
7 months
RT @EylonALevy: It's the middle of the night, day 440 of the October 7 War, and we're still getting shot at by Iran's pirate terrorists in…
0
77
0
@SivanDoveh
Sivan Doveh
7 months
Just back from NeurIPS where we presented 'ConMe', exploring how VLMs handle compositional reasoning. Loved catching up with old friends and making new connections. A perfect reminder that I should start planning for the May deadline! 😊
0
0
2
@SivanDoveh
Sivan Doveh
8 months
Results show better personalized localization without hurting prior performance. Annotate 1 image, and the model can annotate others! Many thanks to all the exceptional collaborators! Project page (paper & code):
1
1
5
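To make the "annotate 1 image, and the model can annotate others" usage concrete, here is a hypothetical inference-time sketch. The `vlm_chat` callable, the prompt wording, and the `[x1, y1, x2, y2]` box format are assumptions standing in for whatever interface the fine-tuned model exposes; this is not the project's actual API.

```python
# Hypothetical usage sketch: localize a *specific* object in new images from a
# single annotated reference. `vlm_chat` stands in for whatever chat interface
# the fine-tuned VLM exposes (e.g., a LLaVA-OV or Qwen2-VL wrapper); its name,
# the prompt wording, and the box encoding are assumptions, not the real API.
from typing import Callable, List, Sequence, Tuple
from PIL import Image  # used only for type hints here

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2), normalized to [0, 1]

def localize_from_reference(
    vlm_chat: Callable[[Sequence[Image.Image], str], str],
    reference: Image.Image,
    reference_box: Box,
    queries: List[Image.Image],
    object_name: str = "my cat",
) -> List[str]:
    """Ask the model for the reference object's box in each query image."""
    answers = []
    for query in queries:
        prompt = (
            f"The first image shows {object_name} at bounding box "
            f"{list(reference_box)}. "
            "In the second image, return the bounding box of the same object "
            "as [x1, y1, x2, y2], or 'not present' if it does not appear."
        )
        # One reference image plus one query image per call.
        answers.append(vlm_chat([reference, query], prompt))
    return answers
```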
@SivanDoveh
Sivan Doveh
8 months
We train models (e.g., LLaVA-OV, Qwen2-VL) with instruction tuning to improve context-aware localization. Using specific regularizations and personalized instructions generated from video-tracking datasets, we shift VLMs' focus from zero-shot to context-based learning.
1
0
3
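The tweet above outlines the training recipe: instruction-tuning samples built from video-tracking data, where earlier frames of a tracked object act as in-context examples and a later frame is the query. A minimal sketch of that sample construction is below; the field names, conversation schema, and box format are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical construction of one personalized-localization instruction sample
# from a video-tracking track: earlier frames (with known boxes) become
# in-context examples, and a later frame becomes the query whose ground-truth
# box is the supervision target. Field names and schema are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # normalized (x1, y1, x2, y2)

@dataclass
class TrackedFrame:
    image_path: str
    box: Box

def build_instruction_sample(track: List[TrackedFrame], n_context: int = 2) -> dict:
    """Turn one object track into an (images, conversation) training sample."""
    assert len(track) > n_context, "need at least n_context + 1 frames"
    context, query = track[:n_context], track[n_context]

    user_turns = []
    for i, frame in enumerate(context):
        # In-context examples: image placeholder plus its known box.
        user_turns.append(f"<image_{i}> The target object is at {list(frame.box)}.")
    # Query: ask for the same object's location in the final frame.
    user_turns.append(f"<image_{n_context}> Locate the same object in this image.")

    return {
        "images": [f.image_path for f in context] + [query.image_path],
        "conversation": [
            {"role": "user", "content": " ".join(user_turns)},
            # Supervision: the query frame's ground-truth box from the tracker.
            {"role": "assistant", "content": f"{list(query.box)}"},
        ],
    }
```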
@SivanDoveh
Sivan Doveh
8 months
Our work, "Teaching VLMs to Localize Specific Objects from In-context Examples," identifies this behaviour and takes steps towards correction. Given in-context samples of a specific object, the model will localize it in other (query) images.
1
0
3
@SivanDoveh
Sivan Doveh
8 months
Ever wanted to locate your cat in a database of images using just one reference image? Probably not, but this highlights a gap in VLMs. They struggle to localize specific objects given in-context examples, often copying the last sample's location instead of learning from it.
1
7
39
@SivanDoveh
Sivan Doveh
8 months
A time of miracles 🌟
@Shaulirena
Shauli 🎗️🏳️‍🌈 ShauLi
8 months
An Eretz Nehederet sketch from last night about "The Patriots". Enjoyment guaranteed 😎. Credit: Roni Harel
0
0
0