[LG] Poly-View Contrastive Learning
The paper presents Poly-View Contrastive Learning, a framework that challenges the conventional belief that contrastive learning requires large sample sizes and many training epochs to enhance…
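As a rough illustration of the idea, here is a minimal sketch of a multi-positive ("poly-view") InfoNCE-style objective, where each anchor has several positives from other views of the same sample. The function name, shapes, temperature, and the arithmetic averaging over positives are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def multiview_infonce(views, temperature=0.1):
    """Multi-positive InfoNCE-style loss (illustrative sketch, not the
    paper's exact objective).

    views: array of shape (M, N, D) -- M augmented views of N samples,
    with L2-normalized D-dimensional embeddings, M >= 2.
    """
    M, N, D = views.shape
    z = views.reshape(M * N, D)            # flatten to (M*N, D)
    sim = z @ z.T / temperature            # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)         # exclude self-pairs
    # positives: same sample index across different views
    ids = np.tile(np.arange(N), M)
    pos_mask = ids[:, None] == ids[None, :]
    np.fill_diagonal(pos_mask, False)
    # log-softmax over each anchor's row of similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # average log-likelihood over the M-1 positives of each anchor
    loss = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) / pos_mask.sum(axis=1)
    return loss.mean()
```

With M = 2 this reduces to the usual two-view InfoNCE setup; larger M is the poly-view case, where averaging over the M-1 positives is one simple way to aggregate them.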
@fouriergalois
@fly51fly
Seems reasonable. I actually ran a similar experiment with a similar conclusion in my thesis a long time ago, see below :)
The above paper is about contrastive self-supervised learning, not image-text. I doubt it carries over as-is, but something like SILC uses the same insight
@fouriergalois
@fly51fly
yeah exactly, it's mostly for improving on dense downstream tasks. I believe it was DINO that started it; at least we informally call it "adding the dino trick" :)
However, I'm not sure it's still beneficial when controlling for the increased pre-training cost; I'd need to see a plot of that.