Pavel Iakubovskii

@qubvelx

Followers: 284 · Following: 548 · Media: 8 · Statuses: 101

ML Engineer @ 🤗 | Kaggle Competition Master | Creator of segmentation_models.pytorch

Lisbon, Portugal
Joined May 2020
@qubvelx
Pavel Iakubovskii
11 days
RT @joao_gante: LET'S GO! Cursor using local 🤗 transformers models! You can now test ANY transformers-compatible LLM against your codebase…
0
40
0
@qubvelx
Pavel Iakubovskii
11 days
RT @NielsRogge: New model alert in Transformers: EoMT! EoMT greatly simplifies the design of ViTs for image segmentation 🙌. Unlike Mask2Fo…
0
52
0
@qubvelx
Pavel Iakubovskii
14 days
RT @LysandreJik: BOOOM! transformers now has a baked-in HTTP server w/ OpenAI-spec-compatible API. Launch it with `transformers serve` and…
0
29
0
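The tweet above is truncated, but the gist is that a locally running `transformers serve` speaks the OpenAI chat-completions wire format. A minimal sketch of what a request to it could look like, assuming the usual `/v1/chat/completions` route on localhost; the port and model name here are illustrative assumptions, not details from the tweet:

```python
import json
import urllib.request

# OpenAI-spec chat completion payload; the model name is an assumption.
payload = {
    "model": "Qwen/Qwen2.5-0.5B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Assumed local endpoint for a running `transformers serve` instance.
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending is intentionally left out here; with the server running you
# would call urllib.request.urlopen(req) and json-decode the body.
```

Because the request follows the OpenAI spec, any existing OpenAI-compatible client should also work by pointing its base URL at the local server.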
@qubvelx
Pavel Iakubovskii
20 days
RT @NielsRogge: Another classic has made it into the Transformers library: LightGlue (ICCV '23) 🔥. A deep neural network that learns to matc…
0
43
0
@qubvelx
Pavel Iakubovskii
27 days
🔧 Easy Fine-Tuning Notebook. A Colab notebook that shows you how to fine-tune V-JEPA 2 on your own video clips to create a custom video classification model.
0
0
0
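The notebook itself isn't reproduced here, but one preprocessing step every video classifier needs is choosing a fixed number of frames per clip. A minimal, library-free sketch of uniform frame sampling; the clip length of 16 frames is an illustrative assumption, not a value from the notebook:

```python
def sample_frame_indices(num_frames: int, clip_len: int) -> list[int]:
    """Pick `clip_len` frame indices spread uniformly across a video
    with `num_frames` total frames, e.g. before feeding a classifier."""
    if num_frames <= 0 or clip_len <= 0:
        raise ValueError("num_frames and clip_len must be positive")
    # Center each sampled index inside its slice of the video.
    step = num_frames / clip_len
    return [min(int(step * i + step / 2), num_frames - 1) for i in range(clip_len)]

# e.g. 16 evenly spaced frames from a 300-frame clip
indices = sample_frame_indices(300, 16)
```

Centering each index inside its slice (rather than taking slice starts) avoids biasing the sample toward the beginning of the clip.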
@qubvelx
Pavel Iakubovskii
27 days
👋 Live Webcam Demo on HF Spaces. This is the super fun part: you can use your own webcam to see the model in action. Fire it up, do something in front of the camera, and watch as V-JEPA 2 tries to guess what you're doing in real time.
1
0
2
@qubvelx
Pavel Iakubovskii
27 days
🎬 4 Video Classification Models. We've converted 4 pretrained video classification models to 🤗 Transformers. The star of the show is a model fine-tuned on the huge "Something-Something-v2" dataset.
1
0
1
@qubvelx
Pavel Iakubovskii
27 days
V-JEPA 2 is an update of the first world model trained on video that achieves state-of-the-art visual understanding, enabling zero-shot robot control in new environments. Now we've made it even easier to play with and build on. Here's what's new:
1
0
1
@qubvelx
Pavel Iakubovskii
27 days
💥 💥 💥 We've just rolled out an update for @AIatMeta's V-JEPA 2 📹 models on @huggingface Hub 🤗
1
5
25
@qubvelx
Pavel Iakubovskii
29 days
RT @soumikRakshit96: ✨ CVPR 2025 highlight: A Distractor-Aware Memory for Visual Object Tracking with SAM2. The authors propose a new distr…
0
56
0
@qubvelx
Pavel Iakubovskii
1 month
🔧 This is a small step forward, focused on `pipeline()`, but there's more work to do across the board. Improving typings throughout the library is definitely on our radar. We hope this change makes things a bit smoother for you, and we'd love your help and feedback!
0
0
1
@qubvelx
Pavel Iakubovskii
1 month
Thanks to the magic of `overload` in Python, we can give you a much better experience:
✅ Correctly inferred types for instantiated `pipeline()`, including matching docstrings and input signatures
✅ Smarter return types depending on your input (single vs. batch)
1
0
2
@qubvelx
Pavel Iakubovskii
1 month
But behind the scenes, `pipeline()` can return 20+ different pipeline classes, each with unique arguments and return types. Until now, that made working with them in IDEs not ideal - you often had to rely on external docs or code examples to figure things out.
1
0
1
@qubvelx
Pavel Iakubovskii
1 month
🚀 Better type annotations for 🤗 Transformers pipeline()! `pipeline()` is a powerful and very flexible high-level API - just one line, and you're up and running with anything from object detection to text generation.
3
6
23
@qubvelx
Pavel Iakubovskii
1 month
V-JEPA 2 - video embeddings model in HF transformers from day zero. We will release video classification pretrained models soon!
@AIatMeta
AI at Meta
1 month
Our vision is for AI that uses world models to adapt in new and dynamic environments and efficiently learn new skills. We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model,
2
7
19
@qubvelx
Pavel Iakubovskii
2 months
RT @LysandreJik: The Transformers library is undergoing its largest pivot to date 🙌. It now cements its role as the central model definiti…
0
59
0
@qubvelx
Pavel Iakubovskii
2 months
RT @ariG23498: Fine-tune Gemma 3 for object detection. I am a big fan of PaliGemma (@giffmana et al.). They showed us how image-specific t…
0
14
0
@qubvelx
Pavel Iakubovskii
2 months
RT @mervenoyann: VLMs 2025 UPDATE 🔥. We just shipped a blog on everything latest on vision language models, including 🤖 GUI agents, agentic…
0
123
0
@qubvelx
Pavel Iakubovskii
2 months
RT @ariG23498: I have seen that Hugging Face really values ideas. Today @mervenoyann pointed out that while reading the Qwen 2.5 VL paper…
0
15
0
@qubvelx
Pavel Iakubovskii
2 months
RT @NielsRogge: New model alert in @huggingface Transformers! 🔥 D-FINE (@iclr_conf '25 spotlight) was just added. SOTA real-time object de…
0
53
0