Explore tweets tagged as #pretrained
🧵[1/8] Excited to share our NeurIPS 2025 Spotlight paper “Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?” ✨ To add to the broader discussion of binding in neural networks, we ask whether and how Vision Transformers perform object binding (the …
When a product really becomes a must-have for its users... that's the sweet spot for an engineer. Pierce has found it with his most recent company. #Startups #ProductMarketFit #Engineering #Entrepreneurship #FounderLife
Ilya Sutskever: Humans aren’t AGI; they know little and learn constantly. Pretrained AGI, on the other hand, overshoots by skipping that trial-and-error phase. Real intelligence comes from ongoing learning, not from starting out “finished.” https://t.co/VBzrs761ez
This week I'll be at #NeurIPS presenting our latest work "Elastic ViTs from Pretrained Models without Retraining"! TL;DR: You can make pretrained ViTs elastic in under 5 minutes on a single A100 via structured pruning, without re-training! Photo: in-context semantic …
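For intuition, here is a minimal, hedged sketch of what "structured pruning without retraining" can look like on one ViT MLP block: keep the highest-magnitude hidden units and slice both linear layers to match. The `shrink_mlp` helper and the Linear→GELU→Linear layout are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

def shrink_mlp(mlp: nn.Sequential, keep_ratio: float = 0.5) -> nn.Sequential:
    # Hypothetical helper, not the paper's algorithm: rank the MLP's hidden
    # units by incoming-weight magnitude, keep the top fraction, and slice
    # both linear layers to match -- no gradient steps involved.
    fc1, act, fc2 = mlp[0], mlp[1], mlp[2]  # assumes Linear -> GELU -> Linear
    k = max(1, int(fc1.out_features * keep_ratio))
    idx = fc1.weight.norm(dim=1).topk(k).indices
    new_fc1 = nn.Linear(fc1.in_features, k)
    new_fc2 = nn.Linear(k, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(fc1.weight[idx])
        new_fc1.bias.copy_(fc1.bias[idx])
        new_fc2.weight.copy_(fc2.weight[:, idx])
        new_fc2.bias.copy_(fc2.bias)
    return nn.Sequential(new_fc1, act, new_fc2)
```

Applied across every block, a magnitude-based slice like this produces a smaller model immediately; the tweet's claim is that a well-chosen structured pruning of this kind needs no retraining pass at all.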
Robotics has a missing layer. Robotics needs its HuggingFace and sandbox for training. So we’re building it. Mechanet: a marketplace + simulation stack for:
• upload and sell any robotics assets (e.g. control code & pretrained models)
• tokenized ownership for all assets
• …
Apple's new paper is mind-blowing. They showed that one attention layer is enough to turn pretrained vision features into SoTA image generators! This dramatically simplifies diffusion models while keeping top-tier quality.
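As a rough illustration of the idea (emphatically not Apple's actual architecture; all names and shapes below are assumptions), a single cross-attention layer can let noisy latents query features from a frozen pretrained vision encoder:

```python
import torch
import torch.nn as nn

class OneLayerAttnDenoiser(nn.Module):
    # Illustrative sketch only -- not Apple's architecture. A single
    # cross-attention layer lets noisy latents query features from a
    # frozen pretrained vision encoder to predict the denoising target.
    def __init__(self, dim: int = 768, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latents, frozen_feats):
        # noisy_latents: (B, N, dim); frozen_feats: (B, M, dim), no grads
        q = self.norm(noisy_latents)
        attended, _ = self.attn(q, frozen_feats, frozen_feats)
        return self.out(attended)  # e.g. predicted noise for the diffusion loss
```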
AI/ML Engineers, this course is more than a gold mine. Don't miss it. Knowledge of LLMs is essential for AI/ML Engineers. This course, offered by the Indian Institute of Technology, Madras, covers all foundational concepts related to the Transformer model, Pretrained Language Models, and …
Nous Founding Engineer discusses the structural gap in American AI. "The big part of the challenge is the incentive scheme. How do you actually finance creating a pretrained model? China has solved this to some extent where they can release these open foundation models. We have …"
Building your own time series models takes ages. And most of us don't have that kind of time. Time Series Forecasting Using Foundation Models shows how to skip to accurate, flexible forecasting with pretrained models. 50% off with code peixeiro2pb thru 12/13:
NeurIPS 2025 was such a blast! We snuck a grand piano into the CreativeAI Track to demo Aria, our pretrained chat-style music model:
GlycanGT: A Foundation Model for Glycan Graphs with Pretrained Representation and Generative Learning https://t.co/1CUOhC9Jtk
#biorxiv_bioinfo
GlycanGT: A Foundation Model for Glycan Graphs with Pretrained Representation and Generative Learning 1. GlycanGT is a novel foundation model for glycans, leveraging a graph transformer architecture to address the complexity and diversity of glycan structures. This model …
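For readers unfamiliar with the architecture family, here is a generic single-layer graph-transformer sketch (purely illustrative; GlycanGT's actual design is not specified in the excerpt): self-attention over node embeddings, masked so nodes attend only along graph edges.

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    # Generic graph-transformer layer, illustrative of the family only;
    # GlycanGT's actual design is not shown here. Attention is masked
    # so each node attends only to its graph neighbors.
    def __init__(self, dim: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes, adj):
        # nodes: (B, N, dim); adj: (B, N, N) 0/1 matrix with self-loops included
        mask = (1.0 - adj) * -1e9  # large negative bias on non-edges
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(nodes, nodes, nodes, attn_mask=mask)
        return self.norm(nodes + out)
```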
Pretrained using 335,645 whole-slide images, a foundation model is developed to provide representations for slide- and patient-level tasks. It is capable of performing clinical tasks and generating reports even in data-scarce scenarios, such as rare cancer diagnosis and survival …
Selective, Controlled and Domain-Agnostic Unlearning in Pretrained CL.. https://t.co/aSkLyN1KmH --- Newsletter https://t.co/lLfwtmvXkM More story https://t.co/THDNHTvkQS LinkedIn https://t.co/2KUI3AHTZN
#AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning #ComputerVision
Ok, this is really cool! EgoX: generate immersive first-person video from any third-person clip. Contributions:
• We propose a novel framework, EgoX, for synthesizing high-fidelity egocentric video from a single exocentric video by effectively exploiting pretrained video …
Turn High-Volume PDFs into LLM-Ready data with Vision-First Agentic Document AI! LandingAI has released Agentic Document Extraction (ADE) DPT-2 Mini, a lightweight variant of the Document Pretrained Transformer 2 (DPT-2) designed for high-volume document workflows. It’s ideal …
Our code, pretrained weights, and pseudo-labeled internet stereo data will be released. Feel free to watch and star our repo! (8/N) Code: https://t.co/dVR31nz4Fh Webpage: https://t.co/CUa7hsK0q1 Paper:
🚨We converted pretrained LLMs into looped LLMs that can crank up performance by looping for more iterations. Our looped models surpass the performance of the pretrained models we started out with, showing that existing models benefit from increased computational depth. 📜1/9
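A bare-bones sketch of the looping idea (assumed from the tweet, not the authors' code): reuse a pretrained model's block stack for extra forward passes, so effective depth grows with zero new parameters.

```python
import torch.nn as nn

class LoopedBlocks(nn.Module):
    # Sketch of looping: re-apply a pretrained model's transformer blocks
    # for n_loops passes, increasing effective computational depth
    # without adding any new parameters.
    def __init__(self, blocks: nn.ModuleList, n_loops: int = 2):
        super().__init__()
        self.blocks = blocks    # blocks lifted from a pretrained model
        self.n_loops = n_loops  # >1 means the same layers are reused

    def forward(self, hidden):
        for _ in range(self.n_loops):
            for block in self.blocks:
                hidden = block(hidden)  # same weights reused each loop
        return hidden
```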
👁️ Geometry Meets Vision: Revisiting Pretrained Semantics in Distilled Fields
We find that visual-only features (DINO) outperform visual-geometry features (VGGT) in spatial tasks! 👇
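To experiment with the visual-only side, the public DINO release can be loaded via torch.hub. The snippet below is a hedged sketch (entrypoint and method names per the facebookresearch/dino repo; the distilled-field pipeline itself is omitted):

```python
import torch

# Hedged sketch: load the public DINO ViT-S/16 via torch.hub and grab
# patch-token features of the kind distilled-field methods use as
# supervision. The random tensor stands in for a normalized RGB crop.
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

img = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model.get_intermediate_layers(img, n=1)[0]  # (1, 197, 384): CLS + patch tokens
print(feats.shape)
```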