Princeton Vision & Learning Lab
@PrincetonVL
Followers: 2K · Following: 1 · Media: 16 · Statuses: 27
https://t.co/Bd2gisj8hY
Princeton, NJ · Joined March 2023
@PrincetonVL · 25 days
RT @ShmuelBerman: Can Visual Language Models (VLMs) do non-local visual reasoning, i.e., piecing together scattered visual evidence? Humans…
@PrincetonVL · 26 days
(6/n) Code for our assets is released! For more details and results, check out our website and paper. 📝 ArXiv: 💻 GitHub: 🔗 Project:
@PrincetonVL · 26 days
(5/n) Scaling the number of training assets leads to real-world gains on the door-opening task, as well as improvements in movable-part segmentation and policy generalization.
@PrincetonVL · 26 days
(4/n) Assets have tunable physical properties to help bridge the sim-to-real gap. Provided assets come with defaults for standard values such as friction, density, stiffness, etc.
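Properties like these map directly onto simulator asset files. As an illustrative hand-written MJCF fragment (generic placeholder values, not Infinigen-Sim's actual exported defaults): contact friction and density are set per-geom, while stiffness and damping are set per-joint.

```xml
<!-- Illustrative MJCF sketch of tunable physical properties;
     all numeric values here are generic placeholders, not Infinigen-Sim output. -->
<mujoco model="drawer_unit">
  <worldbody>
    <body name="cabinet">
      <!-- friction = "sliding torsional rolling"; density in kg/m^3 -->
      <geom type="box" size="0.3 0.3 0.3" density="600" friction="0.8 0.005 0.0001"/>
      <body name="drawer">
        <joint name="slide" type="slide" axis="1 0 0" range="0 0.25"
               stiffness="0" damping="2"/>
        <geom type="box" size="0.25 0.25 0.1" density="400" friction="0.6 0.005 0.0001"/>
      </body>
    </body>
  </worldbody>
</mujoco>
```

Tuning these attributes per-asset (or randomizing them across training variants) is a common way to narrow the sim-to-real gap.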
@PrincetonVL · 26 days
(3/n) Infinigen-Sim provides native exporters to the MJCF, URDF, and USD file formats.
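For readers unfamiliar with these formats: an articulated asset is a set of rigid links connected by joints. A minimal hand-written URDF example of a single hinged door (purely illustrative, not an actual Infinigen-Sim export) looks like this:

```xml
<!-- Minimal URDF sketch of an articulated object: two links, one revolute joint.
     Hand-written illustration only; not generated by Infinigen-Sim. -->
<robot name="cabinet_door">
  <link name="frame"/>
  <link name="door">
    <inertial>
      <mass value="2.0"/>
      <inertia ixx="0.1" iyy="0.1" izz="0.1" ixy="0" ixz="0" iyz="0"/>
    </inertial>
  </link>
  <joint name="hinge" type="revolute">
    <parent link="frame"/>
    <child link="door"/>
    <axis xyz="0 0 1"/>
    <limit lower="0" upper="1.57" effort="10" velocity="1"/>
  </joint>
</robot>
```

MJCF is MuJoCo's native equivalent, and USD is the scene format used by Omniverse/Isaac-style pipelines; exporting to all three lets the same asset load in multiple simulators.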
@PrincetonVL · 26 days
(2/n) Our assets feature accurate joints and are highly detailed. Our procedural pipeline generates infinite variations in asset geometry, giving broad coverage without manual per-instance modeling.
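The core idea behind procedural variation can be sketched in a few lines: sample an asset's shape parameters from a seeded generator, so every seed yields a distinct but reproducible instance. This is a generic illustration with hypothetical parameter names, not Infinigen's actual API.

```python
import random
from dataclasses import dataclass

@dataclass
class DoorParams:
    """Hypothetical shape parameters for one procedurally generated door."""
    width: float
    height: float
    thickness: float
    n_panels: int

def sample_door(seed: int) -> DoorParams:
    """Sample one door variant; the same seed always yields the same geometry."""
    rng = random.Random(seed)
    return DoorParams(
        width=rng.uniform(0.6, 1.0),
        height=rng.uniform(1.8, 2.2),
        thickness=rng.uniform(0.03, 0.05),
        n_panels=rng.randint(1, 4),
    )

# Each seed is a distinct asset instance; the supply of variants is unbounded.
variants = [sample_door(s) for s in range(3)]
```

Because generation is deterministic in the seed, a dataset of "infinite" variations is really just a range of integers plus the generator.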
@PrincetonVL · 26 days
(1/n) Robots must learn to interact with articulated objects. While training in simulation is promising, high-quality articulated assets remain scarce. We present Infinigen-Sim: procedurally generated articulated simulation assets for robot learning. 🧵
@PrincetonVL · 2 months
Get started with RRT camera trajectories here:
OcMesher:
Infinigen v1.14 release notes:
🧵3/3
@PrincetonVL · 2 months
We also added ultra-detailed house meshes: this adapts our OcMesher octree marching-cubes implementation to work with any mesh, including whole houses! Code and docs below 🧵2/3
@PrincetonVL · 2 months
Infinigen v1.14 added new camera trajectories & options for ultra-detailed house meshes: camera trajectories can now use RRT* dynamic camera motions (inspired by TartanAir!). Here it is on "hard mode"; you can customize the difficulty (see video). More demos below! 🧵1/3
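For context on the trajectory method mentioned above: RRT grows a tree of collision-free waypoints from a start point toward a goal by repeatedly extending the nearest node toward a random sample. A minimal 2-D sketch of plain RRT (RRT*, which Infinigen uses, additionally rewires the tree for shorter paths; this simplified version is not Infinigen's implementation):

```python
import math
import random

def rrt(start, goal, is_free, step=0.1, goal_tol=0.15, iters=2000, seed=0):
    """Minimal 2-D RRT sketch: grow a tree of collision-free waypoints toward a goal.

    Plain RRT for brevity; RRT* would also rewire nearby nodes to shorten paths.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random point in the unit square (goal-biased 10% of the time).
        sample = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        # Extend the nearest existing node one step toward the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        x, y = nodes[i]
        d = math.dist((x, y), sample)
        if d == 0:
            continue
        new = (x + step * (sample[0] - x) / d, y + step * (sample[1] - y) / d)
        if not is_free(new):  # reject waypoints that collide
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent links back to the start to recover the path.
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# Free space here is a diagonal corridor; the camera must stay inside it.
path = rrt((0.1, 0.1), (0.9, 0.9), is_free=lambda p: abs(p[0] - p[1]) < 0.5)
```

For camera motion, the resulting waypoint sequence would then be smoothed and time-parameterized; harder "modes" correspond to tighter free space and more aggressive motion.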
@PrincetonVL · 2 months
Infinigen v1.11 release notes:
Infinigen on PyPI:
Blender 4.2:
🧵2/2
pypi.org: Infinite Photorealistic Worlds using Procedural Generation
@PrincetonVL · 2 months
Infinigen v1.11 overhauled the system to use Blender 4.2, and you can now pip install infinigen. See thread for info 🧵1/2
@PrincetonVL · 2 months
Material segmentation image was generated with:
Documentation on camera setups is here:
(v1.10.0 was first released 2024-10-28; see the changelog here.) 🧵3/3
github.com: Add Configuring Cameras documentation; add initial config for multiview cameras surrounding a point of interest; add MaterialSegmentation output pass; add passthrough mode to direct manage_jobs stdout...
@PrincetonVL · 2 months
Material segmentation tells you what material name & implementation created every pixel. 🧵2/3
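Conceptually, a per-pixel material ground-truth pass is just an integer id map plus a lookup table from id to material name. This toy sketch (invented ids and names; Infinigen's actual output format may differ) shows how such a pass would be consumed downstream:

```python
# Toy illustration of consuming a material-segmentation pass:
# a per-pixel id map plus an id -> material-name table.
# Ids, names, and layout here are invented for the example.
material_names = {0: "background", 1: "wood_tiles", 2: "ceramic_glaze"}

seg = [  # a tiny 3x4 "image" of material ids
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [2, 2, 2, 1],
]

def materials_used(seg_ids, names):
    """Count pixels per material name from a per-pixel id map."""
    counts = {}
    for row in seg_ids:
        for mid in row:
            name = names[mid]
            counts[name] = counts.get(name, 0) + 1
    return counts

print(materials_used(seg, material_names))
```

In practice the id map ships as an image the same size as the render, so any pixel in the RGB output can be traced back to the material generator that produced it.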
@PrincetonVL · 2 months
Infinigen v1.10 added new tools & docs for creating camera setups (including multi-view!), and new material-id ground truth. See thread for more! 🧵1/3
@PrincetonVL · 2 months
Install Infinigen here:
Use external assets with the Infinigen placement system; docs here:
(v1.8.0 was first released August 23, 2024; see the changelog here.) 🧵2/2
github.com: Infinite Photorealistic Worlds using Procedural Generation - princeton-vl/infinigen
@PrincetonVL · 2 months
Infinigen v1.8.0 added tools to use Objaverse or other assets with Infinigen-Indoors! See thread for docs 🧵1/2
@PrincetonVL · 1 year
Infinigen v1.7.0 is out! Infinigen now automatically generates point-trajectory ground truth and camera IMU data. See the full release notes here:
@PrincetonVL · 1 year
Infinigen v1.6.0 is out! We have added more realistic tile material generators, a tool to generate data with floating objects, and more! See the full release notes here:
@PrincetonVL · 1 year
Check out Infinigen Indoors: a procedural generator of unlimited 3D indoor scenes. Useful for creating AI training data, 3D assets, simulator environments, and more. Fully procedural and fully customizable. GitHub: Project: