Joydeep Biswas Profile
Joydeep Biswas

@Joydeepb_robots

Followers: 676 · Following: 560 · Media: 14 · Statuses: 128

Associate Professor, Computer Science, UT Austin. Visiting Professor, Nvidia. Robot doctor / CS Professor / he/him. Also @[email protected]

Austin, TX
Joined October 2020
@Joydeepb_robots
Joydeep Biswas
27 days
Of course a simulator is useful when finetuning an LLM to generate robot programs. However, how can we synthesize simulation environments *on the fly* for *novel and arbitrary tasks*? Robo-Instruct shows how! Research led by @ZichaoHu99 in my lab - and accepted at COLM 2025
@ZichaoHu99
Zichao
28 days
🚀 Generating synthetic training data for code LLMs in robotics? 🤖 Frustrated with too many programs that look correct but are actually invalid? Introducing Robo-Instruct — a simulator-augmented framework that: ✅ Verifies programs against robot API constraints ✅ Aligns …
Tweet media one
Tweet media two
1 reply · 0 reposts · 10 likes
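The verification idea described above, executing generated programs against a simulator that enforces robot API constraints, can be sketched in a few lines of Python. The RobotSim class, its three-call API, and the sample programs are hypothetical stand-ins for illustration, not the actual Robo-Instruct code:

class APIConstraintError(Exception):
    """Raised when a generated program violates a robot API precondition."""

class RobotSim:
    """Hypothetical minimal robot API with stateful preconditions."""
    def __init__(self):
        self.location = "start"
        self.holding = None

    def go_to(self, place):
        self.location = place

    def pick(self, obj):
        # Precondition: gripper must be empty before picking.
        if self.holding is not None:
            raise APIConstraintError(f"pick({obj!r}) while holding {self.holding!r}")
        self.holding = obj

    def place(self, obj):
        # Precondition: can only place the object currently held.
        if self.holding != obj:
            raise APIConstraintError(f"place({obj!r}) while holding {self.holding!r}")
        self.holding = None

def verify_program(source):
    """Run an LLM-generated program against the simulator; reject it
    if any API constraint is violated during execution."""
    robot = RobotSim()
    api = {name: getattr(robot, name) for name in ("go_to", "pick", "place")}
    try:
        exec(source, {"__builtins__": {}}, api)
        return True
    except APIConstraintError:
        return False

# A program that looks plausible but is invalid: it picks twice in a row.
bad = "go_to('kitchen')\npick('cup')\npick('plate')"
good = "go_to('kitchen')\npick('cup')\ngo_to('table')\nplace('cup')"
print(verify_program(bad), verify_program(good))  # False True

Execution catches exactly the class of errors the tweet describes: programs that parse fine but violate API preconditions at run time.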
@jasperchlee
Jasper Lee
30 days
📢 Call for AAAI-26 Doctoral Consortium Submissions (Deadline: September 30 AoE) We invite doctoral students in the AI community to apply to the Doctoral Consortium at AAAI-26, held January 20–21, 2026, in Singapore!
1 reply · 3 reposts · 5 likes
@Joydeepb_robots
Joydeep Biswas
28 days
Need 3D structure and motion from arbitrary in-the-wild videos, and VSLAM/COLMAP letting you down (moving objects, imprecise calibration, degenerate motion)? Try ViPE! Led by @huangjh_hjh, and with a fantastic team from Nvidia.
@huangjh_hjh
Jiahui Huang
30 days
[1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsics, and dense metric depth from casual videos! Running at 3–5 FPS, ViPE handles cinematic shots, dashcams, and even 360° panoramas. 🔗 https://t.co/1mGDxwgYJt
0 replies · 0 reposts · 7 likes
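Per the quoted tweet, ViPE recovers exactly the three quantities (camera pose, intrinsics, dense metric depth) needed to lift video frames into a metric 3D point cloud. Below is a generic pinhole back-projection sketch, not ViPE's actual API; the function name and toy inputs are illustrative:

import numpy as np

def backproject(depth, K, T_wc):
    """Lift a dense metric depth map to a world-frame point cloud.
    depth: (H, W) metric depth; K: (3, 3) intrinsics;
    T_wc: (4, 4) camera-to-world pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # normalized camera-frame rays (z = 1)
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its metric depth
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (pts_h @ T_wc.T)[:, :3]           # rigid transform into world frame

# Toy example: 2x2 depth map at 2 m, identity pose, simple pinhole intrinsics.
K = np.array([[100.0, 0.0, 1.0], [0.0, 100.0, 1.0], [0.0, 0.0, 1.0]])
cloud = backproject(np.full((2, 2), 2.0), K, np.eye(4))
print(cloud.shape)  # (4, 3): one 3D point per pixel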
@AndrewYNg
Andrew Ng
3 months
I am alarmed by the proposed cuts to U.S. funding for basic research, and the impact this would have for U.S. competitiveness in AI and other areas. Funding research that is openly shared benefits the whole world, but the nation it benefits most is the one where the research is …
107 replies · 473 reposts · 3K likes
@ArthurKZhang
Arthur King Zhang
4 months
🗺️ Scalable mapless navigation demands open-world generalization. Meet CREStE: our SOTA navigation model that nails path planning in novel scenes with just 3 hours of data, navigating 2 km with just 1 human intervention. Project Page 🌐: https://t.co/ZX4g47Pmiv A thread 🧵
2 replies · 6 reposts · 22 likes
@Joydeepb_robots
Joydeep Biswas
7 months
It is important, and possible (!!!), to test models in readily understandable settings: this leads to faster innovation, and *anyone* can then intuit what the SOTA can and cannot do. That matters, since such models are increasingly being deployed in settings that affect everyone.
0 replies · 0 reposts · 0 likes
@Joydeepb_robots
Joydeep Biswas
7 months
Q: Do you need to go to PhD-level questions to stress-test the SOTA reasoning LLMs? A: No. See our new benchmark based on the NPR Sunday Puzzle Challenge. You don't need esoteric knowledge to understand the questions, or to verify the answers. https://t.co/VV8H8zUrOM
arxiv.org
Existing benchmarks for frontier models often test specialized, "PhD-level" knowledge that is difficult for non-experts to grasp. In contrast, we present a benchmark with 594 problems based on the...
@ArjunGuha
Arjun Guha
7 months
We present a new benchmark for reasoning models that reveals capability gaps and failure modes that are not evident in existing benchmarks. E.g., we find that o1 / o3-mini-high are significantly better at verbal reasoning than other models.
Tweet media one
1 reply · 0 reposts · 2 likes
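Part of this benchmark's appeal is that answers are short and checkable by non-experts, so scoring reduces to normalized string matching. A minimal sketch; the normalization rules and example items are assumptions for illustration, not the paper's released harness:

import re

def normalize(ans):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    ans = re.sub(r"[^a-z0-9 ]", "", ans.lower())
    ans = re.sub(r"\b(a|an|the)\b", "", ans)
    return re.sub(r"\s+", " ", ans).strip()

def score(predictions, references):
    """Fraction of predictions matching the reference after normalization."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

print(score(["The Eiffel Tower!", "paris"], ["eiffel tower", "London"]))  # 0.5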
@chengchunhsu
Cheng-Chun Hsu
10 months
How can robots learn household tasks from videos using just an iPhone, no robot hardware? Introducing SPOT, an object-centric framework that learns from minimal human demos, capturing the task-related constraints. (1/n)
2 replies · 14 reposts · 55 likes
@foxglove
Foxglove
10 months
🤔 NVIDIA's ReMEmbR project integrates large language and vision-language models with retrieval-augmented generation (RAG) to enable robots to reason and act autonomously over extended periods. By building and querying a long-horizon memory using vision transformers and vector …
2 replies · 4 reposts · 13 likes
@Amanda_A_Adkins
Amanda Adkins
11 months
Do you need a dense map to localize your robot over long-term deployments? We don't think so! Want to know how? If you're at #IROS2024, come check out my presentation on "ObVi-SLAM: Long-Term Object-Visual SLAM" at 10:15 (October 17) in Room 1.
1 reply · 2 reposts · 5 likes
@Joydeepb_robots
Joydeep Biswas
11 months
Comet C/2023 A3, captured from East Austin, over downtown Austin.
Tweet media one
1 reply · 1 repost · 16 likes
@Joydeepb_robots
Joydeep Biswas
11 months
RT @adityaakella: 🚀PhD applicants: Want to revolutionize OS design? Apply to UT to build LDOS—the next-gen learned OS—and work on cutting-e…
0 replies · 2 reposts · 0 likes
@gregd_nlp
Greg Durrett
1 year
This project started with us annoyed at papers evaluating CoT "reasoning" with only GSM8k & MATH. We didn't expect to find such strong evidence that these are the only type of problem where CoT helps! Credit to @juand_r_nlp & @kmahowald for driving the rigorous meta-analysis!
@ZayneSprague
Zayne Sprague
1 year
To CoT or not to CoT?🤔 300+ experiments with 14 LLMs & systematic meta-analysis of 100+ recent papers 🤯Direct answering is as good as CoT except for math and symbolic reasoning 🤯You don’t need CoT for 95% of MMLU! CoT mainly helps LLMs track and execute symbolic computation
Tweet media one
Tweet media two
Tweet media three
6 replies · 32 reposts · 163 likes
@Joydeepb_robots
Joydeep Biswas
1 year
Thrilled to share one of our projects at #NVIDIA this Summer - enabling long-horizon open-world perception and recall for mobile robots! Fantastic work by @_abraranwar over his internship, jointly with Yan Chang, John Welsh, and @SohaPouya https://t.co/nQn3iywCEz
@NVIDIAAIDev
NVIDIA AI Developer
1 year
How can #robots remember? 🤖 💭 For robots to understand and respond to questions that require complex multi-step reasoning in scenarios over long periods of time, we built ReMEmbR, a retrieval-augmented memory for embodied robots. 👀 Technical deep dive from #NVIDIAResearch
2 replies · 9 reposts · 46 likes
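ReMEmbR is described here as retrieval-augmented memory for embodied robots: store time-stamped observations as vectors, then retrieve the most relevant entries at question time. A toy version of that pattern follows; the hash-based embed function is a stand-in for a real VLM/text encoder, and none of this is the actual ReMEmbR implementation:

import re
import numpy as np

DIM = 256

def embed(text):
    """Bag-of-words hashing embedding: a toy stand-in for a real encoder."""
    v = np.zeros(DIM)
    for tok in re.findall(r"[a-z0-9]+", text.lower()):
        v[hash(tok) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

class Memory:
    """Time-stamped caption store with cosine-similarity retrieval."""
    def __init__(self):
        self.entries = []  # list of (timestamp, caption, embedding)

    def add(self, t, caption):
        self.entries.append((t, caption, embed(caption)))

    def query(self, question, k=2):
        """Return the k stored captions most similar to the question."""
        q = embed(question)
        ranked = sorted(self.entries, key=lambda e: -float(e[2] @ q))
        return [(t, c) for t, c, _ in ranked[:k]]

mem = Memory()
mem.add(10.0, "passed a red fire extinguisher near the elevator")
mem.add(55.0, "saw an open door to the robotics lab")
mem.add(90.0, "charging dock located in the kitchen corner")
print(mem.query("where is the fire extinguisher?", k=1))

The timestamps are what make long-horizon questions ("when did you last see X?") answerable, which is the point of building the memory rather than reasoning over raw video.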
@ArthurKZhang
Arthur King Zhang
1 year
I am thrilled to announce that UT CODa has been accepted at #IeeeTro24! Thank you to all of my collaborators at @Joydeepb_robots @amrl_ut who helped make this work possible. We are also excited to share the release of CODa version 2. More on CODa v2 in this🧵
3 replies · 2 reposts · 12 likes
@UTGoodSystems
Good Systems
1 year
Last semester, a new 1-credit-hour course we co-designed, “Essentials of AI for Life and Society,” made its debut, and now the lecture recordings are available online. Check them out here: https://t.co/1khGcCD2r3 @UTCompSci, @PeterStone_TX, @Joydeepb_robots #TXCS
youtube.com
The Essentials of AI for Life and Society features faculty from several parts of The University of Texas at Austin and covers fundamental concepts for AI lit...
0 replies · 3 reposts · 8 likes
@MLFoundations
Institute for Foundations of Machine Learning
1 year
@NSF Funded Expedition Project Uses AI to Rethink Computer Operating Systems. Led by @adityaakella, co-PIs are @Joydeepb_robots, @swarat, Shuchi Chawla, @IsilDillig, @daehyeok_kim, Chris Rossbach, @AlexGDimakis and Sanjay Shakkottai. 👏👏👏 https://t.co/xxXR5hyhdS
cns.utexas.edu
Aditya Akella leads the project that aims to boost performance of OSes and help enable assistant robots, autonomous vehicles and smart cities.
0 replies · 2 reposts · 18 likes
@swarat
Swarat Chaudhuri
1 year
I am beyond excited to be part of this new @NSF CISE #Expedition on AI for systems: https://t.co/Vjew14h4CH. Our goal is to build a new kind of OS in which much of the decision-making is done by ML. This is a perfect playground for research on trustworthy/verified ML and …
4 replies · 7 reposts · 57 likes
@Joydeepb_robots
Joydeep Biswas
1 year
+1 to Vijay's book - it covers issues that I had to learn "on the fly". Wish I'd had it when I started! :-)
0 replies · 0 reposts · 6 likes