Seungjae (Jay) LEE Profile
Seungjae (Jay) LEE

@JayLEE_0301

Followers 96 · Following 54 · Media 5 · Statuses 23

Ph.D. student @ UMD

Joined January 2022
@JayLEE_0301
Seungjae (Jay) LEE
13 days
RT @jbhuang0604: Woohoo! Imagine, Verify, Execute (IVE) is accepted to CoRL 2025! šŸŽ‰ Congrats to the incredible @umdcs students Seungjae Le…
@JayLEE_0301
Seungjae (Jay) LEE
3 months
RT @furongh: My dream for Physical Artificial Intelligence? An embodied agent that ventures into the wild 🌿, builds its own mental world-mo…
@JayLEE_0301
Seungjae (Jay) LEE
3 months
Imagine, Verify, Execute (IVE) is out! Inspired by how kids explore — IVE imagines new scene configurations, verifies their plausibility, and executes meaningful actions. A step toward more agentic exploration in embodied AI. Check it out šŸ‘‡
@jbhuang0604
Jia-Bin Huang
3 months
Exploration is key for robots to generalize, especially in open-ended environments with vague goals and sparse rewards. BUT, how do we go beyond random poking? Wouldn't it be great to have a robot that explores an environment just like a kid?. Introducing Imagine, Verify,
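For readers curious how an imagine-verify-execute loop might look in code, here is a minimal sketch. The objects and method names (`propose_scene`, `check_plausibility`, `plan_and_act`) are illustrative assumptions, not the actual IVE implementation.

```python
# Minimal sketch of an imagine-verify-execute exploration loop.
# env, vlm, and robot are hypothetical interfaces; all method names
# are assumptions for illustration, not the paper's implementation.

def explore(env, vlm, robot, num_rounds=10):
    """Run a few rounds of imagination-driven exploration."""
    for _ in range(num_rounds):
        observation = env.get_observation()

        # Imagine: ask a vision-language model to propose a novel
        # scene configuration reachable from the current observation.
        imagined_scene = vlm.propose_scene(observation)

        # Verify: reject proposals that are physically or semantically
        # implausible before spending robot time on them.
        if not vlm.check_plausibility(observation, imagined_scene):
            continue

        # Execute: plan and perform actions that move the real scene
        # toward the imagined configuration.
        actions = robot.plan_and_act(observation, goal=imagined_scene)
        env.step(actions)
```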
@JayLEE_0301
Seungjae (Jay) LEE
6 months
Security in AI agents isn't a future concern—it's a now problem. šŸ“œ Read our full paper: šŸ” Website: This would not have been possible without my amazing collaborators: @JeffreyFC1225, @jbhuang0604, @furongh, and @surrealyz.
@JayLEE_0301
Seungjae (Jay) LEE
6 months
Our deep dive into Web AI agent vulnerabilities uncovered key design choices that affect jailbreak success rates. Here's what we found:
🚨 System prompts embedding user goals
🚨 Predefined action spaces in multi-step tasks
🚨 Event stream tracking
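To make the first design choice concrete, the sketch below contrasts a prompt layout that embeds the user's goal directly in the system prompt with one that keeps the goal in a separate user turn. The chat-message structure, goal text, and page snippet are generic assumptions for illustration, not the exact prompts studied in the paper.

```python
# Two ways an agent can pass the user's goal to its backbone LLM.
# The wording and page snippet are hypothetical; only the structural
# difference between (a) and (b) matters here.

user_goal = "Book the cheapest flight from IAD to SFO next Friday."

# (a) Goal embedded in the system prompt: the model tends to treat the
# goal as a trusted, non-negotiable instruction.
messages_embedded = [
    {"role": "system",
     "content": f"You are a web agent. Your task: {user_goal}"},
    {"role": "user", "content": "Current page: <html>...</html>"},
]

# (b) Goal kept in a regular user turn: the model can weigh it against
# its safety guidelines like any other user request.
messages_separated = [
    {"role": "system", "content": "You are a web agent."},
    {"role": "user", "content": user_goal},
    {"role": "user", "content": "Current page: <html>...</html>"},
]
```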
@JayLEE_0301
Seungjae (Jay) LEE
6 months
Still, it is difficult to analyze the nuances of these multifaceted differences and complex signals. šŸ’” That's why we built the Five-Level Harmfulness Evaluation Framework—a first-of-its-kind method to measure jailbreak susceptibility beyond jailbreak success rates.
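A minimal sketch of how a graded harmfulness scale differs from a binary jailbreak flag is shown below. The five level names and the scoring function are hypothetical placeholders, not the framework's actual definitions.

```python
from enum import IntEnum

# Hypothetical five-level harmfulness scale; the real framework's level
# definitions should be taken from the paper, not from this sketch.
class HarmLevel(IntEnum):
    REFUSED = 0        # agent declines the malicious request
    ACKNOWLEDGED = 1   # agent engages but takes no harmful action
    ATTEMPTED = 2      # agent tries a harmful action and fails
    PARTIAL_HARM = 3   # agent completes part of the harmful task
    FULL_HARM = 4      # agent completes the harmful task end to end

def susceptibility(trial_levels):
    """Average graded harmfulness over trials, instead of a binary rate."""
    return sum(trial_levels) / (len(trial_levels) * max(HarmLevel))

print(susceptibility([HarmLevel.REFUSED, HarmLevel.ATTEMPTED, HarmLevel.FULL_HARM]))
```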
@JayLEE_0301
Seungjae (Jay) LEE
6 months
Why are Web AI agents so vulnerable? Let's break it down. We categorize the differences between Web AI agents and standalone LLMs into three core groups šŸ”‘:
1ļøāƒ£ Goal Preprocessing
2ļøāƒ£ Action Space
3ļøāƒ£ Event Stream
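To make the three groups concrete, here is a schematic agent loop showing where goal preprocessing, a predefined action space, and an event stream sit relative to the backbone LLM. The class and method names are assumptions for illustration, not the agents evaluated in the paper.

```python
from dataclasses import dataclass, field

# Schematic web agent wrapping a backbone LLM. Names are illustrative
# assumptions; the point is where the three components sit in the loop.

ACTION_SPACE = ["click", "type", "scroll", "navigate", "submit"]  # 2ļøāƒ£ predefined action space

@dataclass
class WebAgent:
    llm: object                                        # safety-aligned backbone LLM
    event_stream: list = field(default_factory=list)   # 3ļøāƒ£ history of observations and actions

    def preprocess_goal(self, user_goal: str) -> str:
        # 1ļøāƒ£ goal preprocessing: the agent rewrites the user's goal and
        # injects the allowed actions before the goal reaches the LLM.
        return f"Task: {user_goal}\nAllowed actions: {ACTION_SPACE}"

    def step(self, user_goal: str, page_html: str) -> str:
        prompt = self.preprocess_goal(user_goal)
        self.event_stream.append({"observation": page_html})
        action = self.llm.generate(prompt, history=self.event_stream)
        self.event_stream.append({"action": action})
        return action
```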
@JayLEE_0301
Seungjae (Jay) LEE
6 months
It gets even worse. In the second demo, we push the agent further—asking it to infiltrate a network system. At first, it refuses, recognizing the malicious intent. But then something alarming happens: it changes course, navigates the website, and assists in the infiltration.
@JayLEE_0301
Seungjae (Jay) LEE
6 months
In our first demo, we direct a web AI agent to post harsh, insulting comments on an influencer’s Instagram. Shockingly, the agent immediately complies, generating and posting multiple offensive remarks—proving how easily these agents can be weaponized for online harassment.
@JayLEE_0301
Seungjae (Jay) LEE
6 months
Unlike standalone LLMs, web AI agents interact with the internet—browsing, submitting forms, and automating tasks. But this makes them a prime target for adversarial attacks. Our study systematically reveals the reasons.
@JayLEE_0301
Seungjae (Jay) LEE
6 months
🚨 Web AI agents are more vulnerable than you think. Recent studies have raised alarms about their weaknesses, and we uncover WHY—revealing how these agents, even with safety-aligned LLMs, are shockingly prone to adversarial attacks compared to standalone LLMs. Let’s dive in. 🧵
@JayLEE_0301
Seungjae (Jay) LEE
1 year
RT @soumithchintala: Hacker Cup – one of the preeminent coding competitions started an AI track w/ Meta & Microsoft. problems are hardddd –….
@JayLEE_0301
Seungjae (Jay) LEE
1 year
We're presenting VQ-BeT at @icmlconf today! 11:30 AM, Hall C #312. Stop by if you're interested šŸ¤—
@LerrelPinto
Lerrel Pinto
1 year
LLMs swept the world by predicting discrete tokens. But what's the right tool to model continuous, multi-modal, and high-dimensional behaviors? Meet Vector Quantized Behavior Transformer (VQ-BeT), beating or matching diffusion-based models in speed, quality, and diversity. 🧵
@JayLEE_0301
Seungjae (Jay) LEE
1 year
I'm excited to share that the VQ-BeT project has been selected as a šŸŽ‰ spotlight at @icmlconf! See you in Vienna. Also, VQ-BeT has been integrated into LeRobot by @huggingface. Thanks to its size of just 38 MB, you can run the visuomotor control policy at an amazing speed of 12 ms.
@LerrelPinto
Lerrel Pinto
1 year
Nice to see VQ-BeT being incorporated into HF LeRobot. Thanks @RemiCadene and @asoare159 for the beautiful code and for supporting our efforts. One overlooked aspect is how efficient this implementation is -- 38 MB for visuo-motor policies with 12 ms forward inference time on an A6000.
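A hedged sketch of how the LeRobot-integrated policy is typically loaded and timed is shown below. The import path, checkpoint name, observation keys, and tensor shapes follow LeRobot's usual pattern but are assumptions that may differ across library versions.

```python
import torch

# Hedged sketch: load the VQ-BeT policy from LeRobot and run one forward
# pass on dummy inputs. Import path, checkpoint name, observation keys,
# and shapes are assumptions and may differ by LeRobot version.
from lerobot.common.policies.vqbet.modeling_vqbet import VQBeTPolicy

policy = VQBeTPolicy.from_pretrained("lerobot/vqbet_pusht")
policy.eval()

observation = {
    "observation.image": torch.rand(1, 3, 96, 96),  # dummy camera frame
    "observation.state": torch.rand(1, 2),          # dummy proprioception
}

with torch.inference_mode():
    action = policy.select_action(observation)
print(action.shape)
```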
@JayLEE_0301
Seungjae (Jay) LEE
1 year
RT @RemiCadene: VQ-BeT is now available within LeRobot 🄳 It's a new learning framework that improves over Diffusion Policy!!! You should c…
@JayLEE_0301
Seungjae (Jay) LEE
1 year
RT @LerrelPinto: We have been looking into improving language-conditioned / multi-task policies recently and found something quite surprisi….
@JayLEE_0301
Seungjae (Jay) LEE
1 year
RT @LerrelPinto: Building robot intelligence requires high-quality robot data. But far too many tools to collect data are closed and custom….
@JayLEE_0301
Seungjae (Jay) LEE
1 year
RT @LerrelPinto: LLMs swept the world by predicting discrete tokens. But what’s the right tool to model continuous, multi-modal, and high d….
@JayLEE_0301
Seungjae (Jay) LEE
2 years
RT @LerrelPinto: We just released TAVI -- a robotics framework that combines touch and vision to solve challenging dexterous tasks in under….