Alejandro Lozano

@Ale9806_

Followers
163
Following
46
Media
7
Statuses
48

Ph.D. student @ Stanford AI Lab. Building open biomedical AI.

Stanford, California
Joined May 2023
@Ale9806_
Alejandro Lozano
1 month
RT @Zhang_Yu_hui: What if we could build a virtual cell to predict how it responds to drugs or genetic perturbations? Super excited to i…
0
20
0
@Ale9806_
Alejandro Lozano
4 months
RT @AkliluJosiah2: There's growing excitement around VLMs and their potential to transform surgery, but where exactly are we on the path to…
0
6
0
@Ale9806_
Alejandro Lozano
5 months
Shout out to my stellar first co-authors @minwsun and @jmhb0 for leading this effort, as well as the incredible team of computer scientists, statisticians, biologists, and clinicians that made this possible: @jnirsch, Christopher Polzak, @Zhang_Yu_hui, @cliangyu_, Jeffrey Gu,
0
0
1
@Ale9806_
Alejandro Lozano
5 months
Earlier this year, we released the BIOMEDICA dataset, featuring 24 million unique image-caption pairs and 30 million image references derived from open-source biomedical literature. It's been great to see the community engaging with it; we're currently seeing around 6K downloads
3
9
26
@Ale9806_
Alejandro Lozano
5 months
Introducing video differencing, a new task for detecting differences between video frames. Notably, even the most advanced video LLMs struggle with this challenge, underscoring the long road ahead!
@jmhb0
James Burgess (at CVPR)
5 months
Large video-language models like LLaVA-Video can do single-video tasks. But can they compare videos? Imagine you're learning a sports skill like kicking: can an AI tell how your kick differs from an expert video? Introducing "Video Action Differencing" (VidDiff), ICLR 2025.
0
0
4
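The video-differencing idea above can be illustrated with a toy sketch: embed each frame of two time-aligned videos, then score the time steps where the two differ. Everything here (the function name, the assumption that per-frame features are already extracted and aligned) is illustrative, not VidDiff's actual method.

```python
import numpy as np

def frame_difference_scores(feats_a, feats_b):
    """Score how much two time-aligned videos differ at each time step
    (toy illustration of the video-differencing idea, not VidDiff itself).

    feats_a, feats_b: (T, D) per-frame feature vectors for two videos.
    Returns a (T,) array of cosine dissimilarities in [0, 2], where 0
    means the frames look identical to the feature extractor.
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)   # per-frame cosine similarity
    return 1.0 - cosine            # higher = more different
```

Peaks in the returned score would mark the segments where, say, your kick deviates from the expert's.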
@Ale9806_
Alejandro Lozano
6 months
RT @kevinywu: Which LLMs work best for medical queries? Introducing MedArena, the first chatbot arena just for clinicians!…
0
8
0
@Ale9806_
Alejandro Lozano
7 months
Check out our new work accepted to ICLR 2025. We introduce time-to-event (TTE) pretraining to leverage temporal supervision from longitudinal EHR data and estimate the risk of future events. By scaling to 225M clinical events, we achieve SOTA prognostic performance!
@Zepeng_Huo
Frazier Huo
7 months
Excited to share that our latest research, Time-to-Event Pretraining for 3D Medical Imaging, has been accepted at ICLR 2025! Improving Medical Image Pretraining with Time-to-Event…
0
1
3
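The time-to-event pretraining objective described above can be sketched as a discrete-time survival loss: the model outputs a per-interval event probability, and censored records contribute only the survival term. This is a minimal NumPy illustration under an assumed time discretization, not the paper's implementation.

```python
import numpy as np

def discrete_tte_nll(hazards, event_bin, observed):
    """Censoring-aware negative log-likelihood for discrete-time
    time-to-event modeling (illustrative sketch only).

    hazards:   (B, n_bins) predicted per-interval event probabilities in (0, 1)
    event_bin: (B,) interval index where the event or censoring occurred
    observed:  (B,) 1.0 if the event was observed, 0.0 if censored
    """
    B, n_bins = hazards.shape
    bins = np.arange(n_bins)[None, :]
    survived = bins < event_bin[:, None]   # intervals survived event-free
    # log-probability of surviving every interval before the event bin
    log_surv = (survived * np.log1p(-hazards)).sum(axis=1)
    # log-probability of the event firing in its bin
    log_event = np.log(hazards[np.arange(B), event_bin])
    # censored samples contribute only the survival term
    return -(log_surv + observed * log_event).mean()
```

Scaling this supervision to hundreds of millions of clinical events is what the tweet refers to; the loss itself stays this simple.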
@Ale9806_
Alejandro Lozano
7 months
RT @XiaohanWang96: Introducing Temporal Preference Optimization (TPO), a video-centric post-training framework that enhances temporal gr…
0
12
0
@Ale9806_
Alejandro Lozano
7 months
Work done at @StanfordAILab.
0
0
3
@Ale9806_
Alejandro Lozano
7 months
[10/10] @cliangyu_, @jnirsch, Jeffrey Gu, Ivan Lopez, @AkliluJosiah2, Austin Katzer, Collin Chiu, Anita Rau, @XiaohanWang96, @Zhang_Yu_hui, Alfred Song, @robtibshirani, @yeung_levy.
0
0
2
@Ale9806_
Alejandro Lozano
7 months
[9/10] Shout out to my stellar first co-authors @minwsun and @jmhb0 for leading this effort, as well as the incredible team of computer scientists, statisticians, biologists, and clinicians that made this possible:
0
0
2
@Ale9806_
Alejandro Lozano
7 months
[8/10] While our models offer state-of-the-art performance, all evaluations indicate that there is still significant room for improvement. We release all our contributions under a permissive license to facilitate broader use and further development.
0
0
2
@Ale9806_
Alejandro Lozano
7 months
[7/10] We demonstrate the utility and accessibility of our resource by training BMC-CLIP, a suite of CLIP-style models continually pretrained on our dataset using different training recipes via streaming.
0
0
2
@Ale9806_
Alejandro Lozano
7 months
[6/10] We demonstrate the utility and accessibility of our resource by training BMC-CLIP, a suite of CLIP-style models continually pretrained on our dataset using different training recipes via streaming.
0
0
2
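CLIP-style models like BMC-CLIP are trained with a symmetric contrastive loss that pulls matched image-caption embeddings together and pushes mismatched pairs apart. A minimal NumPy sketch of that loss follows; it is a generic illustration, not the BMC-CLIP training code, and the temperature value is an assumption.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used by CLIP-style models (sketch only).

    img_emb, txt_emb: (B, D) embeddings of paired images and captions;
    row i of each matrix is assumed to describe the same sample.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (B, B) similarity matrix
    labels = np.arange(len(logits))         # matching pairs on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Continual pretraining then just means resuming this objective from an existing CLIP checkpoint while streaming new biomedical pairs.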
@Ale9806_
Alejandro Lozano
7 months
[5/10] Our archive is hosted on Hugging Face, enabling streaming and eliminating the need to download 3.9 TB of data locally in order to use BIOMEDICA.
0
0
2
@Ale9806_
Alejandro Lozano
7 months
[4/10] Our archive is serialized as a WebDataset, providing 3x-10x higher I/O rates than random file access (decreasing GPU idle time).
0
0
1
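WebDataset gets its I/O advantage by packing each sample's files adjacently into tar shards that are read in one sequential pass. A stdlib-only sketch of that layout follows; the real archive uses the `webdataset` library, and these helper names are made up for illustration.

```python
import io
import tarfile

def write_shard(path, samples):
    """Write (key, caption) samples into a tar shard, WebDataset-style:
    files belonging to one sample share a key prefix and sit adjacently,
    so a reader never has to seek around the archive.
    """
    with tarfile.open(path, "w") as tar:
        for key, caption in samples:
            data = caption.encode("utf-8")
            info = tarfile.TarInfo(name=f"{key}.txt")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def read_shard(path):
    """Stream samples back in order with a single sequential pass."""
    with tarfile.open(path, "r") as tar:
        for member in tar:
            yield member.name, tar.extractfile(member).read().decode("utf-8")
```

Because reads are strictly sequential, shards can also be streamed straight over HTTP, which is what makes the hosted-archive setup above practical.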
@Ale9806_
Alejandro Lozano
7 months
[3/10] Rather than pre-filtering to specific domains, we provide ~10x more metadata and expert-derived annotations at various granularities. We then offer a pipeline to use these metadata and filter on demand, accommodating different interests in the community.
0
0
2
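Filter-on-demand can be sketched as a predicate applied to each streamed sample's metadata dict. The field names below are hypothetical; the actual BIOMEDICA metadata schema defines its own keys.

```python
def filter_by_metadata(samples, **criteria):
    """Yield only the samples whose metadata matches every criterion.

    Works on any iterable of dicts, so it composes with streaming:
    samples are filtered as they arrive rather than after a full download.
    Key names like "domain" are illustrative, not the real schema.
    """
    for sample in samples:
        meta = sample.get("metadata", {})
        if all(meta.get(key) == value for key, value in criteria.items()):
            yield sample

# Toy usage: keep only pathology samples from a mixed stream.
samples = [
    {"caption": "H&E stained tissue", "metadata": {"domain": "pathology"}},
    {"caption": "chest radiograph", "metadata": {"domain": "radiology"}},
]
pathology = list(filter_by_metadata(samples, domain="pathology"))
```

Each community can thus carve out its own subset (one domain, one annotation granularity) without the dataset shipping pre-filtered views.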
@Ale9806_
Alejandro Lozano
7 months
[2/10] Our framework produces a comprehensive archive with over 24M unique image-text pairs, including image captions, image references, full metadata, and human-derived annotations from over 6M articles, all freely usable by the community for model training.
0
0
3
@Ale9806_
Alejandro Lozano
7 months
Biomedical datasets are often confined to specific domains, missing valuable insights from adjacent fields. To bridge this gap, we present BIOMEDICA: an open-source framework to extract and serialize PMC-OA.
Paper:
Website:
13
55
145
@Ale9806_
Alejandro Lozano
8 months
RT @Zhang_Yu_hui: Vision language models are getting better, but how do we evaluate them reliably? Introducing AutoConverter: transformi…
0
74
0