Oscar Le

@oscarle_x

Followers
4K
Following
17K
Media
356
Statuses
5K

Cofounder & CEO of SilverAI - SnapEdit, Fitroom. 50M users. Ph.D in CS. Interested in AI, Future Tech, Startups. My blog https://t.co/ClDIpvSPIn

London
Joined July 2016
@oscarle_x
Oscar Le
10 months
19
2
184
@oscarle_x
Oscar Le
3 hours
Pair Codex CLI (or the SDK) with gpt-5-nano via the API; you will not be disappointed.
0
0
0
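For what it's worth, that pairing is just a model switch in Codex's configuration. A minimal sketch, assuming Codex CLI reads `~/.codex/config.toml` and honors a `model` key (the path and key name are assumptions; check your Codex version's docs):

```toml
# ~/.codex/config.toml (sketch; keys assumed, verify against your install)
model = "gpt-5-nano"
```

The same switch can usually be made per-invocation with a model flag instead of a persistent config change.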
@oscarle_x
Oscar Le
23 hours
Deepinfra already has all three recent OCR models up
0
0
2
@oscarle_x
Oscar Le
23 hours
First time seeing gpt-5-codex bashing gpt-5-pro's code. And pro said: "yes, you are absolutely right"
0
0
1
@oscarle_x
Oscar Le
1 day
Super interested in building AI Data Scientists. Seems to be within reach with current frontier LLMs.
@AdinaYakup
Adina Yakup
2 days
DeepAnalyze🔥Agentic LLM for autonomous data science from RUC data Lab. Model: https://t.co/0ldZBmJIxs Dataset: https://t.co/dCUyv7G52e Paper: https://t.co/dlqciWTfMi ✨ Fully open-source ✨ End-to-end automation ✨ Handles all data types
0
0
0
@oscarle_x
Oscar Le
3 days
This Python package is very useful for Data Scientists who work interactively with Jupyter Notebooks. Repo: https://github.com/knownsec/aipyapp
0
0
0
@oscarle_x
Oscar Le
3 days
1. AI agent writes the model training code
2. Spawns GPU servers by itself
3. SSHes in and trains models with tmux by itself
4. Downloads the model back to local and destroys the GPU server by itself

Saves money on idle GPU servers. And I don't need to touch the keyboard.
1
0
2
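The loop above can be sketched as a small orchestrator. The `provision`/`train`/`download`/`destroy` hooks here are hypothetical placeholders for whatever cloud CLI or SDK the agent actually drives; the point is the `try`/`finally`, which guarantees teardown and is what prevents the idle-GPU bill:

```python
from typing import Callable

def run_remote_training(
    provision: Callable[[], str],     # returns a server id
    train: Callable[[str], None],     # e.g. ssh in and launch training inside tmux
    download: Callable[[str], None],  # e.g. scp the checkpoint back to local disk
    destroy: Callable[[str], None],   # tear the server down
) -> None:
    """Provision a GPU server, train, fetch artifacts, and ALWAYS tear down."""
    server_id = provision()
    try:
        train(server_id)
        download(server_id)
    finally:
        destroy(server_id)  # runs even if training or download fails

# Stub demo: record the order of operations instead of touching a real cloud
events = []
run_remote_training(
    provision=lambda: (events.append("provision") or "srv-1"),
    train=lambda s: events.append(f"train:{s}"),
    download=lambda s: events.append(f"download:{s}"),
    destroy=lambda s: events.append(f"destroy:{s}"),
)
print(events)  # ['provision', 'train:srv-1', 'download:srv-1', 'destroy:srv-1']
```

Because `destroy` sits in the `finally` block, a crashed training run still releases the server, which is the cost-saving property the tweet describes.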
@oscarle_x
Oscar Le
4 days
Just like he turned off some servers. And, just like that, they're gone
0
0
0
@oscarle_x
Oscar Le
5 days
With auto-truncating context, Codex CLI can virtually run forever. I left it running through the night and woke up to see it had run for 6 hours, and it worked. It spent most of its time fixing errors, but at least it delivered.
0
0
0
@oscarle_x
Oscar Le
5 days
The best use case for an AI coding tool: tell it to fix CUDA and PyTorch versioning errors. Saves you hours of hating life
0
0
3
@oscarle_x
Oscar Le
6 days
If:
- you have more than one region
- user data lives in multiple regions (or, even better, no user data in the cloud)

then an outage of one region, like today's, will not cause an outage for your app
0
0
1
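A minimal sketch of that failover idea (the region names and handlers below are made up): requests fall through to the next healthy region, so a single-region outage costs latency, not availability. Real deployments usually do this with DNS or load-balancer health checks rather than a client-side loop, but the logic is the same:

```python
def call_with_failover(regions, request):
    """Try each region in turn; one region's outage doesn't take the app down."""
    last_error = None
    for region in regions:
        try:
            return region(request)
        except ConnectionError as err:
            last_error = err  # this region is down; fall through to the next
    raise RuntimeError("all regions down") from last_error

# Stub demo: us-east is "down", eu-west answers
def us_east(req):
    raise ConnectionError("region outage")

def eu_west(req):
    return f"handled {req} in eu-west"

result = call_with_failover([us_east, eu_west], "GET /profile")
print(result)  # handled GET /profile in eu-west
```

Note this only helps if user data is reachable from the surviving region, which is exactly the second condition in the tweet.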
@oscarle_x
Oscar Le
7 days
These local models are very helpful when you don't have internet (or it's expensive), e.g. on a plane or abroad.
@adrgrondin
Adrien Grondin
8 days
The latest Qwen 3 VL by @Alibaba_Qwen running on iPhone 17 Pro with MLX Qwen 3 VL brings upgraded visual understanding, recognition, and OCR capabilities without sacrificing text performance like previous models The 4B model here is close to Qwen 2.5 VL 72B in many benchmarks
0
0
0
@oscarle_x
Oscar Le
7 days
Best place to practice fine-tuning: Kaggle. There you have real-life problems, other people to learn from, and a leaderboard to tell you how good your solutions are.
0
0
0
@oscarle_x
Oscar Le
7 days
.@OpenAI You made some changes to how GPT-5 Pro answers are rendered in ChatGPT. Now I can't copy the text out in Markdown format anymore. Could you please fix it? Thank you.
0
0
2
@oscarle_x
Oscar Le
8 days
Being on X makes me want to keep up with the frontier of LLMs and AI, simply because everything I read on here makes me curious.
0
0
0
@oscarle_x
Oscar Le
8 days
Codex CLI is so smart. It can parallelize tasks by itself. Here is what it said: "Training is about 11% done and may take nearly two hours total, so I'll wait for it to finish before finalizing anything. Meanwhile, I'm considering preparing..."
0
0
0
@oscarle_x
Oscar Le
9 days
- 2024: the year of RAG
- 2025: the year of Agentic Search
- 2026: the year of Fine-tuning

Fine-tuning is getting more and more attention recently.
@simonw
Simon Willison
9 days
Anyone got a success story they can share about fine-tuning an LLM? I'm looking for examples that produced commercial value beyond what could be achieved by prompting an existing hosted model - or waiting a month for the next generation of hosted models to solve the same problem
0
0
3
@oscarle_x
Oscar Le
10 days
I tried out Vast AI and kind of like it. So cheap compared to GCP/AWS. Good for training on non-sensitive tasks. If you don't put any creds there, there should be no problem.
1
0
3
@oscarle_x
Oscar Le
10 days
When I moved from CC to Codex, I found Codex's autistic behavior so difficult to work with. Over time, I got used to it. And it's not bad at all.
0
0
2
@oscarle_x
Oscar Le
11 days
Suddenly we have both Amp Code and CTO_new focusing on totally free coding agents. Maybe they're betting that model quality will increase significantly in the next year, so that serving cost < ads revenue. Let's see
@ctodotnew
cto.new
11 days
Introducing...
0
1
7
@oscarle_x
Oscar Le
11 days
Every day, my view on AI/LLMs shifts a bit, based on the new info/updates I receive.
0
0
0