Oscar Le
@oscarle_x
Followers: 4K · Following: 17K · Media: 356 · Statuses: 5K
Cofounder & CEO of SilverAI - SnapEdit, Fitroom. 50M users. Ph.D in CS. Interested in AI, Future Tech, Startups. My blog https://t.co/ClDIpvSPIn
London
Joined July 2016
SnapEdit is in a16z's Top 50 AI Mobile Apps. https://t.co/guT2O9kdsH
https://t.co/jKfDyNf45i
#AIMobileApps #AIPhotoEditor #a16z #snapedit
Pair Codex CLI (or the SDK) with gpt-5-nano via the API; you will not be disappointed.
First time seeing gpt-5-codex bashing gpt-5-pro's code. And pro said: "yes, you are absolutely right"
Super interested in building AI Data Scientists. Seems within reach with current frontier LLMs.
DeepAnalyze🔥Agentic LLM for autonomous data science from RUC data Lab. Model: https://t.co/0ldZBmJIxs Dataset: https://t.co/dCUyv7G52e Paper: https://t.co/dlqciWTfMi ✨ Fully open-source ✨ End-to-end automation ✨ Handles all data types
This Python package is very useful for Data Scientists who work interactively with Jupyter Notebooks. Repo: https://github.com/knownsec/aipyapp
1. The AI agent writes the model training code
2. Spawns GPU servers by itself
3. SSHes in and trains models under tmux by itself
4. Downloads the model back to local and destroys the GPU server by itself
Saves money on idle GPU servers. And I don't need to touch the keyboard.
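A minimal sketch of that spawn-train-teardown loop. The provider CLI name (`gpucloud`), host, and file names are all assumptions for illustration; the real agent would use whatever cloud CLI and paths it is given, and a robust version would wrap teardown in error handling so a failed run doesn't leave a server billing.

```python
import subprocess

PROVIDER_CLI = "gpucloud"   # assumption: a hypothetical CLI that can create/destroy instances
HOST = "user@gpu-host"      # assumption: an SSH-reachable training box

def run(cmd):
    """Run a local shell command and fail loudly, so the agent notices errors."""
    subprocess.run(cmd, shell=True, check=True)

def train_and_teardown():
    run(f"{PROVIDER_CLI} create --gpu a100 --name trainer")        # spawn the GPU server
    run(f"ssh {HOST} 'tmux new -d -s train \"python train.py\"'")  # train under tmux, survives disconnects
    run(f"ssh {HOST} 'tmux wait-for train-done'")                  # assumes train.py signals 'train-done' when finished
    run(f"scp {HOST}:model.pt ./model.pt")                         # pull the model back to local
    run(f"{PROVIDER_CLI} destroy --name trainer")                  # destroy the server, stop paying
```

Running under tmux is the key detail: the training process keeps going even if the agent's SSH session drops.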
With auto-truncating context, Codex CLI can virtually run forever. I let it run through the night and woke up to find it had been going for 6 hours, and it worked. It spent most of its time fixing errors, but at least it delivered.
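The basic idea behind auto-truncating context, sketched under simple assumptions: keep only the most recent messages that fit a token budget. Real agent CLIs typically summarize dropped history rather than just discarding it, and count tokens with a proper tokenizer; the whitespace count here is a stand-in.

```python
def truncate_context(messages, budget, n_tokens=lambda m: len(m.split())):
    """Keep the most recent messages whose combined token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest to oldest
        cost = n_tokens(msg)
        if used + cost > budget:
            break                        # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```

Because the window never grows past the budget, the loop can keep running indefinitely, which is why a session can survive a 6-hour overnight run.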
The best use case for an AI coding tool: tell it to fix your CUDA and PyTorch versioning errors. Saves you hours of hating life.
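A quick diagnostic a human (or the agent) can run first, gathering the version info that usually explains a CUDA/PyTorch mismatch. This assumes PyTorch may or may not be installed and that `nvidia-smi` is on PATH when a GPU driver is present; both cases are handled.

```python
import subprocess

def cuda_report():
    """Collect the version facts that usually explain CUDA/PyTorch mismatches."""
    report = {}
    try:
        import torch
        report["torch"] = torch.__version__
        report["torch_cuda"] = torch.version.cuda        # CUDA version torch was built against
        report["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        report["torch"] = None                           # PyTorch not installed
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
            capture_output=True, text=True, check=True)
        report["driver"] = out.stdout.strip()            # installed NVIDIA driver version
    except (FileNotFoundError, subprocess.CalledProcessError):
        report["driver"] = None                          # no GPU or no driver on this machine
    return report
```

The usual failure mode is `torch_cuda` disagreeing with what the installed driver supports, which this report makes visible in one place.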
If:
- you have more than one region
- user data lives in multiple regions (or even better, no user data in the cloud)
then an outage of one region, like today's, will not cause an outage for your app.
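The routing side of that setup can be sketched as an ordered failover: prefer the primary region, fall over to any healthy secondary. Region names and the health map are illustrative; in practice the health signal comes from load-balancer health checks.

```python
def pick_region(regions, healthy):
    """Return the first preferred region that is currently healthy.

    regions -- ordered preference list, primary first
    healthy -- mapping of region name -> bool (e.g. from health checks)
    """
    for region in regions:
        if healthy.get(region, False):
            return region
    raise RuntimeError("all regions down")
```

The data side is the harder half: failover only helps if the surviving region can actually serve the user's data, hence the "data in multiple regions (or not in the cloud)" condition.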
These local models are very helpful when you don't have internet (or it's expensive), e.g. on a plane or abroad.
The latest Qwen 3 VL by @Alibaba_Qwen running on iPhone 17 Pro with MLX Qwen 3 VL brings upgraded visual understanding, recognition, and OCR capabilities without sacrificing text performance like previous models The 4B model here is close to Qwen 2.5 VL 72B in many benchmarks
The best place to practice fine-tuning: Kaggle. There you have real-life problems, other people to learn from, and a leaderboard to tell you how good your solutions are.
. @OpenAI You made some changes to the rendering of GPT-5 Pro answers in ChatGPT. Now I can't copy the text out in markdown format anymore. Could you please fix it? Thank you.
Being on X makes me want to keep up with the frontier of LLMs and AI, just because everything I read on here makes me curious.
Codex CLI is so smart. It can parallelize tasks by itself. Here is what it said: "Training is about 11% done and may take nearly two hours total, so I'll wait for it to finish before finalizing anything. Meanwhile, I'm considering preparing..."
- 2024: the year of RAG
- 2025: the year of Agentic Search
- 2026: the year of Fine-tuning
Fine-tuning is getting more and more attention recently.
Anyone got a success story they can share about fine-tuning an LLM? I'm looking for examples that produced commercial value beyond what could be achieved by prompting an existing hosted model - or waiting a month for the next generation of hosted models to solve the same problem
I tried out Vast AI and kind of like it. So cheap compared to GCP/AWS. Good for training on non-sensitive tasks: if you don't store any creds there, there should be no problem.
When I moved from CC to Codex, I found Codex's idiosyncratic behavior difficult to work with. Over time, I got used to it, and it is not bad at all.
Suddenly we have both Amp Code and CTO_new focusing on totally free coding agents. Maybe they're betting that model quality will increase so much in the next year that serving cost < ads revenue. Let's see.
Every day, my view of AI/LLMs shifts a bit, based on the new info and updates I receive.