Zain
@ZainHasan6
Followers 5K · Following 10K · Media 1K · Statuses 4K
AI builder & teacher | AI/ML https://t.co/PDkARZyitK | EngSci ℕΨ @UofT | ex-(Vector DBs, Health tech, Lecturer) | decoding AI’s future - follow for insights!
token factory
Joined August 2012
If you cannot explain something in simple terms, you don't understand it.
Was house hunting and I just gave this thing a detailed set of requirements including locations, rooms, amenities, price range, date constructed/renovated, etc. >> it emails me every morning with daily updates based on new listings. Pretty sick 😎🔥 What should I try next?
I have access to Scouts by @yutori_ai! Applied for access after reading the blog - will try it out and report back with my honest opinion. I haven't really tried any browser agents before, so I'm looking forward to learning and trying them all out.
Lots of insightful nuggets about Kimi-K2-Thinking from the deep-dive session this morning with @MinakoOikawa from the @Kimi_Moonshot team! Link below! 30 min deep dive + 30 min Q&A. We covered: > post-training for agentic tool calling > reward functions for interleaved tool
"Should I learn how to code?" My first year Eng prof. used to say: “To find the answer, you must know the answer” If you don't know how to code how will you know when the language model is producing code that helps vs hurts you? How to modify it to make it work? How do you know
my SF word bingo since moving here: non-trivial, orthogonal, tractable, directionally correct, superlinear, convex hull, possibility space, power-law, lindy, antifragile, bayesian, update priors, calibrate, steelman, mesa-optimizer, instrumentally convergent
>> It's like prepping your ingredients before you start cooking.👏
@_devJNS > do people still read coding books? Almost every day, because reading is the most effective way to soak up new knowledge for me. It's like prepping your ingredients before you start cooking. Books offer structure, since an expert put in all the effort to organize the
Love this visualization of how LLM input context has changed from RAG queries -> agent tool loops -> now reasoning agent traces. I'd recommend the full blog, which talks about improving support for hybrid-architecture language models in vLLM.
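Not from the blog itself, just a rough sketch of the shift the visualization captures: how the prompt a model sees gets assembled in each of the three regimes. All function names and message formats below are illustrative assumptions.

```python
# Illustrative sketch (not from the vLLM blog) of how input context grows
# across the three regimes: RAG -> agent tool loops -> reasoning agent traces.

def rag_context(query: str, retrieved_docs: list[str]) -> str:
    # Classic RAG: one query plus a handful of retrieved chunks, single turn.
    return "\n\n".join(retrieved_docs) + f"\n\nQuestion: {query}"

def tool_loop_context(query: str, tool_calls: list[tuple[str, str]]) -> str:
    # Agent tool loop: the context accumulates every call/result pair.
    history = [f"User: {query}"]
    for call, result in tool_calls:
        history.append(f"Tool call: {call}")
        history.append(f"Tool result: {result}")
    return "\n".join(history)

def reasoning_agent_context(query: str, steps: list[dict[str, str]]) -> str:
    # Reasoning agents: thinking traces interleaved with tool calls/results,
    # so the prefix grows much faster per step.
    history = [f"User: {query}"]
    for step in steps:
        history.append(f"Thinking: {step['thought']}")
        history.append(f"Tool call: {step['call']}")
        history.append(f"Tool result: {step['result']}")
    return "\n".join(history)

if __name__ == "__main__":
    q = "What changed in the latest release?"
    print(len(rag_context(q, ["doc A", "doc B"])))
    print(len(tool_loop_context(q, [("search('release notes')", "v0.6 notes ...")])))
    print(len(reasoning_agent_context(q, [{"thought": "I should check the changelog.",
                                           "call": "fetch('CHANGELOG.md')",
                                           "result": "## v0.6 ..."}])))
```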
Why vision-based web agents scale and generalize better than text-based DOM agents... Reminds me of the "only cameras" vs. "lidar, radar, cameras ++" sensor argument from self-driving cars all over again.
20+ page comprehensive breakdown of the inference/token economics and speed vs. cost trade-offs that we see AI-native companies dealing with as they scale! How we've seen them scale inference: >> spec decoding >> quantization >> custom kernels >> the right hardware
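For a flavor of one item on that list, here is a minimal, self-contained sketch of post-training int8 weight quantization. It is not taken from the breakdown; the symmetric per-tensor scheme and function names are my own illustrative choices.

```python
# Minimal sketch of symmetric per-tensor int8 weight quantization.
# Purely illustrative, not from the linked breakdown.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    # Map floats into [-127, 127] with a single scale per tensor.
    scale = max(float(np.abs(w).max()) / 127.0, 1e-8)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4096, 4096).astype(np.float32)
    q, scale = quantize_int8(w)
    err = np.abs(w - dequantize(q, scale)).mean()
    # 4x smaller weights (int8 vs fp32) for a small reconstruction error.
    print(f"mean abs error: {err:.5f}, bytes: {q.nbytes} vs {w.nbytes}")
```

Real deployments typically use per-channel scales and calibrate activations as well, but the size-vs-accuracy trade-off shows up even in this toy version.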
What makes a good LLM benchmark? Difficulty, diversity, utility, reproducibility, and no data leakage! New post👇
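To make the data-leakage point concrete, here is a hedged sketch of a crude contamination check: flag benchmark items that share long n-grams with the training corpus. The tokenizer, n-gram size, and function names are illustrative assumptions, not anything from the post.

```python
# Crude data-leakage check via exact n-gram overlap (illustrative only).
import re

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    toks = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def flag_leaked(benchmark_items: list[str],
                training_docs: list[str],
                n: int = 8) -> list[int]:
    # Flag any benchmark item sharing at least one n-gram with training data.
    train_grams: set[tuple[str, ...]] = set()
    for doc in training_docs:
        train_grams |= ngrams(doc, n)
    return [i for i, item in enumerate(benchmark_items)
            if ngrams(item, n) & train_grams]

if __name__ == "__main__":
    bench = ["What is the capital of France? Paris has been the capital of France since 987."]
    train = ["Paris has been the capital of France since 987, when Hugh Capet made it his seat."]
    print(flag_leaked(bench, train))  # -> [0]: the benchmark item overlaps training text
```

Real decontamination pipelines go further (normalization, fuzzy matching, paraphrase detection), but even an exact-overlap pass like this catches verbatim copies.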