Tushar Goyal
@tushowergoyal
Followers
104
Following
215
Media
12
Statuses
80
currently in blr 📍ai engineer @theWareIQ (YC S'20) | resident @lossfunk | csai @PlakshaUniv
Bangalore, Karnataka
Joined June 2021
picked up The 80/20 Principle on a walk in Indiranagar today. Anyone here actually tested the 80/20 rule in real life and seen it work? With everything getting commoditized, is differentiation still the edge? Thoughts…
0
0
1
@tushowergoyal gave LLMs existential crises and explained what it'll take to train LLMs to reason in a continuous latent space 🧠 💭 🤔 @lossfunk 7/n
0
1
6
one thing off the bucket list for 2025 - p̶r̶e̶s̶e̶n̶t̶i̶n̶g̶ ̶a̶t̶ ̶a̶ ̶t̶e̶c̶h̶ ̶c̶o̶n̶f̶e̶r̶e̶n̶c̶e̶ @lossfunk great event!
3
1
28
How much does the $20 cursor pro plan actually include? Asking as someone already $25 over 😅
0
0
1
AI startups are growing so fast because they’re winning big enterprise contracts that would usually go automatically to their established competitors. But those competitors are full of grumpy mid-career engineers who don’t believe in AI, so they can’t even build the products.
101
78
977
if you are smart, don’t be rude. in the long run, nobody will care about your intelligence if you are not easy to work with.
38
35
858
best builders I have met aren't loud and don't try to stand out in conversations. they just build & ship. they have no fluff to hide behind.
0
0
3
after 4.5 years with my m1 macbook pro, i heard its fans spin up for the first time. best machine. (android studio is the culprit fyi)
0
0
2
my hunch is we might see a flood of consumer products with on-board AI... any cool products already out there that i'm missing?
1
0
2
>> teaching robots to see + understand language + act usually needs huge AI models and tons of robot data
>> this paper finds the minimal “vision + language cues” robots actually need
>> uses a small model + a smart Policy module → trains in 8 hrs on a single GPU → matches SOTA
1
0
1
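A rough sketch of what a "small model + policy module" setup can look like: frozen vision and language features feeding a tiny action head. The class name, dimensions, and the MLP itself are my own placeholders, not the paper's architecture.

```python
# Illustrative only: a tiny policy head on top of frozen vision + language
# features, in the spirit of "small model + smart policy module".
# Dimensions and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class TinyPolicyHead(nn.Module):
    def __init__(self, vision_dim=512, text_dim=512, hidden_dim=256, action_dim=7):
        super().__init__()
        # fuse the two cue vectors, then map to a continuous action
        self.net = nn.Sequential(
            nn.Linear(vision_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, vision_feat, text_feat):
        fused = torch.cat([vision_feat, text_feat], dim=-1)
        return self.net(fused)

# usage with random stand-ins for frozen encoder outputs
policy = TinyPolicyHead()
v = torch.randn(4, 512)   # e.g. CLIP-style image features
t = torch.randn(4, 512)   # e.g. instruction embedding
actions = policy(v, t)    # (4, 7) continuous action vector
print(actions.shape)
```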
turns out the multimodal vibes are hiding in the middle layers… like why is the halfway point smarter than the end?? https://t.co/hsl9TqxaKr
arxiv.org: Vision-Language-Action (VLA) models typically bridge the gap between perceptual and action spaces by pre-training a large-scale Vision-Language Model (VLM) on robotic data.
1
0
1
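The "why is the halfway point smarter than the end" question is easy to poke at by dumping per-layer hidden states. A hedged sketch with Hugging Face transformers; the model name and the crude norm statistic are stand-ins, not the linked paper's probe.

```python
# Illustrative per-layer hidden-state dump; the model choice and the norm
# statistic are placeholders, not the method from the linked paper.
import torch
from transformers import AutoTokenizer, AutoModel

name = "gpt2"  # stand-in; swap in the multimodal model you care about
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

inputs = tok("a robot arm picks up the red block", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# hidden_states: tuple with one tensor per layer (plus the embedding layer)
for i, h in enumerate(out.hidden_states):
    # crude per-layer statistic; a real probe would train a linear classifier
    print(f"layer {i:02d} mean activation norm: {h.norm(dim=-1).mean().item():.2f}")
```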
@PrathameshD_8 @r_sindhoora @rs545837 @KDvaipayan 5/ Tushar Goyal (@tushowergoyal) previously built faster and cheaper Virtual Try-On models at Unoffended Labs (https://t.co/YvNRvI6zAQ). Over the next 6–8 weeks at LossFunk, he’s exploring HRM-inspired architectures to figure out his next research direction.
2
2
24
Confused about how to pick a research problem? Read our latest blog post 👇
4
22
308
excited to attend @lossfunk batch 6 residency in blr! still wondering: what area should i pick up? i wanna work at the level of math and pytorch, and hierarchical reasoning models have caught my eye recently... any dirty experiments i can run?
0
0
1
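One cheap experiment in that direction, purely as an assumption-laden sketch and not the HRM paper's actual architecture: a two-timescale recurrence where a slow "planner" cell ticks every k steps above a fast "worker" cell.

```python
# Toy two-timescale recurrence: a slow GRU cell updated every k steps and a
# fast GRU cell updated every step. Illustrative starting point only; this is
# not the Hierarchical Reasoning Model architecture.
import torch
import torch.nn as nn

class TwoTimescaleNet(nn.Module):
    def __init__(self, in_dim=16, hid=64, k=4):
        super().__init__()
        self.k = k
        self.fast = nn.GRUCell(in_dim + hid, hid)  # sees input + slow state
        self.slow = nn.GRUCell(hid, hid)           # sees only the fast state
        self.readout = nn.Linear(hid, 1)

    def forward(self, x):                          # x: (batch, time, in_dim)
        b, t, _ = x.shape
        h_fast = x.new_zeros(b, self.fast.hidden_size)
        h_slow = x.new_zeros(b, self.slow.hidden_size)
        for step in range(t):
            h_fast = self.fast(torch.cat([x[:, step], h_slow], dim=-1), h_fast)
            if (step + 1) % self.k == 0:           # slow module ticks every k steps
                h_slow = self.slow(h_fast, h_slow)
        return self.readout(h_fast)

model = TwoTimescaleNet()
y = model(torch.randn(8, 12, 16))                  # (8, 1)
print(y.shape)
```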
why do i have to watch this retarded ad every time I come to YT... be10x
0
0
0
v0 by vercel is good for building basic crud apps on supabase, but the moment you need anything custom beyond CRUD operations, it starts messing up the project really badly. (3/n)
0
0
0
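For reference, "basic CRUD on supabase" is roughly this level of plumbing, shown here with the Python supabase client rather than the Next.js/TypeScript code v0 actually scaffolds; the project URL, key, and "todos" table are placeholders.

```python
# Plain CRUD against Supabase with the Python client (supabase-py).
# URL, key, and the "todos" table are made-up placeholders.
from supabase import create_client

supabase = create_client("https://your-project.supabase.co", "your-anon-key")

# create
supabase.table("todos").insert({"title": "ship it", "done": False}).execute()

# read
rows = supabase.table("todos").select("*").eq("done", False).execute()
print(rows.data)

# update
supabase.table("todos").update({"done": True}).eq("title", "ship it").execute()

# delete
supabase.table("todos").delete().eq("title", "ship it").execute()
```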
recently built a full stack app with cursor, with a proper backend, next.js frontend and cloud deployment. here's my experience. (2/n)
1
0
0