Prabhdeep @PrabhdeepS_

Followers: 182 · Following: 1K · Media: 69 · Statuses: 282

I research AI at @MIT CSAIL & @Harvard's Ophthalmology AI Lab. Studying Math @UofT.

Canada
Joined September 2023
Prabhdeep @PrabhdeepS_ · 27 days
How did I get into @HackTheNorth after 3 years of rejections? 😭 If anyone wants to team up, let me know!
Prabhdeep @PrabhdeepS_ · 1 month
Had a quick little side-quest at NASA 🚀
Prabhdeep @PrabhdeepS_ · 1 month
I was in Toronto, and this happened.
Prabhdeep @PrabhdeepS_ · 1 month
Took 2 all-nighters to build an MCP (Model Context Protocol)-powered car. Don't regret a single second of it.
Prabhdeep @PrabhdeepS_ · 1 month
Guess who won 2nd overall at Hack The 6ix 🥳
Prabhdeep @PrabhdeepS_ · 3 months
Here is a chopped video of me explaining the app I've been working on for the past month, Rapix. P.S. I only need 6 more Android beta testers before I can upload it to the Play Store. Reply if interested 🙏
Prabhdeep @PrabhdeepS_ · 3 months
Side note: I need 12 beta testers before I can launch the app. Feel free to DM me if you'd be open to it. It's barely any work :)
Prabhdeep @PrabhdeepS_ · 3 months
Does 'Prabh Studio' go hard? 👀
Prabhdeep @PrabhdeepS_ · 4 months
We just won Snowball iCE microphones @ Canada's largest high school hackathon (@jam_hacks) 🥳. Welp, thanks for the birthday gift, JamHacks :)
Prabhdeep @PrabhdeepS_ · 4 months
Hyped for @jam_hacks at @UWaterloo 💪🔥. cc: @evelynhannah_ @virkvarjun
Prabhdeep @PrabhdeepS_ · 4 months
I'm at @UWaterloo with @virkvarjun from the 16th to the 18th. Hmu if you wanna meet up.
Prabhdeep @PrabhdeepS_ · 4 months
8/ Curious? I wrote an in-depth article on this with mathematical explanations. I'm building more AI projects like this; follow me for more :) And if you're ready to tinker, the GitHub repo linked above has everything you need.
medium.com: I built a tiny Vision Transformer (ViT) that still works on CIFAR-10 and fits on low-end hardware.
Prabhdeep @PrabhdeepS_ · 4 months
7/ Why bother? Compression opens doors everywhere: object detection on autonomous drones, diagnostic cues on portable medical scanners, battery-friendly IoT cameras that only wake the network when something matters, and the list goes on. Compact transformers = powerful, …
Prabhdeep @PrabhdeepS_ · 4 months
6/ INT8 quantization: Weights and activations are mapped from 32-bit floats to 8-bit integers using per-channel scale and zero-point values learned during a brief calibration pass. This cuts model size 4x and speeds up integer arithmetic on CPUs, while keeping performance …
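To make the mapping in 6/ concrete, here is a minimal NumPy sketch of per-channel affine INT8 quantization for a single weight matrix. This is an illustration of the general technique, not the actual Pocket-ViT code: the function names and shapes are assumptions, and it covers weights only (the calibration pass the tweet mentions for activations is omitted).

```python
import numpy as np

def quantize_per_channel(w: np.ndarray):
    """Affine INT8 quantization of a (out_channels, in_features) weight
    matrix, with one (scale, zero_point) pair per output channel."""
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 255.0                # spread each channel's range over 256 levels
    scale = np.where(scale == 0, 1e-8, scale)      # guard against constant channels
    zero_point = np.round(-128.0 - w_min / scale)  # so the real value w_min maps to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats: w ≈ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Quick check: reconstruction error is on the order of one quantization step.
w = np.random.randn(384, 1536).astype(np.float32)  # a DeiT-Small-sized MLP weight
q, s, zp = quantize_per_channel(w)
print(np.abs(dequantize(q, s, zp) - w).max(), s.max())
```

Storing q as int8 plus one (scale, zero_point) pair per channel is where the ~4x size cut over float32 comes from.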
Prabhdeep @PrabhdeepS_ · 4 months
5/ Structured pruning: After training, we measure how much each attention head and MLP unit contributes. Units with the lowest L₂-norm scores are removed in whole chunks (about 30% of heads, 25% of MLP channels). A short fine-tune lets the remaining units adjust, giving us a …
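A short PyTorch sketch of the L₂-norm scoring in 5/, applied to one transformer MLP block (fc1 → activation → fc2). The helper name and the 75% keep ratio are illustrative assumptions, not Pocket-ViT's exact code; the point is that whole channels are sliced out, so the pruned layers are genuinely smaller rather than merely masked.

```python
import torch
import torch.nn as nn

def prune_mlp_channels(fc1: nn.Linear, fc2: nn.Linear, keep_frac: float = 0.75):
    """Structured pruning of an MLP block: score each hidden channel by the
    L2 norm of its fc1 weights and keep only the top `keep_frac` fraction."""
    scores = fc1.weight.norm(p=2, dim=1)              # one score per hidden channel
    n_keep = int(keep_frac * scores.numel())
    keep = scores.topk(n_keep).indices.sort().values  # indices of surviving channels

    # Slice whole rows of fc1 (and its bias) and whole columns of fc2.
    new_fc1 = nn.Linear(fc1.in_features, n_keep)
    new_fc2 = nn.Linear(n_keep, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(fc1.weight[keep])
        new_fc1.bias.copy_(fc1.bias[keep])
        new_fc2.weight.copy_(fc2.weight[:, keep])
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2

# Example: prune 25% of the hidden channels of a DeiT-Small-sized MLP.
fc1, fc2 = nn.Linear(384, 1536), nn.Linear(1536, 384)
fc1, fc2 = prune_mlp_channels(fc1, fc2, keep_frac=0.75)
print(fc1.weight.shape, fc2.weight.shape)  # (1152, 384) and (384, 1152)
```

The short fine-tune the tweet mentions would then run on the sliced model as usual, letting the surviving channels compensate.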
Prabhdeep @PrabhdeepS_ · 4 months
4/ Knowledge distillation: Imagine a pro photographer teaching a beginner. Instead of only saying “That’s a cat,” the pro also explains how sure they are and what features hint at “maybe a fox.” A smaller ViT is trained to match both the class probabilities and the …
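The "how sure they are" intuition in 4/ is exactly what the standard (Hinton-style) distillation loss captures: cross-entropy on the hard label plus a temperature-softened KL term against the teacher's probabilities. A minimal PyTorch sketch; the temperature and mixing weight are assumed defaults, not values from the thread.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a KL term pulling the student's
    softened probabilities toward the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),  # student log-probs at temperature T
        F.softmax(teacher_logits / T, dim=-1),      # teacher soft targets
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps the gradient scale comparable across temperatures
    return alpha * hard + (1.0 - alpha) * soft

# Example: a batch of 8 CIFAR-10 logits from a teacher and a student.
student_logits, teacher_logits = torch.randn(8, 10), torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```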
Prabhdeep @PrabhdeepS_ · 4 months
3/ I was able to reduce the size of the model without greatly affecting accuracy using three techniques:
- Knowledge distillation (learn from a bigger model)
- Structured pruning (delete lazy attention heads & MLP units)
- INT8 quantization (store thoughts in 8-bit integers)
Prabhdeep @PrabhdeepS_ · 4 months
2/ Therefore, I built Pocket-ViT: a Vision Transformer slimmed from DeiT-Small’s 22M parameters down to ~5M, then squashed to a 1.3 MB INT8 file, small enough to ship inside a mobile app update.
github.com: Compressing a large ViT into a ≤5M-parameter “tiny” model that still reaches strong accuracy on CIFAR-10 — prabhxyz/pocket-vit
Prabhdeep @PrabhdeepS_ · 4 months
1/ Most of our cameras are on our phones, drones, or Raspberry Pi boards. To make use of transformer-level accuracy there, we need to compress these models significantly so they work effectively on 'pocket-size' hardware.
Prabhdeep @PrabhdeepS_ · 4 months
Big Vision Transformers (ViTs) CRUSH accuracy benchmarks, but they also crush your RAM, battery, and download quota. We need smaller ViTs for edge devices: think drones, phones, even Raspberry Pi. My goal: keep the ViT accuracy, reduce the size. Starting from DeiT-Small (≈ …