Ayush
@TensorThrottleX
Followers: 415 · Following: 5K · Media: 318 · Statuses: 1K
Crafting objective proof from vast and ambiguous datasets. Validating a chosen path to enable confident, decisive action. #100DaysOfML
nowhere
Joined April 2025
Day 0 – The 100-Day Raw Grind Begins: starting tomorrow, @TensorThrottleX, @BinaryBlaze16, & @_n1nj07 are going all in. DSA, development, and daily wins. No excuses. No shortcuts. Every line of code, every problem solved, every day counts.
Transformers are often better than classical ML models! But there is one field where classical ML still dominates most of the time: Tabular ML. Let's break this down further: > Tabular ML, as the name suggests, deals with CSV/tabular data. > There are various DL…
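For concreteness, a minimal sketch of the classical-ML baseline that still dominates tabular data: a gradient-boosted tree model on a CSV. The file name `data.csv` and the `target` column are hypothetical placeholders, and numeric feature columns are assumed.

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# "data.csv" and "target" are hypothetical placeholders; assumes numeric
# features (the estimator tolerates missing values natively).
df = pd.read_csv("data.csv")
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees handle mixed scales and feature interactions with
# almost no tuning, which is why they remain the tabular baseline to beat.
model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```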
Day 65: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 180: DataScience Journey. Training VGG16 from scratch on the Cats vs Dogs dataset (following the classic ICLR 2015 VGG paper): implementing and running VGG16 completely from scratch, no pretrained weights, exactly as described in Sections 2.1 (Architecture) and 2.2 (Configurations) of…
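As a companion to this log entry, a minimal PyTorch sketch of the configuration-D (VGG16) stack from Table 1 of the paper, assuming 224×224 RGB inputs and two output classes for Cats vs Dogs; everything beyond the paper's layer spec (dropout rate, class count) is illustrative.

```python
# Minimal VGG16 (configuration D) sketch in PyTorch. Numbers in the config
# are output channels; 'M' marks a 2x2 max-pool with stride 2.
import torch
import torch.nn as nn

VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']

def make_features(cfg):
    layers, in_ch = [], 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            # All convs are 3x3 with padding 1, as in Section 2.1.
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)

class VGG16(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = make_features(VGG16_CFG)
        # After five pools, 224 -> 7, so the flattened size is 512*7*7.
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, num_classes))

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

print(VGG16()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```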
Day 64: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 179: DataScience Journey. Implementing VGGNet from scratch on Cats vs Dogs (inspired by the ICLR 2015 VGG paper). VGGNet became iconic for proving a simple insight: depth + small 3×3 filters = massive accuracy gains, without fancy tricks. Implemented the VGG architecture from…
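A quick check of why the stacked small-filter design wins: three 3×3 conv layers cover the same 7×7 receptive field as one 7×7 layer, but with extra nonlinearities in between and notably fewer weights (the comparison made in the paper's discussion). The channel count C below is an arbitrary example.

```python
# Weights in a stack of n KxK conv layers with C channels in and out,
# ignoring biases: n * (K*K*C*C).
def conv_params(n_layers, k, channels):
    return n_layers * (k * k * channels * channels)

C = 256
print(conv_params(3, 3, C))  # 1,769,472 weights for three 3x3 layers
print(conv_params(1, 7, C))  # 3,211,264 weights for one 7x7 layer
# Same 7x7 receptive field, ~45% fewer parameters for the 3x3 stack.
```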
Day 63: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 178: DataScience Journey. VGGNet results & breakthrough over AlexNet (ICLR 2015). VGGNet revolutionized large-scale image recognition by showing how depth alone could drastically improve performance. Compared to AlexNet's 8-layer design, VGG's 16–19 layer configurations with…
Day 62: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 177: DataScience Journey. The VGGNet paper for Large-Scale Image Recognition (ICLR 2015): - Depth & Simplicity: proves that small 3×3 filters stacked deep (up to 19 layers) greatly boost accuracy. - Consistency in Design: every conv layer uses the same filter size (3×3),…
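A tiny sketch of the size arithmetic that this uniform design buys: with every conv fixed at 3×3, stride 1, padding 1, spatial size is preserved (out = (in - 3 + 2)/1 + 1 = in), so only the five 2×2 stride-2 max-pools change resolution.

```python
# With "same" 3x3 convs throughout, resolution only changes at the pools:
# 224 is halved five times down to the 7x7 grid feeding the FC layers.
size = 224
for stage in range(5):
    size //= 2  # each 2x2/stride-2 max-pool halves height and width
    print(f"after pool {stage + 1}: {size}x{size}")
# after pool 5: 7x7 -> flattened into the 512*7*7 fully connected input
```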
Day 61: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 176: DataScience Journey → Trained AlexNet on CIFAR-10: loss dropped 0.63 → 0.35, accuracy hit 87%/81% in 10 epochs. → My cat got tagged as a frog; classic DL moment when textures fool vision! → Curves show strong convergence; next up: smarter generalization. #DeepLearning
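A minimal sketch of the kind of training loop behind numbers like these, using torchvision's stock AlexNet as a stand-in model; the optimizer settings and normalization constants are illustrative, not the exact setup used above.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# CIFAR-10 images are 32x32; upsample to the 224x224 AlexNet expects.
transform = T.Compose([
    T.Resize(224),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.alexnet(num_classes=10)  # stand-in AlexNet
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):
    correct = total = 0
    for images, labels in loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        correct += (outputs.argmax(1) == labels).sum().item()
        total += labels.size(0)
    # last-batch loss plus running train accuracy for the epoch
    print(f"epoch {epoch + 1}: loss {loss.item():.2f}, acc {correct / total:.2%}")
```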
Day 60: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 175: DataScience Journey → AlexNet kicked off the CNN revolution: stacked conv layers extract spatial hierarchies from raw pixels. → Trained on CIFAR-10, accuracy rises each epoch; visualized loss & accuracy curves confirm convergence. → From basic edges → textures → object shapes. #DataScience
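For the curves mentioned above, a small plotting helper, assuming per-epoch `losses` and `accs` lists were collected during a loop like the one sketched earlier; no real values are shown, this is only the visualization scaffold.

```python
import matplotlib.pyplot as plt

def plot_curves(losses, accs):
    """Plot per-epoch training loss and accuracy side by side."""
    epochs = range(1, len(losses) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
    ax1.plot(epochs, losses)
    ax1.set_xlabel("epoch"); ax1.set_ylabel("training loss")
    ax2.plot(epochs, accs)
    ax2.set_xlabel("epoch"); ax2.set_ylabel("accuracy")
    fig.tight_layout()
    plt.show()
```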
Day 59: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 174: DataScience Journey. AlexNet: ReLU Activation: max(0, x), roughly 6x faster training than tanh/sigmoid! Local Response Norm: stabilizes & speeds up training (BatchNorm, the modern replacement, came later). Dropout: randomly "drops" neurons to fight overfitting. Data Aug: flip, rotate, zoom; turn 1 image into 100s for robust training.
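A short PyTorch sketch of the same tricks, with illustrative settings: augmentation multiplies the effective dataset, ReLU and LRN sit in the conv blocks, and dropout regularizes the classifier head.

```python
import torch.nn as nn
import torchvision.transforms as T

# Data augmentation: each epoch sees a different random variant of every image.
augment = T.Compose([
    T.RandomHorizontalFlip(),                     # flip
    T.RandomRotation(15),                         # rotate
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),   # zoom-style crop
    T.ToTensor(),
])

# First conv block with ReLU and AlexNet-style Local Response Normalization.
conv_block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
    nn.ReLU(inplace=True),                        # max(0, x)
    nn.LocalResponseNorm(size=5),                 # LRN, per the 2012 paper
    nn.MaxPool2d(kernel_size=3, stride=2),
)

# Dropout randomly zeroes neurons during training to fight overfitting.
head = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(4096, 1000))
```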
Day 58: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 173: Data Science Journey. Today, we're examining the seminal 2012 paper, "ImageNet Classification with Deep Convolutional Neural Networks," by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. This model achieved a top-5 error rate of 15.3% on the ImageNet challenge,
surpassing the previous state of the art of 26.2%, and demonstrated that deep CNNs could be effectively scaled using GPUs. We'll derive the architecture layer by layer, focusing on precise dimensions and calculations from the paper, reconstructing it methodically. #ML #DataScience
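Following that plan, a sketch of the layer-by-layer dimension walk using the standard conv arithmetic out = floor((in - k + 2p) / s) + 1. Note the well-known quirk that the published numbers work out from a 227×227 input even though the paper states 224×224.

```python
# Spatial-size walk through AlexNet's conv/pool stack.
def out_size(n, k, s=1, p=0):
    return (n - k + 2 * p) // s + 1

n = 227                      # 227x227 makes the published dims consistent
n = out_size(n, 11, 4)       # conv1 11x11, stride 4      -> 55
n = out_size(n, 3, 2)        # pool1 3x3, stride 2        -> 27
n = out_size(n, 5, 1, 2)     # conv2 5x5, pad 2           -> 27
n = out_size(n, 3, 2)        # pool2                      -> 13
n = out_size(n, 3, 1, 1)     # conv3 3x3, pad 1           -> 13
n = out_size(n, 3, 1, 1)     # conv4                      -> 13
n = out_size(n, 3, 1, 1)     # conv5                      -> 13
n = out_size(n, 3, 2)        # pool5                      -> 6
print(n)  # 6 -> flattened 256*6*6 = 9216 into the FC layers
```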
Day 57: Log your grind. Share the gains. Prove your sweat counts. #100DaysRawGrind
Day 172: Data Science Journey - DBSCAN: forms clusters by density, marking core, border & noise points, unlike centroid-based K-Means. - Params: eps = neighborhood radius, min_samples = density threshold; tweak them to reshape clusters. - Precision: auto-finds clusters & outliers, no k needed.
- Insight: from a 6-point demo to concentric circles, it captures complex non-linear patterns. #DataScience #ML
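A minimal scikit-learn sketch of the concentric-circles case above, where centroid-based K-Means fails but DBSCAN separates the rings; the `eps` and `min_samples` values here are illustrative, not tuned.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_circles

# Two concentric rings: no centroid can separate them, but density can.
X, _ = make_circles(n_samples=500, factor=0.5, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# Label -1 marks noise points; other labels are density-connected clusters.
print(sorted(set(labels)))  # e.g. [0, 1]: both rings found without choosing k
```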