Rajend
@webendrajend
Followers: 37 | Following: 831 | Media: 7 | Statuses: 105
Web dev + AI Engineer (in progress) | Learning every day to become a better dev | Code to help others | Current Goal: Web + AI
Joined March 2018
Despite the structural issues, the content is fantastic. A cleaned-up PDF with proper chapter order, numbering, and missing sections added would make the reading experience as smooth and intuitive as the concepts you explain. Thank you for writing this book. @subhashchy
2. After the Docker chapter, the entire build-up suggests Kubernetes should come next, but instead two unrelated chapters appear before Kubernetes. 3. Chapters appear out of order: Chapter 13, then Chapter 16 (p. 238), Chapter 15 (p. 245), the chapters in between are skipped, Chapter 19 (p. 269), Chapter 16 again (p. 280).
1. Duplicate chapter numbers: two chapters are labelled "Chapter 10" (Chapter 10: Docker - The Shipping Container Revolution, and Chapter 10: The Smart Clerk - Search).
I'm on pg 252 of 'The Accidental CTO'. The content is brilliant, with great analogies, but the structure in the second half becomes confusing and messy, making it difficult to read and breaking my flow. Here are all the issues I found in the current PDF: @subhashchy
Day 17/100 – #100DaysOfML - Bias: model too simple, misses patterns (underfitting) - Variance: model reacts too much to small data changes (overfitting) - BUVO: Bias Underfitting, Variance Overfitting
Done the L1 & L2 regularization exercise from @codebasicshub. #MachineLearning #AI
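A minimal scikit-learn sketch of the bias/variance split above: a too-simple model underfits and a too-flexible one overfits the same data. The synthetic dataset, polynomial degrees, and noise level are assumptions for illustration, not the @codebasicshub exercise.

```python
# Underfitting (high bias) vs overfitting (high variance) on a noisy quadratic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 + rng.normal(0, 0.5, 60)   # quadratic signal + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 2, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree={degree:2d} | train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.3f}"
          f" | test MSE={mean_squared_error(y_te, model.predict(X_te)):.3f}")

# degree=1 : both errors stay high -> high bias, underfitting.
# degree=15: train error tiny, test error larger -> high variance, overfitting.
```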
Underfitting causes: 1. model too simple 2. bad feature engineering 3. not trained enough (too few epochs) 4. excessive regularization. Fixes: better features, more training, a more complex model.
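As a hedged illustration of cause 4 above (excessive regularization), the sketch below shows an oversized L2 penalty dragging a model into underfitting; the data and alpha values are invented, not from the post.

```python
# Excessive regularization -> underfitting: a huge alpha shrinks every
# coefficient toward zero, so even the training score collapses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([3.0, -2.0, 1.5, 0.0, 0.5]) + rng.normal(0, 0.3, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for alpha in (1.0, 1e5):
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    print(f"alpha={alpha:>8} | train R2={model.score(X_tr, y_tr):.3f}"
          f" | test R2={model.score(X_te, y_te):.3f}")
# With alpha=1e5 the model is effectively too simple for the data: underfitting.
```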
Overfitting causes: 1. too many features 2. poor model choice 3. too little data 4. no validation 5. no regularization. Fixes: better features, more data, k-fold cross-validation, apply regularization.
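A small sketch of the "k-fold + regularization" fixes above, assuming scikit-learn; the dataset shape, fold count, and alpha are illustrative assumptions.

```python
# Few samples + many features is overfit-prone; k-fold CV exposes it,
# and an L2 penalty (Ridge) usually helps the out-of-fold score here.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 30))                 # 60 rows, 30 features
y = X[:, 0] * 2.0 + rng.normal(0, 0.5, 60)    # only feature 0 matters

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("plain linear", LinearRegression()), ("ridge (L2)", Ridge(alpha=1.0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name:13s} mean CV R2 = {scores.mean():.3f}")
# Cross-validation averages performance over held-out folds, so an overfit
# model can't hide behind a flattering single train score.
```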
Day 16/100 - #100DaysOfML - Learnt how L1 (Lasso) & L2 (Ridge) regularization help reduce overfitting by penalizing large coefficients. - Also revised causes & fixes for overfitting/underfitting and practiced labs for it. #MachineLearning #AI
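A minimal scikit-learn sketch of the L1 vs L2 point above; the synthetic data and alpha values are assumptions for illustration, not the actual lab.

```python
# L1 (Lasso) tends to zero out irrelevant coefficients; L2 (Ridge) shrinks
# all of them but rarely to exactly zero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 6))
# Only the first two features actually matter.
y = 4.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.2, 200)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: sum of |coef|
ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty: sum of coef^2

print("Lasso coefs:", np.round(lasso.coef_, 2))  # irrelevant coefs driven to 0
print("Ridge coefs:", np.round(ridge.coef_, 2))  # all coefs shrunk, none exactly 0
```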
Missed posting for a few days, but I'm back on track! This week I learnt: - Linear Regression - Multiple Linear Regression - Polynomial Regression. Today I completed the exercise & lab for Polynomial Regression, thanks to the #campusx Linear Regression playlist. #MachineLearning #AI #codebasics
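For the middle item above (multiple linear regression), a tiny hedged sketch with invented housing-style numbers; the features, values, and prices are assumptions, not from the post or the playlist.

```python
# Multiple linear regression: one learned weight per feature plus an intercept.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: area (sq ft), bedrooms -> price (arbitrary units).
X = np.array([[1000, 2], [1500, 3], [1800, 3], [2400, 4], [3000, 4]])
y = np.array([200.0, 290.0, 340.0, 445.0, 540.0])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)      # weight for area, weight for bedrooms
print("intercept:", model.intercept_)
print("predicted price for 2000 sq ft, 3 bd:", model.predict([[2000, 3]])[0])
```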
Day 14/100 – #100DaysOfML Today: Practiced 1-0 (One-Hot) Encoding for nominal data. Reduced multicollinearity by removing one dummy column. Trained & evaluated the model after encoding. @codebasicshub
#MachineLearning #AI
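A hedged pandas/scikit-learn sketch of the one-hot encoding plus dropped dummy column described above; the 'town' column, areas, and prices are made up for illustration.

```python
# One-hot encode a nominal column and drop one dummy to avoid the
# dummy variable trap (perfect multicollinearity among the dummies).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "town":  ["A", "A", "B", "B", "C", "C"],
    "area":  [1000, 1500, 1200, 1600, 1100, 1700],
    "price": [200, 290, 230, 300, 260, 360],
})

# drop_first=True removes one dummy; the dropped category becomes the baseline.
X = pd.get_dummies(df[["town", "area"]], columns=["town"], drop_first=True)
y = df["price"]

model = LinearRegression().fit(X, y)
print(X.columns.tolist())            # ['area', 'town_B', 'town_C'] -- 'town_A' is baseline
print(model.coef_, model.intercept_)
```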
Day 13/100 – #100DaysOfML Today I learnt: Applied MSE, MAE, and R2 Score to evaluate model performance. Multicollinearity: when features are highly correlated. The Dummy Variable Trap in 1-0 Encoding can cause it; fix: remove one dummy column. @codebasicshub
#MachineLearning #AI
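A minimal sketch of the three metrics mentioned above, assuming scikit-learn; the true/predicted values are invented for illustration.

```python
# MSE, MAE, and R2 on toy predictions.
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.4, 6.5, 9.3]

print("MSE:", mean_squared_error(y_true, y_pred))   # penalizes large errors quadratically
print("MAE:", mean_absolute_error(y_true, y_pred))  # average absolute error
print("R2 :", r2_score(y_true, y_pred))             # 1.0 = perfect, 0 = predicting the mean
```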
Day 12/100 – #100DaysOfML Learnt about the R2 score and did a little lab. PS: gonna resume my 100-day journey from today. Also, I passed the Oracle Foundation AI cert exam!
#MachineLearning #AI #oraclecertificationprogram
I'm taking a break from ML today to recover from a cold and come back refreshed tomorrow. #100DaysOfML #AI
Day 11/100 – #100DaysOfML Learnt why MSE > MAE for GD. MSE (x²): - best when few outliers - smooth, differentiable (f'(x) = 2x) - GD finds minima easily. MAE (|x|): - better with many outliers - not smooth, undefined at 0 - harder to optimize. #MachineLearning #AI
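A rough numpy sketch of the derivative point above: the MSE gradient tapers smoothly toward the minimum, while the MAE gradient is a constant-magnitude sign and is undefined at exactly zero. The error values are arbitrary.

```python
# Per-sample gradients of the two losses with respect to the error e = y_pred - y_true.
import numpy as np

errors = np.array([-4.0, -1.0, -0.1, 0.1, 1.0, 4.0])

mse_grad = 2 * errors        # d/de of e^2 : smooth, shrinks to 0 near the minimum
mae_grad = np.sign(errors)   # d/de of |e| : jumps from -1 to +1 at e = 0

print("error   :", errors)
print("MSE grad:", mse_grad)
print("MAE grad:", mae_grad)
```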
Day 10/100 – #100DaysOfML Learnt: Gradient Descent: finds the best-fit line by adjusting slope & intercept to reach the global minimum. Manually found the best-fit line using MSE and partial derivatives. Min-max scaling: bring features into the 0–1 range. #MachineLearning #AI
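A hand-rolled sketch of the gradient-descent and min-max-scaling ideas above, in plain numpy; the data, learning rate, and iteration count are assumptions, not the actual lab.

```python
# Fit y = m*x + b by gradient descent on MSE, using the partial derivatives
# dMSE/dm and dMSE/db mentioned in the post.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.0, 7.0, 9.0, 11.0, 13.0])   # true line: y = 2x + 3

m, b = 0.0, 0.0
lr, n = 0.01, len(x)

for _ in range(10000):
    y_pred = m * x + b
    dm = (-2 / n) * np.sum(x * (y - y_pred))   # partial derivative w.r.t. slope
    db = (-2 / n) * np.sum(y - y_pred)         # partial derivative w.r.t. intercept
    m -= lr * dm
    b -= lr * db

print(f"m ~ {m:.3f}, b ~ {b:.3f}")             # should approach 2 and 3

# Min-max scaling, as mentioned above: maps each feature into the 0-1 range.
x_scaled = (x - x.min()) / (x.max() - x.min())
print("min-max scaled x:", x_scaled)
```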
My system got corrupted, so everything is delayed (currently fixing it), and exams are going on. Hopefully I will post my ML progress from tomorrow or soon after the exams.
Day 9/100 – #100DaysOfML I missed yesterday because of a laptop issue. Today I learnt: Confidence Intervals! - A CI gives a range where the true population parameter likely lies: x̄ ± Z · (σ/√n) - Wider CI → more uncertainty, narrower CI → more precise. #MachineLearning #AI
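A small sketch of the CI formula above, x̄ ± Z·(σ/√n), assuming scipy; the sample statistics and the 95% level are illustrative assumptions.

```python
# 95% confidence interval for a mean, with a known population std deviation.
import math
from scipy.stats import norm

x_bar, sigma, n = 68.0, 4.0, 100        # hypothetical sample mean, population std, sample size
z = norm.ppf(0.975)                     # two-sided 95% -> z ~ 1.96

margin = z * sigma / math.sqrt(n)
print(f"95% CI: ({x_bar - margin:.2f}, {x_bar + margin:.2f})")
# A bigger sigma or smaller n widens the interval -> more uncertainty.
```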
Day 8/100 – #100DaysOfML Today I practiced solving problems using the Z-Score Table to find probabilities under the Normal Distribution. Following @codebasicshub. #MachineLearning #AI #DataScience
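A tiny scipy sketch of what the Z-score table lookup above computes; the distribution parameters and cutoff are invented for illustration.

```python
# Convert a value to a z-score and read the probability from the normal CDF,
# which is exactly what a Z-table lookup gives.
from scipy.stats import norm

mu, sigma = 170.0, 10.0                 # hypothetical height distribution (cm)
x = 185.0

z = (x - mu) / sigma                    # how many standard deviations above the mean
print("z =", z)                                      # 1.5
print("P(X < 185) =", norm.cdf(z))                   # ~0.9332, same value a Z-table gives
print("P(X > 185) =", 1 - norm.cdf(z))               # ~0.0668
```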