codewithraphael | Statistics
@codewithraphael
Followers: 479
Following: 5K
Media: 213
Statuses: 830
building ai, ml, llm tuning hyperparameters and chasing SOTA
terminal
Joined February 2022
As far as the east is from the west, so far has he removed our transgressions from us. Psalm 103:12
0
0
11
MIT's "Statistics for Applications"
Lecture Videos: https://t.co/qivvGmn8mk
Lecture Slides: https://t.co/RZXnnosERg
1
69
456
11 FREE Books from MIT for Absolute Beginners
- Machine Learning (ML)
- Deep Learning (DL)
- Reinforcement Learning (RL)
- Artificial Intelligence (AI)
To get:
1. Follow (so I can DM you)
2. Like & retweet
3. Reply "Send"
494
575
3K
This animation breaks it down, literally. What you're seeing is how models convert human language into vectors in 3D space. Each word or phrase becomes a direction or position. This is how machines "learn": by turning language into math.
0
0
6
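A minimal sketch of the idea in Python, with hand-picked toy vectors rather than a trained model: each word maps to a point in space, and similar meanings end up pointing in similar directions.

import numpy as np

# Toy 3D embeddings (made up for illustration, not learned)
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.8, 0.9, 0.2]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, near 0 = unrelated
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words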
> roadmap to mathematics for machine learning
0
0
3
this is it
1
0
3
omo the struggles 😂😂
IShowSpeed knew EXACTLY what he was doing slapping this pro women’s wrestler’s A$$ as his way of “tapping” out 😭🍑 https://t.co/ihDJi8eT75
0
0
1
okay this is tuff 😅💜
0
0
1
Crying guy: “Noo you can’t just solve the economy with programming!!” Chad coder: while True: print("GDP++")
0
0
1
Transformers have lower inductive bias: they learn patterns from data, not from built-in assumptions like locality or hierarchy. Lower inductive bias = more flexibility, but more data is needed to learn effectively.
0
0
1
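One way to see this, sketched in PyTorch (sizes and shapes here are arbitrary choices): without positional encodings, a Transformer encoder layer has no built-in notion of order or locality, so shuffling the tokens just shuffles the outputs.

import torch
import torch.nn as nn

# Sketch: a Transformer encoder layer with no positional encodings is
# permutation-equivariant -- it assumes nothing about token order.
torch.manual_seed(0)
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
layer.eval()  # disable dropout so outputs are deterministic

x = torch.randn(1, 8, 16)   # (batch, seq_len, d_model)
perm = torch.randperm(8)    # random reordering of the 8 tokens

with torch.no_grad():
    encode_then_shuffle = layer(x)[:, perm]
    shuffle_then_encode = layer(x[:, perm])

# Same result either way: no order/locality prior is built in
print(torch.allclose(encode_then_shuffle, shuffle_then_encode, atol=1e-5))  # True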
MLPs (Multi-Layer Perceptrons) can model complex functions but learn from data without strong built-in structure. CNNs use locality and translation invariance (good for images).
0
0
1
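A quick PyTorch sketch of the CNN prior (toy sizes; circular padding is chosen so the property holds exactly): shifting the input image shifts the feature map the same way, a built-in assumption that an MLP on flattened pixels does not have.

import torch
import torch.nn as nn

# Sketch: convolution bakes in locality and translation equivariance.
# Circular padding makes shift-then-convolve match exactly at the borders.
torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode="circular", bias=False)

img = torch.randn(1, 1, 8, 8)
shifted = torch.roll(img, shifts=2, dims=-1)  # shift the image right by 2 pixels

with torch.no_grad():
    convolve_then_shift = torch.roll(conv(img), shifts=2, dims=-1)
    shift_then_convolve = conv(shifted)

# Identical: the conv layer treats every location the same way
print(torch.allclose(convolve_then_shift, shift_then_convolve, atol=1e-5))  # True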
Inductive bias = the assumptions a model makes to learn patterns from data.
Linear Regression assumes linear relationships
SVMs assume linear boundaries (unless using kernels)
Decision Trees split orthogonally (axis-aligned)
What about large language models?
0
0
1
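A small scikit-learn sketch of those assumptions colliding (synthetic data, arbitrary seed): labels split by the diagonal x2 = x1 match a linear model's bias exactly, while a decision tree must approximate the oblique boundary with many axis-aligned splits.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: the class boundary is the diagonal x2 = x1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 1] > X[:, 0]).astype(int)

linear = LogisticRegression().fit(X, y)                   # bias: one linear boundary
tree = DecisionTreeClassifier(random_state=0).fit(X, y)   # bias: axis-aligned splits

print("linear accuracy:", linear.score(X, y))   # near 1.0 with a single boundary
print("tree leaves used:", tree.get_n_leaves()) # many splits to mimic the diagonal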