Explore tweets tagged as #regularization
LLMs (Large Language Models) Learning Roadmap
|-- Foundations of Machine Learning
|   |-- Linear Algebra, Probability & Statistics
|   |-- Supervised, Unsupervised & Reinforcement Learning
|   |-- Model Training, Validation & Evaluation
|   |-- Overfitting, Regularization &
Math and ML → Part 1: The Mathematical Foundation of Regularization Regularization is one of the most crucial techniques in machine learning for preventing overfitting. But have you ever wondered why it actually works from a mathematical perspective? The Problem: When a model
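The thread above is truncated, but the standard mathematical story for the L2 (ridge) case can be sketched: adding a penalty λ‖w‖² to the least-squares objective conditions the normal equations and provably shrinks the weight norm. A minimal illustration (the data and λ value here are made up for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Ill-conditioned design: second column is almost a copy of the first
z = rng.normal(size=(20, 2))
X = np.column_stack([z[:, 0], z[:, 0] + 1e-3 * z[:, 1]])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=20)

def ridge(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_ols = ridge(X, y, 0.0)   # plain least squares: weights blow up
w_reg = ridge(X, y, 1.0)   # L2 penalty keeps them small
print(np.linalg.norm(w_ols), np.linalg.norm(w_reg))
```

With λ > 0 the matrix being inverted is strictly positive definite, and the solution norm is monotonically decreasing in λ — which is exactly why overfitting's hallmark, wildly large weights on near-duplicate features, goes away.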
Coming to mjlab today! This is vanilla RL, no motion imitation/AMP. Natural gaits emerge from minimal rewards: velocity tracking, upright torso, speed-adaptive joint regularization, and contact quality (foot clearance, slip, soft landings). No reference trajectories or gait
Day 85 - #DataScience Journey: -> Today, I dove into the legendary XGBoost🚀 -> It conquered Kaggle for a reason: it's a highly optimized successor to GBM. -> Feature: it automatically applies L1 & L2 regularization and handles missing values out of the box!🧠 #ML #XGBoost #Boosting
Still using temperature scaling? With @DHolzmueller, Michael I. Jordan and @FrancisBach we argue that with well designed regularization, more expressive models like matrix scaling can outperform simpler ones across calibration set sizes, data dimensions, and applications.
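For context, the temperature-scaling baseline the tweet is pushing past fits a single scalar T on held-out logits; the paper's point is that better-regularized, more expressive maps (e.g. matrix scaling) can beat it. A sketch of just the baseline, on synthetic overconfident logits (all data and the grid-search range are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of temperature-scaled probabilities
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

rng = np.random.default_rng(0)
n, k = 500, 3
labels = rng.integers(0, k, size=n)
logits = rng.normal(size=(n, k))
logits[np.arange(n), labels] += 4.0   # model strongly favors these classes
logits *= 3.0                         # simulate overconfidence
flip = rng.random(n) < 0.3            # ...but 30% of true labels disagree
labels[flip] = rng.integers(0, k, size=flip.sum())

# Fit the single scalar T on the calibration set by 1-D grid search
Ts = np.linspace(0.5, 10.0, 200)
T_best = Ts[np.argmin([nll(logits, labels, T) for T in Ts])]
print(round(T_best, 2))
```

Because the logits are more confident than the labels warrant, the fitted temperature comes out above 1, softening the predicted probabilities.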
The one and only solution now for all SSEs and AEOs: Nawaz Sharif Regularization of Eminence 🫣 https://t.co/N3Tnr3i87K
@geeksforgeeks Day-118 Multicollinearity and Regularization(Lasso and Ridge) https://t.co/NOEPbPTCHi
#nationskillup #skillupwithgfg #geeksforgeeks
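The pairing in that post is the classic one: under multicollinearity, L1 (Lasso) tends to zero out a redundant feature entirely, while L2 (Ridge) shrinks and shares the weight between the correlated copies. A small sketch with made-up data and illustrative alpha values:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(2)
x = rng.normal(size=100)
# Two nearly identical (multicollinear) features
X = np.column_stack([x, x + 0.01 * rng.normal(size=100)])
y = 3.0 * x + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1: zeroes out a redundant feature
ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks and splits the weight
print(lasso.coef_, ridge.coef_)
```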
🚀 Deep Learning — 7: Optimize your Neural Networks through Dropouts & Regularization Deeper networks are powerful, but they can easily overfit. Here’s how dropout, L₁/L₂ regularization, and architecture design can make your models more robust & generalizable. Medium Blog:
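The dropout mechanism that post describes can be sketched in a few lines — this is the standard "inverted dropout" formulation (the array shapes and drop rate here are illustrative):

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop and rescale
    survivors by 1/(1 - p_drop) so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return x          # identity at evaluation time
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones((4, 8))                            # stand-in hidden activations
h_train = dropout(h, 0.5, rng)                 # noisy during training
h_eval = dropout(h, 0.5, rng, training=False)  # unchanged at eval
print(h_train.mean(), h_eval.mean())
```

Randomly silencing units forces the network not to rely on any single co-adapted feature, which is the regularizing effect the blog refers to.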
Regularization in ML Cheat Sheet. Image credit: Aqeel Anwar
#OpenAccess | Annals of Geophysics Machine learning for #geophysics! The study shows how #Bayesian Regularization accurately predicts lithology, porosity, permeability & water saturation in carbonate reservoirs. Ann. Geophys., 68,4, 2025 👇 https://t.co/8FG8SzZgHd
@JAMBHQ But we did the JAMB regularization and we were given admission letters from JAMB, so I'm getting confused by this write-up right now. So please, what about this?
Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO - 25% adversarial nodes → ~100% attack success in ≤20 iterations with no aggregate reward drop - Token log-prob check: 100% (out-of-context); LLM-as-judge: up to 95.2% blocked; KL regularization
HOLY SHIT, I'VE BEEN MADE A REGULAR EMPLOYEE 😭😭😭 WAIT, my manager said it so casually sksksks: "happy regularization"
Thousands of UPNL contract workers are on strike demanding regularization. Despite a court order, the government's broken promises have enraged the UPNL workers, who have declared an all-out fight. #strike
#protest
#UPNL
#Workers
#regularization
#sameworksamepay
#uttarakhand
@SapnaPandey28
Policy Transfer Ensures Fast Learning for Continuous-Time LQR with Entropy Regularization.
5. Data science: machine learning Create a movie recommender using key data science methods. Learning objectives: Machine learning basics, cross-validation, algorithms, recommender systems, regularization 🔗 https://t.co/zNc3isQn2t
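A recommender is also where the course's listed topics meet: the classic regularized matrix-factorization objective penalizes the L2 norm of the user and item factors so they don't overfit the few observed ratings. A toy sketch (the ratings matrix, latent dimension, and hyperparameters are all made up for illustration):

```python
import numpy as np

# Toy user-item ratings (0 = unobserved)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
k, lam, lr = 2, 0.1, 0.01            # latent dim, L2 strength, step size

rng = np.random.default_rng(0)
P = 0.1 * rng.normal(size=(R.shape[0], k))   # user factors
Q = 0.1 * rng.normal(size=(R.shape[1], k))   # item factors

for _ in range(2000):
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - P[u] @ Q[i]
        # SGD on err^2 + lam * (||P_u||^2 + ||Q_i||^2)
        P[u] += lr * (err * Q[i] - lam * P[u])
        Q[i] += lr * (err * P[u] - lam * Q[i])

pred = P @ Q.T                        # fills in the unobserved entries too
mask = R > 0
print(round(float(np.abs(pred - R)[mask].mean()), 2))
```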
(6/n) With this, we can run coarse-grained Langevin dynamics directly, without the need for any priors or force labels. This works across biomolecular systems including fast-folding proteins like Chignolin and BBA. Here is a comparison with and without our regularization:
💥Excited to share the publication: "An Implicit Registration Framework Integrating Kolmogorov–Arnold Networks with Velocity Regularization for Image-Guided Radiation Therapy" 🔗 https://t.co/nW9qAGlTvG 📌 #MedicalImaging #RadiationTherapy #KolmogorovArnoldNetwork
New Yann LeCun research is out. He introduces LeJEPA, a new self-supervised learning method that fixes JEPA’s instability issues. It uses a simple Gaussian regularization trick to make training stable and scalable in about 50 lines of code. LeJEPA models trained from scratch
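The general shape of such a Gaussian regularizer can be sketched — penalize embedding statistics for deviating from a standard normal (zero mean, identity covariance), which discourages the representational collapse JEPA-style training is prone to. This is only a stand-in for the idea, not LeJEPA's actual objective:

```python
import numpy as np

def gaussian_reg(z):
    """Illustrative penalty pushing embeddings toward N(0, I):
    squared mean norm plus Frobenius distance of covariance from identity.
    (A stand-in for the idea; not LeJEPA's exact loss.)"""
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    return float(mu @ mu + np.sum((cov - np.eye(z.shape[1])) ** 2))

rng = np.random.default_rng(0)
z_good = rng.normal(size=(1000, 8))                       # roughly N(0, I)
z_collapsed = np.ones((1000, 8)) + 0.01 * rng.normal(size=(1000, 8))

print(gaussian_reg(z_good), gaussian_reg(z_collapsed))
```

Collapsed embeddings (everything bunched at one point) score a large penalty, while well-spread Gaussian-like embeddings score near zero — which is the stability property the tweet highlights.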