Explore tweets tagged as #gradientdescent
@SAIConference
SAI Conferences
2 months
At #Computing2025, Thomas Nowotny explained how spiking neural networks mimic the brain 🧠—from input to output layers where spikes and dynamics interact. #ThomasNowotny #Computing2025 #GradientDescent #AIResearch #DeepLearning #MachineLearning #NeuralNetworks #BrainInspiredAI
0
0
0
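The talk itself isn't linked, so as a rough illustration of the "spikes and dynamics" mentioned above, here is a minimal leaky integrate-and-fire (LIF) neuron sketch in Python. Every constant and the synthetic input are assumptions for illustration, not material from the talk.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential v
# leaks toward rest, integrates input current, and emits a spike when it
# crosses a threshold. All constants are illustrative assumptions.
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0
v, spikes = 0.0, []
rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.12, size=200)  # synthetic input drive

for t, i_in in enumerate(input_current):
    v += dt / tau * (-v) + i_in   # leak toward 0, then integrate the input
    if v >= v_thresh:             # threshold crossing -> spike
        spikes.append(t)
        v = v_reset               # reset the membrane after spiking

print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
```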
@kamikamustudio
MNEMONIKK & The Bands ft. VidGim
15 days
https://t.co/tsglnXD5uJ A song about a toxic and traumatic relationship 🎵 MNEMONIKK - Gradient Descent Baby #electroclash #electropop #gradientdescent #toxic #traumatic https://t.co/HAexhuh5al
0
2
4
@kamikamustudio
MNEMONIKK & The Bands ft. VidGim
3 days
0
1
5
@DrMElhosseini
Mostafa Elhosseini
1 year
🚀 Unravel the nuances of Gradient Descent in our latest video lecture focused on Linear Regression! Learn how its variants (Batch, Stochastic, and Mini-Batch) affect model training efficiency. Essential for optimizing machine learning models. 🧠📈 #GradientDescent #MachineLearning
0
1
2
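The lecture video isn't transcribed here, but the three variants it names differ only in how many rows feed each parameter update. A minimal numpy sketch under assumed hyperparameters (synthetic data, lr=0.1, 50 epochs):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

def grad(w, Xb, yb):
    # Gradient of mean squared error for linear regression on a batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

def fit(batch_size, lr=0.1, epochs=50):
    w = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(y))        # reshuffle each epoch
        for s in range(0, len(y), batch_size):
            b = idx[s:s + batch_size]
            w -= lr * grad(w, X[b], y[b])    # one update per batch
    return w

print("batch:     ", fit(batch_size=200))  # all rows per update
print("stochastic:", fit(batch_size=1))    # one row per update
print("mini-batch:", fit(batch_size=32))   # a compromise between the two
```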
@SketchpunkLabs
VoR
2 years
I've had an idea for an IK solver for a long while. Start with a simple curve like a quadratic Bezier. Have the user choose the end pos & control pos, then move the ctrl so it matches the arc length of a bone chain, then FABRIK it. Need to learn error calc, Newton's method & gradient descent #math
1
0
2
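A toy numerical reading of the idea, assuming nothing about the author's actual solver: choose the control point of a quadratic Bezier so the curve's arc length matches the bone chain's total length, via gradient descent on the squared length error with a finite-difference gradient (the error calc the tweet mentions wanting to learn).

```python
import numpy as np

def arc_length(p0, ctrl, p2, n=100):
    """Approximate a quadratic Bezier's arc length by sampling the curve."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1 - t)**2 * p0 + 2 * (1 - t) * t * ctrl + t**2 * p2
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

p0 = np.array([0.0, 0.0])      # chain root
p2 = np.array([3.0, 0.0])      # user-chosen end position
chain_len = 4.0                # total length of the bone chain
ctrl = np.array([1.5, 0.5])    # initial control point

# Gradient descent on err(ctrl) = (arc_length - chain_len)^2 using a
# finite-difference gradient, so no analytic derivative is needed.
lr, eps = 0.05, 1e-4
for _ in range(200):
    base = (arc_length(p0, ctrl, p2) - chain_len)**2
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = ((arc_length(p0, ctrl + d, p2) - chain_len)**2 - base) / eps
    ctrl -= lr * g

print("ctrl:", ctrl, "arc length:", arc_length(p0, ctrl, p2))  # ~4.0
```

With the arc length matched, a FABRIK pass can then pull the bone chain onto the curve.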
@TeflonTommy2
MrTommy (Mister)
3 years
The only winner in an AI arms race is the AI. #AI #artificialintelligence #intelligenceexplosion #GradientDescent
1
2
3
@DeadCarcosa
Dead Carcosa
2 years
The extremely healthy dynamic between Era, my psionic character, and Dodger, my friend's space soldier character. The TTRPG is #gradientdescent
0
0
0
@ByteMohit
Mohit Goyal
1 year
Day 15 of #100DaysOfCode ↳ DSA Progress: Child Sum property in Binary Tree ↳ Learned about Gradient Descent ↳ Built the auth of Amnesecure #dsa #java #LearnInPublic #BuildInPublic #AtoZDSA #ML #GradientDescent
0
0
6
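For the DSA item in that log: the child sum property says every non-leaf node's value equals the sum of its children's values. A short recursive check, sketched in Python rather than the thread's Java:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def has_child_sum(node):
    """True if every non-leaf node's value equals the sum of its children."""
    if node is None or (node.left is None and node.right is None):
        return True  # empty subtrees and leaves satisfy the property
    total = (node.left.val if node.left else 0) + \
            (node.right.val if node.right else 0)
    return (node.val == total
            and has_child_sum(node.left)
            and has_child_sum(node.right))

root = Node(10, Node(8, Node(3), Node(5)), Node(2))
print(has_child_sum(root))  # True: 10 = 8 + 2 and 8 = 3 + 5
```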
@Hermetiphysics
Space Admiral General
2 months
If you believe that tarot cards “work”, then you should also believe that artificial intelligence is already proto-conscious because gradient descent is a parallel panpsychic “mechanism.” #tarot #AI #gradientdescent
0
0
0
@datamlistic
datamlistic
4 months
🚀 Ever wondered how AI actually learns? Gradient Descent is the key! 🧠⚡ Watch this quick guide to understand how machines optimize and get smarter. 🎥 Watch the full video here: https://t.co/Xd4RzzxZoe #AI #MachineLearning #GradientDescent #DeepLearning #DataScience
0
0
1
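The linked video isn't embedded here, but the loop it advertises fits in a few lines: repeatedly step a parameter against the gradient of a loss. The example function and step size below are assumptions for illustration.

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
def f_prime(x):
    return 2 * (x - 3)    # analytic derivative of the loss

x, lr = 0.0, 0.1          # start away from the minimum; modest step size
for _ in range(50):
    x -= lr * f_prime(x)  # move against the gradient
print(x)                  # ~3.0
```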
@lombardweb
☷ Lombard Web
3 months
21 September 2025, 17:04 GMT+2. Log loss would favor doubt about the art and the manner over an experiment that has already concluded with a result but does not satisfy everyone. Here is the succinct explanation and its optimization constraints #LBFGS #newton #gradientDescent
0
1
2
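One concrete reading of those hashtags: minimizing logistic log loss with a quasi-Newton method (L-BFGS) versus a Newton-type method, here via scipy. The data and model are assumed for illustration, not taken from the thread.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
# Noisy synthetic labels so the log-loss minimum is finite.
y = (X @ np.array([1.5, -2.0]) + rng.normal(size=300) > 0).astype(float)

def log_loss(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

w0 = np.zeros(2)
for method in ("L-BFGS-B", "Newton-CG"):  # quasi-Newton vs Newton-type
    res = minimize(log_loss, w0, jac=grad, method=method)
    print(method, res.x, round(res.fun, 4))
```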
@Sensors_MDPI
Sensors MDPI
2 years
#paperfromEBM Estimation of Foot Trajectory and Stride Length during Level Ground Running Using Foot-Mounted Inertial Measurement Units https://t.co/zkq5706iDS #runningspeed #wearablesensors #gradientdescent
0
1
1
@Sachintukumar
Sachin Kumar
2 years
#Optimization: Mini-Batch Gradient Descent 🎉📊 ✅Mini-Batch #GradientDescent (MBGD) is a powerful technique that strikes a balance between the computational efficiency of Stochastic Gradient Descent (SGD) & the stability of Batch Gradient Descent (BGD) 🎯💡 Source - Prudhvi Vardhan 🧵
1
4
18
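A quick numpy check of that claim under an assumed synthetic setup: at a fixed parameter vector, smaller batches give noisier estimates of the full-batch gradient, which is exactly the efficiency/stability trade-off MBGD splits.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 2))
y = X @ np.array([1.0, -3.0]) + 0.5 * rng.normal(size=1000)
w = np.zeros(2)  # evaluate all gradient estimates at the same point

def mse_grad(Xb, yb):
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

full = mse_grad(X, y)            # the exact batch gradient
for bs in (1, 32, 1000):         # SGD, mini-batch, full batch
    errs = []
    for _ in range(200):
        i = rng.choice(1000, size=bs, replace=False)
        errs.append(np.linalg.norm(mse_grad(X[i], y[i]) - full))
    print(f"batch size {bs:4d}: mean gradient error {np.mean(errs):.3f}")
```

Expected pattern: the error shrinks as batch size grows, hitting zero at the full batch, while the cost per update grows with it.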
@__kanhaiya__
Jayant Verma
1 year
#Optimizers & the challenges of using #GradientDescent in #DeepLearning! Key issues:
- Learning rate selection
- Local minima
- Vanishing/exploding gradients
- Convergence speed
Understanding these is crucial for robust models! 🚀#AI #MachineLearning #DataScience #TechInsights
1
0
1
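Of those issues, learning rate selection is the easiest to show in a few lines. A toy demonstration on f(x) = x^2 (gradient 2x), with illustrative, assumed values: too small crawls, a good value converges, too large diverges.

```python
# Gradient descent on f(x) = x^2 with three learning rates.
def run(lr, steps=30, x=5.0):
    for _ in range(steps):
        x -= lr * 2 * x   # gradient of x^2 is 2x
    return x

for lr in (0.01, 0.4, 1.1):
    print(f"lr={lr}: x after 30 steps = {run(lr):.4g}")
# lr=0.01 barely moves, lr=0.4 reaches ~0, lr=1.1 blows up.
```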
@TensorThrottleX
Ayush
3 months
Day 123 – Data Science Journey
-> IG (like SGD): uses subsets for gradient approximation
-> CRAIG: coreset-based IG that speeds up convergence
-> Error bound: the subset gradient ≈ the full-data gradient
-> CRAIG algo: greedy submodular picks + weights = efficient gradient estimate
#100DaysOfCode #ML #GradientDescent
2
1
7
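A toy sketch of the idea behind that thread, with the loud caveat that this is NOT the actual CRAIG algorithm (which maximizes a submodular facility-location objective): greedily pick a small subset whose averaged per-example gradients best match the full-data gradient.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = X @ rng.normal(size=4)
w = rng.normal(size=4)

per_ex = 2 * X * (X @ w - y)[:, None]  # per-example MSE gradients
full = per_ex.mean(axis=0)             # full-data gradient to approximate

chosen, k = [], 10
for _ in range(k):  # greedy forward selection of the coreset
    errs = [np.linalg.norm(per_ex[chosen + [j]].mean(axis=0) - full)
            for j in range(len(X))]
    chosen.append(int(np.argmin(errs)))

rand = rng.choice(500, size=k, replace=False)
print("greedy coreset error:",
      np.linalg.norm(per_ex[chosen].mean(axis=0) - full))
print("random subset error: ",
      np.linalg.norm(per_ex[rand].mean(axis=0) - full))
```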
@TensorThrottleX
Ayush
3 months
Day 130 - Data Science Journey:
-> Practiced the chain rule: dL/dW = dL/dA * dA/dZ * dZ/dW, multiplying per-layer terms to backprop errors
-> Sigmoid gradient: σ'(z) = σ(z)(1 - σ(z)), used to fine-tune activations in deeper nets
#MachineLearning #NeuralNetworks #GradientDescent #DataScience
2
1
11
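Both lines of that entry can be checked numerically. The sketch below pushes one example through a single sigmoid unit, applies the chain rule exactly as written (dL/dw = dL/dA * dA/dZ * dZ/dw, with the sigmoid gradient σ(z)(1 - σ(z)) as the middle factor), and compares against a finite-difference derivative; the numbers are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit on one example: Z = w*x, A = sigmoid(Z), L = (A - t)^2.
x, w, t = 1.5, 0.8, 1.0
Z = w * x
A = sigmoid(Z)

# Chain rule, term by term: dL/dw = dL/dA * dA/dZ * dZ/dw.
dL_dA = 2 * (A - t)
dA_dZ = sigmoid(Z) * (1 - sigmoid(Z))  # the sigmoid gradient
dZ_dw = x
analytic = dL_dA * dA_dZ * dZ_dw

# Sanity check against a numerical derivative of the same loss.
eps = 1e-6
numeric = ((sigmoid((w + eps) * x) - t)**2 - (sigmoid(w * x) - t)**2) / eps
print(analytic, numeric)  # agree to ~6 decimal places
```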
@Sachintukumar
Sachin Kumar
2 years
🗓️Day 14 of #Deeplearning ▫️ Topic - Gradient Descent & #Vectorization
#GradientDescent is a first-order optimization technique used to find a local minimum of a loss function. It is also known as a parameter optimization technique. A Complete 🧵
6
11
76
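The thread itself isn't reproduced here, but its two topics combine naturally: a fully vectorized batch gradient descent for linear regression, where the whole gradient is one matrix expression with no Python loop over examples. Data and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, 2.0, -1.0])

# Vectorized update w <- w - lr * dL/dw for MSE loss L(w) = mean((Xw - y)^2):
# the gradient 2 X^T (Xw - y) / n covers every example in one expression.
w, lr = np.zeros(3), 0.1
for _ in range(300):
    w -= lr * 2 * X.T @ (X @ w - y) / len(y)
print(w)  # converges toward [1, 2, -1]
```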