Christy Varghese
@technomadlyf
Followers: 30
Following: 35
Media: 45
Statuses: 148
I am a #technomad, looking to break free from the 9-to-5 shackles. As a data scientist, I am building my world around data. I keep pushing myself to be ready for the #future.
Kochi
Joined January 2022
🚨 See Like a Robot 🤖👁️ Turning a Raspberry Pi into a DIY AI Security Cam with YOLO object detection!
➡️ Logs objects with time & count
➡️ Funny + useful detections
➡️ Step-by-step for makers
#RaspberryPi #YOLO #AIProjects #ComputerVision #DIYTech
https://t.co/Czp4fjNnT3
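For anyone following along, a rough sketch of the detect-and-log loop, assuming the Ultralytics YOLO package, OpenCV, and a camera at index 0; the model file, log path, and log format are illustrative choices, not the exact setup from the video.

```python
# Hedged sketch: count objects per frame with YOLO and log them with a timestamp.
# Assumes `pip install ultralytics opencv-python` and a camera at index 0.
from collections import Counter
from datetime import datetime

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small model that fits a Raspberry Pi
cap = cv2.VideoCapture(0)           # Pi camera or USB webcam

with open("detections.log", "a") as log:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)[0]
        # Map class ids to names and count how many of each object we saw.
        names = [results.names[int(c)] for c in results.boxes.cls]
        counts = Counter(names)
        if counts:
            stamp = datetime.now().isoformat(timespec="seconds")
            log.write(f"{stamp} {dict(counts)}\n")
            log.flush()
```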
Here’s the final result 👇 📽️ https://t.co/9M3HFRkoqx Curious how others are using AI for bold, visual narratives. Let’s build the future of storytelling. #GenerativeAI #AIFilmmaking #CreativeTech #AnimeStyle #AIStorytelling
🧠 Gemini for expressive scene generation
🎞️ Runway ML for cinematic sequencing & emotion-driven pacing
Not just tech for tech's sake: this was about testing how AI can deliver emotion, scale, and drama in storytelling.
Tried something different this week: Used AI tools to reimagine one of the most intense stories in history — with an anime-inspired visual style. ⚔️🎥 Think Attack on Titan meets generative storytelling.
💡 Final Take: Go for regularization strategies and don’t fear larger networks—they might just be more robust! #AI
Why Large Networks are More Reliable 🤖 Larger networks, despite having more minima, usually converge to more stable, low-loss solutions. Research shows that they’re less dependent on lucky initializations, yielding more reliable outcomes. #DataScience #MachineLearning
The Challenge with Small Networks ⚠️ Smaller networks can be harder to train with gradient-based methods. They might have fewer local minima in their loss function, but those minima often yield high losses, leading to inconsistent and poor results. #GradientDescent #AIResearch
Better Solutions for Overfitting 🔍 Instead of reducing neurons, methods like L2 regularization, dropout, or adding input noise can control overfitting more effectively. These techniques improve generalization without limiting the network’s learning power. #MLTips #DeepLearning
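Rough PyTorch sketch of the three ideas above (L2 via weight decay, dropout, and input noise); the layer sizes and coefficients are arbitrary placeholders, not recommendations.

```python
# Hedged sketch: three common overfitting controls in one small PyTorch model.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std            # input-noise strength (illustrative)
        self.net = nn.Sequential(
            nn.Linear(20, 128),
            nn.ReLU(),
            nn.Dropout(p=0.5),                # dropout: randomly zero activations
            nn.Linear(128, 2),
        )

    def forward(self, x):
        if self.training:                     # add input noise only while training
            x = x + torch.randn_like(x) * self.noise_std
        return self.net(x)

model = SmallNet()
# weight_decay applies L2 regularization to every parameter update.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```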
🧵 Choosing Neural Network Size: Small vs. Large 💡 It’s a common misconception that smaller networks reduce overfitting on simple data. But here’s why that approach can lead to bigger issues and why larger networks, with regularization, are often better! #NeuralNetworks #AI
💬 Question: What’s your experience with activation functions? Do you prefer ReLU, tanh, or another alternative to sigmoid? #MachineLearningCommunity
📚 I’m compiling more notes on activation functions and other neural network insights. Stay tuned if you want more posts like this to improve your #MLskills!
3. Non-Zero-Centered Output: Sigmoid outputs only positive values, so during backprop the gradients on a layer's weights all share the sign of the upstream gradient. That forces zig-zagging gradient updates and slower training. Zero-centered alternatives like tanh avoid this. #DeepLearningTips #Sigmoid
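Tiny NumPy illustration of that point, with made-up numbers: for a neuron whose inputs come from a sigmoid, dL/dw_i = dL/dz * x_i, so every weight gradient shares the sign of the upstream gradient dL/dz.

```python
# Hedged sketch: with all-positive inputs, every weight gradient has the same sign.
import numpy as np

rng = np.random.default_rng(0)

x = 1.0 / (1.0 + np.exp(-rng.normal(size=5)))   # sigmoid activations: all in (0, 1)
upstream = -0.7                                  # dL/dz flowing back into the neuron

grad_w = upstream * x                            # dL/dw_i = dL/dz * x_i
print(x)        # every input is positive
print(grad_w)   # so every weight gradient shares the sign of dL/dz (all negative here)
```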
🔄 2. Problem with Large Initial Weights: If initial weights are large, many neurons can start saturated, effectively killing gradients from the start. This means the network may not learn at all, especially in deep layers. #NeuralNetworkTips #AIResearch
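To make this concrete, a throwaway NumPy experiment with arbitrary weight scales: the larger the initial scale, the more sigmoid activations land in the flat regions where the local gradient is effectively zero.

```python
# Hedged sketch: larger initial weights push more sigmoid units into saturation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                     # a batch of toy inputs

for scale in [0.1, 1.0, 3.0]:
    W = rng.normal(scale=scale, size=(50, 100))    # one layer's initial weights
    s = sigmoid(X @ W)                             # activations of 100 sigmoid units
    local_grad = s * (1.0 - s)                     # backprop multiplies by this factor
    saturated = np.mean(local_grad < 0.01)         # fraction passing back ~no gradient
    print(f"init scale {scale:4.1f}: {saturated:.0%} of activations saturated")
```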
🚫 1. Saturation & Gradient Killing: When a neuron’s output approaches 0 or 1, the gradient becomes almost zero. This can “kill” the gradient during backpropagation, meaning less signal flows back to update weights. Result? Slower or stalled learning. #Backpropagation #AITraining
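Quick NumPy check of the effect, with inputs chosen purely for illustration: the sigmoid's local gradient σ(x) * (1 - σ(x)) collapses once the input moves a few units away from zero.

```python
# Hedged sketch: the sigmoid's local gradient vanishes when the neuron saturates.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for x in [0.0, 2.0, 5.0, 10.0]:
    s = sigmoid(x)
    local_grad = s * (1.0 - s)      # dσ/dx, the factor multiplied in during backprop
    print(f"x={x:5.1f}  sigmoid={s:.5f}  local gradient={local_grad:.5f}")
# At x=10 the local gradient is ~0.000045: almost no signal flows back.
```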
🤔 What is Sigmoid? The sigmoid function takes any input and “squashes” it to a range between 0 and 1. This is useful for binary classification but has limitations that could slow down training. #DeepLearning #NeuralNetworks
🧵 Exploring the Sigmoid Function in Neural Networks 💡 I’m digging into the pros and cons of popular activation functions like sigmoid. While it’s widely used, sigmoid has some drawbacks that can impact your network’s learning ability. Let’s dive in! #MachineLearning #AI
💬 Have you tried using a single neuron for classification? Share your experiences or thoughts below! #MachineLearning #NeuralNetworks #artificialintelligence #YOLO
🌱 Regularization as "Forgetting": Regularization acts like “gradual forgetting,” keeping weights in check by driving them closer to zero with each update. This helps the neuron avoid overfitting and focus on general patterns.
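Toy NumPy sketch of that "gradual forgetting" view, with made-up weights and an arbitrary L2 strength: the extra lam * w term in the update shrinks every weight toward zero on each step.

```python
# Hedged sketch: L2 regularization shrinks weights a little on every update.
import numpy as np

lr, lam = 0.1, 0.5            # learning rate and L2 strength (illustrative values)
w = np.array([2.0, -3.0])     # current weights
grad = np.array([0.4, -0.2])  # gradient of the data loss

# Plain gradient step vs. step with L2: the extra lam * w term decays each weight.
w_plain = w - lr * grad
w_l2 = w - lr * (grad + lam * w)   # equivalently w * (1 - lr*lam) - lr*grad

print(w_plain)  # [ 1.96 -2.98]
print(w_l2)     # [ 1.86 -2.83]  -> pulled closer to zero: "gradual forgetting"
```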
💪 Binary SVM Classifier: Alternatively, adding hinge loss lets the neuron act like a Support Vector Machine (SVM), maximizing the "margin" (distance) between classes. This helps the neuron separate data more effectively.
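Rough NumPy sketch of a single neuron trained with the hinge loss on toy data; the labels live in {-1, +1}, and the learning rate and L2 strength are placeholders.

```python
# Hedged sketch: one neuron as a binary SVM via the hinge loss max(0, 1 - y*score).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.0, 1.0, -1.0)   # toy labels in {-1, +1}

w, b = np.zeros(2), 0.0
lr, lam = 0.1, 0.01                                # learning rate, L2 strength

for _ in range(200):
    scores = X @ w + b                             # the neuron's raw score
    violated = (y * scores) < 1.0                  # points inside the margin
    # Subgradient of the mean hinge loss plus an L2 penalty on w.
    grad_w = -(y[:, None] * X * violated[:, None]).mean(axis=0) + 2.0 * lam * w
    grad_b = -(y * violated).mean()
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```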
👥 Binary Softmax Classifier (Logistic Regression): One approach is to use the sigmoid function on the neuron’s output to interpret it as a probability. A result >0.5 indicates one class, while <0.5 suggests another. Sound familiar? This is similar to logistic regression! 📊
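Minimal NumPy sketch of the single-neuron logistic-regression view on made-up data: the sigmoid output is read as P(class = 1), and anything above 0.5 is called class 1.

```python
# Hedged sketch: one neuron + sigmoid as a binary (logistic regression) classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0.0).astype(float)       # toy 0/1 labels

w, b = np.zeros(2), 0.0
lr = 0.5

for _ in range(500):
    p = sigmoid(X @ w + b)                        # the neuron's output: P(class = 1)
    # Gradient of the mean cross-entropy loss with respect to w and b.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

pred = (sigmoid(X @ w + b) > 0.5).astype(float)   # >0.5 -> class 1, otherwise class 0
print(f"training accuracy: {(pred == y).mean():.2f}")
```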