Mind Tech Savant

@MindTechSavant

Followers: 39
Following: 2K
Media: 9
Statuses: 180

Insightful Minds: Exploring Tech & Mathematics through an Autistic Lens

United States
Joined August 2021
@MindTechSavant
Mind Tech Savant
1 year
Check out the new article on Medium! https://t.co/mgGLHc8w81
0
0
0
@omarsar0
elvis
2 years
"An Illustrated Guide to Modern Machine Learning with Geometric, Topological, and Algebraic Structures": provides an illustrated guide and graphical taxonomy of recent advances in non-Euclidean machine learning.
5
105
407
@bindureddy
Bindu Reddy
2 years
I am extremely bullish about both India and China. The work ethic is very high, they have enormous amounts of local talent, and China especially is building a ton of very cool models. I am also hopeful that they will continue to be pro open-source and share their research and
37
34
270
@MindTechSavant
Mind Tech Savant
2 years
I don’t know how some people register for many courses and try to study them all at the same time. I only recently discovered that many do it, but I will never do it because of my #autistic brain. And yeah, that is my strength and my weakness at the same time. #FOCUS
0
0
0
@MindTechSavant
Mind Tech Savant
2 years
#grok is the biggest open-source LLM. This will revolutionize NLP research and applications worldwide.
0
0
0
@bindureddy
Bindu Reddy
2 years
Good paper by Netflix on cosine similarity. It goes back to building good RAG systems, which is hard. Before deploying these systems, you have to make intelligent decisions about chunking, hierarchical chunking, embedding, and even the algorithm for similarity look-up.
18
268
1K
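The similarity look-up step mentioned above is usually just cosine similarity between embedding vectors. A minimal sketch in NumPy (the vectors here are toy stand-ins for real embeddings, not anything from the paper):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings": b points in the same direction as a, c is orthogonal to a.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
c = np.array([3.0, 0.0, -1.0])

print(cosine_similarity(a, b))  # ≈ 1.0: same direction
print(cosine_similarity(a, c))  # ≈ 0.0: orthogonal
```

Note that cosine similarity ignores vector magnitude, which is exactly the property the Netflix paper scrutinizes.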
@gabrielpeyre
Gabriel Peyré
2 years
The cone of positive semi-definite matrices is a fundamental object of convex analysis and optimization. One can encode or approximate convex constraints as linear sections of this cone. https://t.co/OaqHVLkL4A
2
84
584
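A small numerical illustration of the point: membership in the PSD cone can be checked via eigenvalues, and because the cone is convex, convex combinations of PSD matrices stay PSD. A sketch in NumPy (the matrices are arbitrary examples, not taken from the linked thread):

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """A symmetric matrix is PSD iff all of its eigenvalues are >= 0."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # PSD: eigenvalues 1 and 3
B = np.array([[1.0, 2.0], [2.0, 1.0]])    # not PSD: eigenvalues -1 and 3

# Convexity of the cone: a convex combination of PSD matrices is PSD.
C = 0.5 * A + 0.5 * np.eye(2)

print(is_psd(A), is_psd(B), is_psd(C))
```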
@MindTechSavant
Mind Tech Savant
2 years
A revolution in building LLMs!!
@LangChain
LangChain
2 years
✨ Today, we’re thrilled to announce ✨ - The general availability of LangSmith (no more waitlist!) - Our Series A fundraise led by @sequoia - Our beautiful new homepage and brand We've worked hard over the past few months to add requested features and ensure LangSmith can
0
0
1
@gabrielpeyre
Gabriel Peyré
2 years
Oldies but goldies: A. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes, 1805. First publication of the least square method, before Gauss according to French people … https://t.co/LJHUYbH8oW
3
96
469
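Legendre's method survives essentially unchanged: choose the parameters that minimize the sum of squared residuals, which for a linear model reduces to solving the normal equations. A minimal sketch in NumPy on made-up data:

```python
import numpy as np

# Fit y ≈ a*x + b by minimizing the sum of squared residuals,
# i.e. solving the normal equations (X^T X) theta = X^T y.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exactly y = 2x + 1

X = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = theta
print(a, b)  # recovers slope ≈ 2 and intercept ≈ 1
```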
@AndrewYNg
Andrew Ng
2 years
My takeaways from attending WEF at Davos last week: - There were lots of discussions on business implementation of AI. My top two tips: (i) Pretty much all knowledge workers can benefit from using GenAI now, but most will need training. (ii) Task-based analysis of jobs is helping
104
262
1K
@MindTechSavant
Mind Tech Savant
2 years
My way of learning AI #artificialintelligence:
1. Foundation layer: Machine Learning, Math for Machine Learning
2. Knowledge layer: Deep Learning, Probabilistic Graphical Models, and Reinforcement Learning
3. Mining layer: Natural Language Processing and MLOps
4.
0
0
3
@MindTechSavant
Mind Tech Savant
2 years
“That is the way to learn the most, that when you are doing something with such enjoyment that you don’t notice that the time passes.” - #AlbertEinstein
0
0
0
@MindTechSavant
Mind Tech Savant
2 years
7/ So, as aspiring data detectives, let's aim for models that are just right – not too biased, not too variable. Finding that balance ensures our models don't just memorize the past but can also predict the future accurately! #MachineLearning #BiasAndVariance #DataScience
0
0
0
@MindTechSavant
Mind Tech Savant
2 years
6/ The challenge lies in identifying and minimizing these biases and variances during model training. It's a delicate dance between simplicity and complexity, between underfitting and overfitting.
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
5/ Think of it as cooking: too little spice (bias) and your dish is bland, too much spice (variance) and it's overwhelming. Achieving that perfect flavor is like finding the optimal balance in ML models.
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
4/ Avoidable bias and variance often go hand in hand. The key is finding the sweet spot – a model that captures the essence of the data without getting bogged down by noise.
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
3/ Striking the right balance is crucial. Too much bias, and your model will generalize poorly. Too much variance, and it becomes a 'memorizer,' failing to adapt to new situations.
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
2/ Variance, on the other hand, is the model's sensitivity to small fluctuations in the training data. It's like a detective who overanalyzes every detail, including noise. This can lead to the model performing well on training data but poorly on new, unseen data.
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
🧵 Exploring the nuances of avoidable bias and variance in machine learning! 🤖 Let's dive in. 1/ Bias is like wearing tinted glasses – it distorts our view of the world. In ML, avoidable bias occurs when a model oversimplifies the data, missing crucial patterns. Imagine a
1
0
0
@MindTechSavant
Mind Tech Savant
2 years
One of the 2023 #agenda items has been successfully accomplished: I learned how to tune hyperparameters in deep learning with @AndrewYNg's course. Ready to start the new year with continued goals.
0
0
0
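Hyperparameter tuning in its simplest form is a grid search: try candidate values and keep the one with the best result. A toy sketch, sweeping the learning rate of plain gradient descent on a 1-D quadratic (the objective, step count, and candidate grid are made up for illustration):

```python
# Toy hyperparameter search: pick the learning rate that best
# minimizes f(w) = (w - 3)^2 under plain gradient descent.
def final_loss(lr, steps=50, w0=0.0):
    w = w0
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # derivative of (w - 3)^2
        w -= lr * grad
    return (w - 3.0) ** 2

candidates = [0.001, 0.01, 0.1, 1.5]  # 1.5 diverges: |1 - 2*lr| > 1
losses = {lr: final_loss(lr) for lr in candidates}
best_lr = min(losses, key=losses.get)
print(best_lr)
```

Too small a learning rate barely moves, too large diverges; the search just makes that comparison explicit, which is the same logic a real hyperparameter sweep applies to validation loss.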