Manthan Patel | Lead Gen Man

@leadgenmanthan

Followers: 233 · Following: 2K · Media: 175 · Statuses: 670

I teach AI Agents & Lead Gen · AI Agents & Automation | Lead Generation · 100K+ students · 200K+ community

Gujarat, India
Joined April 2016
@leadgenmanthan · 20 days
this is what the anti-AI crowd hates on, btw
@leadgenmanthan · 2 days
95% of AI projects fail because they can't handle uncertainty. This MIT blueprint shows how to fix that. You don't need to read 700 pages. Here's everything you need to know. Algorithms for Decision Making (MIT Press, 2022) is the manual for building systems that decide under …
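The post is cut off, but the book's core machinery is the Markov decision process. Purely as an illustration (the states, actions, rewards, and transition probabilities below are invented, not taken from the book or the post), here is a toy value-iteration sketch of the kind of "decide under uncertainty" computation it formalizes.

```python
# Toy value iteration over a made-up two-state MDP. Everything here is illustrative.
states = ["ship", "test_more"]
actions = ["release", "delay"]
# P[s][a] = list of (probability, next_state, reward)
P = {
    "ship":      {"release": [(0.8, "ship", 10), (0.2, "test_more", -5)],
                  "delay":   [(1.0, "test_more", -1)]},
    "test_more": {"release": [(0.5, "ship", 8), (0.5, "test_more", -2)],
                  "delay":   [(1.0, "test_more", -1)]},
}
gamma, V = 0.9, {s: 0.0 for s in states}

for _ in range(100):  # back up expected returns until the values stabilize
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in actions)
         for s in states}

policy = {s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
          for s in states}
print(V, policy)
```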
@leadgenmanthan · 4 days
For years, LLM memory was a nightmare: either forget everything or choke on tokens. SimpleMem ends it. It compresses conversations into meaning, turns repeats into habits, and retrieves only what matters. This is real long-term memory!
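SimpleMem's actual API isn't shown in the post, so the sketch below only illustrates the general pattern it describes: compress each turn into a short record of its meaning and retrieve only what's relevant to the current query. Every name here is hypothetical; a real system would summarize with a model and retrieve with embeddings rather than these stand-ins.

```python
from collections import Counter

class TinyMemory:
    """Hypothetical memory store: keep compressed notes, not raw transcripts."""
    def __init__(self):
        self.notes = []

    def add_turn(self, text: str):
        # Stand-in for semantic compression: keep only a short summary of the turn.
        summary = text.strip().split(".")[0][:120]
        self.notes.append(summary)

    def retrieve(self, query: str, k: int = 3):
        # Stand-in for semantic retrieval: rank notes by word overlap with the query.
        q = Counter(query.lower().split())
        scored = sorted(self.notes, key=lambda n: -sum(q[w] for w in n.lower().split()))
        return scored[:k]

mem = TinyMemory()
mem.add_turn("User prefers concise answers. Also asked about pricing.")
mem.add_turn("User's company is a 12-person agency in Gujarat.")
print(mem.retrieve("how big is the user's company?"))
```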
@leadgenmanthan · 5 days
Want higher-quality AI results? Have it map the state machine. Fewer assumptions, more correctness.
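The post doesn't include a concrete prompt, so this is just one way to apply the idea, with the wording entirely my own: make the model enumerate states and transitions before it produces anything else.

```python
def state_machine_prompt(task: str) -> str:
    # Illustrative prompt template: force an explicit state-machine map before the answer.
    return (
        f"Task: {task}\n\n"
        "Before answering, map the state machine:\n"
        "1. List every state the system can be in.\n"
        "2. List every transition (state -> event -> next state).\n"
        "3. Call out unreachable states and unhandled events.\n"
        "Only then produce the solution, and keep it consistent with the map."
    )

print(state_machine_prompt("Implement a checkout flow with retries on payment failure."))
```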
@leadgenmanthan · 6 days
Google just revealed a ridiculously powerful LLM hack: repeat the exact prompt twice. Same tokens. Same latency. Same outputs. 47/70 benchmarks improved. Zero regressions. One copy-paste. Noticeably smarter models.
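Applying the trick is literally one string operation. A minimal sketch, assuming an OpenAI-style chat API with the Python client; the model name is a placeholder.

```python
from openai import OpenAI  # assumes the openai Python package and OPENAI_API_KEY set

client = OpenAI()

def ask_repeated(prompt: str, model: str = "gpt-4o-mini") -> str:
    # The hack from the post: send the exact same prompt twice in one user message.
    doubled = f"{prompt}\n\n{prompt}"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": doubled}],
    )
    return resp.choices[0].message.content

print(ask_repeated("List three failure modes of retrieval-augmented generation."))
```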
@leadgenmanthan · 7 days
AI will do everything for you. But only if you tell it what to do. Garbage prompt = garbage output. Clear prompt = unfair advantage. 13 free guides to master prompting. Learn the skill that controls the machine. Comment "guide" and I will send you all the details.
@leadgenmanthan · 8 days
MCP vs gRPC
@leadgenmanthan · 9 days
HOLY SHIT! DeepSeek-R1 is the AI That Thinks Like a FREAKING Genius.
86-page paper. 1-minute summary. You're welcome :)
Open-source AI that self-learns deep reasoning (math/coding Olympiad level).
Skip human answers → Pure RL on 671B MoE model → Discovers better thinking.
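The thread compresses an 86-page paper, so the snippet below only illustrates the "skip human answers" part: instead of grading against human-written responses, the model is rewarded by a rule-based check on its final answer. The boxed-answer convention and this exact function are illustrative; the paper's actual reward design is more involved.

```python
import re

def reward(model_output: str, ground_truth: str) -> float:
    # Rule-based accuracy reward: 1.0 if the answer inside \boxed{...} matches, else 0.0.
    m = re.search(r"\\boxed\{(.+?)\}", model_output)
    return 1.0 if m and m.group(1).strip() == ground_truth else 0.0

print(reward(r"... so the result is \boxed{42}", "42"))   # 1.0
print(reward("I think it's probably 41", "42"))           # 0.0
```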
@leadgenmanthan · 10 days
Decision guide:
@leadgenmanthan · 10 days
Disadvantages:
@leadgenmanthan · 10 days
Advantages:
@leadgenmanthan · 10 days
What is MoE?
@leadgenmanthan · 10 days
Holy shit! Everyone on the internet is talking about "MoE". Here's everything you need to know. No BS! No fluff!
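The thread's visuals aren't reproduced here, so as a companion, here is a minimal sketch of the core MoE idea in PyTorch: a router scores experts per token and only the top-k expert MLPs run. Layer sizes, expert count, and top-k are illustrative, not taken from any particular model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal Mixture-of-Experts layer: a router sends each token to its top-k experts."""
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 64)        # 8 tokens
print(TinyMoE()(x).shape)     # torch.Size([8, 64])
```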
@leadgenmanthan · 12 days
Task-specific fine-tuning vs. Domain-specific fine-tuning
@leadgenmanthan · 14 days
MIT just released research that changes how AI processes information.
⭕️ The problem
Even frontier models like GPT-5 struggle with massive documents. As inputs grow, performance collapses.
⭕️ The breakthrough
Recursive Language Models (RLMs). They don't cram everything into …
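The post breaks off before the mechanism, so the following is only a generic illustration of the recursive idea (call the model on pieces of an oversized input, then on the combined partial results), not the paper's exact method; `llm` is a stub standing in for any completion call.

```python
def llm(prompt: str) -> str:
    return f"<answer based on {len(prompt)} chars>"   # placeholder model call

def recursive_answer(question: str, document: str, chunk_size: int = 4000) -> str:
    if len(document) <= chunk_size:
        return llm(f"{question}\n\n{document}")       # small enough: one normal call
    # Too big for one call: split, answer each piece, then recurse on the combined partials.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    partials = [recursive_answer(question, c, chunk_size) for c in chunks]
    return recursive_answer(question, "\n".join(partials), chunk_size)

print(recursive_answer("What changed in Q3?", "quarterly report text " * 2000))
```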
@leadgenmanthan · 15 days
LLM vs. LCM vs. LAM vs. MoE vs. VLM vs. SLM vs. MLM vs. SAM
@leadgenmanthan · 16 days
Holy Shit! MIT opened the "brains" of 60 scientific AI models and found they're all converging on the same internal picture of matter. Different senses. Different training. One shared reality.
What MIT did:
Opened up the latent spaces of ~60 scientific AI models
Compared …
@leadgenmanthan · 17 days
Only playbook you need for 2026
@leadgenmanthan · 19 days
The gains didn't come from scale. They came from training the model to behave like software.
The Real Implication:
If your system calls APIs, orchestrates workflows, runs in production, or has cost or latency constraints, then throwing a bigger model at it is lazy engineering.
@leadgenmanthan · 19 days
Smaller models, when fine-tuned, suppress verbosity and hallucination by default. This is alignment through capacity constraint.
The Training Setup Is the Point:
Model: OPT-350M
Data: ToolBench (~187k examples)
Method: 1 epoch of supervised fine-tuning
That's it.
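A minimal sketch of the recipe described above (one epoch of supervised fine-tuning of OPT-350M), assuming Hugging Face transformers and plain PyTorch; the two placeholder records stand in for the ~187k ToolBench-style tool-call demonstrations, whose real format differs.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

examples = [  # placeholder demonstrations: instruction -> exact tool call
    "User: weather in Paris?\nCall: get_weather(city='Paris')",
    "User: 2 + 2?\nCall: calculator(expression='2+2')",
]

def collate(batch):
    enc = tok(batch, return_tensors="pt", padding=True, truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].clone()              # standard causal-LM SFT loss
    enc["labels"][enc["attention_mask"] == 0] = -100      # no loss on padding
    return enc

loader = DataLoader(examples, batch_size=2, shuffle=True, collate_fn=collate)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:            # exactly one pass over the data = 1 epoch
    loss = model(**batch).loss
    loss.backward()
    optim.step()
    optim.zero_grad()
```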
@leadgenmanthan · 19 days
Large models are optimized to explain, hedge, and generate plausible continuations. Agents need to choose the right tool, format arguments exactly, follow strict control flow, and stop when done. The paper shows that general capability actively hurts tool reliability.