Manthan Patel | Lead Gen Man
@leadgenmanthan
Followers: 233 · Following: 2K · Media: 175 · Statuses: 670
I teach AI Agents & Automation | Lead Generation | 100K+ students · 200K+ community
Gujarat, India
Joined April 2016
this is what the anti-ai crowd hates on btw
95% of AI projects fail because they can’t handle uncertainty. This MIT blueprint shows how to fix that. You don’t need to read 700 pages. Here’s everything you need to know. Algorithms for Decision Making (MIT Press, 2022) is the manual for building systems that decide under uncertainty.
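Not from the book, but to make "decide under uncertainty" concrete: a toy value-iteration loop over a two-state MDP. The states, actions, rewards, and transition probabilities are invented for the example.

```python
# Toy illustration of deciding under uncertainty: value iteration on a tiny MDP.
# States, actions, transition probabilities, and rewards are made up for this sketch.
GAMMA = 0.9
STATES = ["low_battery", "ok_battery"]
ACTIONS = ["recharge", "work"]

# P[(state, action)] -> list of (probability, next_state, reward)
P = {
    ("low_battery", "recharge"): [(1.0, "ok_battery", 0.0)],
    ("low_battery", "work"):     [(0.5, "low_battery", 1.0), (0.5, "ok_battery", -10.0)],
    ("ok_battery", "recharge"):  [(1.0, "ok_battery", 0.0)],
    ("ok_battery", "work"):      [(0.8, "ok_battery", 1.0), (0.2, "low_battery", 1.0)],
}

V = {s: 0.0 for s in STATES}
for _ in range(100):  # repeated Bellman backups
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)])
            for a in ACTIONS
        )
        for s in STATES
    }

policy = {
    s: max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)]))
    for s in STATES
}
print(V, policy)
```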
For years, LLM memory was a nightmare: either forget everything or choke on tokens. SimpleMem ends it. It compresses conversations into meaning, turns repeats into habits, and retrieves only what matters. This is real long-term memory!
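The tweet doesn’t show SimpleMem’s internals, so this is only a rough sketch of the general pattern it describes: compress past turns into short memories, index them, and retrieve the few that matter for the current query. summarize(), embed(), and the stored fields are placeholders, not SimpleMem’s API.

```python
# Sketch of compress-then-retrieve memory (not SimpleMem's code).
import numpy as np

def summarize(turns: list[str]) -> str:
    # Placeholder: in practice, ask an LLM to compress the turns into one fact-dense line.
    return " / ".join(t[:60] for t in turns)

def embed(text: str) -> np.ndarray:
    # Placeholder: in practice, call an embedding model; here, a cheap hashing trick.
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

memories: list[tuple[str, np.ndarray]] = []

def remember(turns: list[str]) -> None:
    s = summarize(turns)
    memories.append((s, embed(s)))

def recall(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    ranked = sorted(memories, key=lambda m: float(q @ m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

remember(["User: my deployment region is eu-west-1", "Assistant: noted"])
remember(["User: I prefer answers as bullet lists", "Assistant: will do"])
print(recall("which AWS region does the user deploy to?"))
```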
Want higher-quality AI results? Have it map the state machine. Fewer assumptions, more correctness
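One way to apply that tip, sketched below with an invented task and a placeholder send() call: make the model enumerate states, transitions, and terminal states before it writes anything else.

```python
# Prompt-level sketch of "map the state machine first"; TASK and send() are placeholders.
TASK = "Implement retry logic for a payment webhook handler."

PROMPT = f"""Before writing any code, map the state machine for this task:
1. List every state (e.g. received, validated, retrying, failed, succeeded).
2. List every transition and what triggers it.
3. List the terminal states.
Then, and only then, write the implementation.

Task: {TASK}"""

def send(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

# answer = send(PROMPT)
```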
Google just revealed a ridiculously powerful LLM hack: repeat the exact prompt twice. Same tokens. Same latency. Same outputs. 47/70 benchmarks improved. Zero regressions. One copy-paste. Noticeably smarter models.
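Mechanically, the trick is just string duplication before the API call. A minimal sketch; the client call is left as a comment since the tweet doesn’t tie it to a specific SDK.

```python
# Send the same prompt twice in a single request.
def duplicate_prompt(prompt: str) -> str:
    return prompt + "\n\n" + prompt  # same instructions, stated twice

question = "List three failure modes of retry storms in distributed systems."
doubled = duplicate_prompt(question)
# response = client.chat.completions.create(model="...", messages=[{"role": "user", "content": doubled}])
```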
AI will do everything for you. But only if you tell it what to do. Garbage prompt = garbage output. Clear prompt = unfair advantage. 13 free guides to master prompting. Learn the skill that controls the machine. Comment "guide" and I will send you all the details
HOLY SHIT! DeepSeek-R1 is the AI That Thinks Like a FREAKING Genius. 86-page paper. 1-minute summary. You’re welcome :) Open-source AI that self-learns deep reasoning (math/coding Olympiad level). Skip human answers → Pure RL on 671B MoE model → Discovers better thinking
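"Skip human answers → pure RL" only works if the reward can be computed automatically. Below is a rough sketch of a rule-based verifier reward for math-style problems; the <answer> tag convention is an assumption for illustration, not DeepSeek’s exact format.

```python
# Rule-based reward for RL on reasoning: no human-written answers, just a verifier
# that checks the model's final answer against ground truth.
import re

def reward(completion: str, ground_truth: str) -> float:
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if not match:
        return 0.0                       # malformed output earns nothing
    predicted = match.group(1).strip()
    return 1.0 if predicted == ground_truth.strip() else 0.0

print(reward("... so 12 * 7 = 84. <answer>84</answer>", "84"))  # 1.0
```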
Holy shit! Everyone on the internet is talking about "MoE". Here's everything you need to know. No BS! No fluff!
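For the need-to-know part in code: a bare-bones Mixture-of-Experts layer in PyTorch, where a router picks the top-k experts per token and mixes their outputs. Sizes are arbitrary, and real MoE layers add load-balancing losses, capacity limits, and so on.

```python
# Minimal top-k MoE layer: a router chooses experts per token and weights their outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, d_model)
        logits = self.router(x)                  # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):           # only the chosen experts run per token
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

print(MoELayer()(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```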
Task-specific fine-tuning vs. Domain-specific fine-tuning
MIT just released research that changes how AI processes information. ⭕️ The problem: Even frontier models like GPT-5 struggle with massive documents. As inputs grow, performance collapses. ⭕️ The breakthrough: Recursive Language Models (RLMs). They don’t cram everything into one context window.
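The paper’s exact mechanism isn’t in the tweet, so this is only a sketch of the recursive idea: split the document, query each piece, and recurse on the combined intermediate results instead of stuffing everything into one context window. llm() is a placeholder for any chat-completion call.

```python
# Recursive processing of a document too long for one context window (illustrative only).
CHUNK_CHARS = 8_000   # stand-in for the model's context budget

def llm(prompt: str) -> str:
    raise NotImplementedError("call your model here")

def answer_over(document: str, question: str) -> str:
    if len(document) <= CHUNK_CHARS:
        return llm(f"Context:\n{document}\n\nQuestion: {question}")
    chunks = [document[i:i + CHUNK_CHARS] for i in range(0, len(document), CHUNK_CHARS)]
    partials = [llm(f"Extract everything relevant to '{question}':\n{chunk}") for chunk in chunks]
    # Recurse: the concatenated partial answers form a much smaller new document.
    return answer_over("\n".join(partials), question)
```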
LLM vs. LCM vs. LAM vs. MOE vs. VLM vs. SLM vs. MLM vs. SAM
Holy Shit! MIT opened the “brains” of 60 scientific AI models and found they’re all converging on the same internal picture of matter. Different senses. Different training. One shared reality. What MIT did: Opened up the latent spaces of ~60 scientific AI models and compared them.
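The methodology described (compare latent spaces across models) can be illustrated with a standard representation-similarity metric such as linear CKA; whether MIT used CKA specifically isn’t stated in the tweet, and the embeddings below are random stand-ins.

```python
# Linear CKA between two models' embeddings of the same inputs; high CKA means
# the two latent spaces encode the same structure up to a linear transform.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    # X: (n_samples, d1), Y: (n_samples, d2); rows are the same inputs embedded by two models
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return float(hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 128))           # embeddings from "model A"
B = A @ rng.normal(size=(128, 64))        # a linear transform of A: high similarity
print(linear_cka(A, B), linear_cka(A, rng.normal(size=(200, 64))))
```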
The gains didn’t come from scale. They came from training the model to behave like software. The Real Implication: If your system calls APIs, orchestrates workflows, runs in production, or has cost or latency constraints, then throwing a bigger model at it is lazy engineering.
Smaller models, when fine-tuned, suppress verbosity and hallucination by default. This is alignment through capacity constraint. The Training Setup Is the Point: Model: OPT-350M · Data: ToolBench (~187k examples) · Method: 1 epoch of supervised fine-tuning. That’s it.
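A minimal sketch of that recipe with Hugging Face transformers, assuming a local JSONL export of ToolBench with prompt/completion fields (the file name and field names are assumptions, as are the batch size and learning rate).

```python
# 1 epoch of supervised fine-tuning of OPT-350M on tool-call examples.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

ds = load_dataset("json", data_files="toolbench.jsonl", split="train")  # assumed local export

def to_tokens(ex):
    text = ex["prompt"] + ex["completion"]            # assumed field names
    return tok(text, truncation=True, max_length=1024)

ds = ds.map(to_tokens, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("opt350m-toolbench-sft", num_train_epochs=1,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM objective
)
trainer.train()
```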
Large models are optimized to explain, hedge, and generate plausible continuations. Agents need to choose the right tool, format arguments exactly, follow strict control flow, and stop when done. The paper shows that general capability actively hurts tool reliability.
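What "format arguments exactly and follow strict control flow" looks like on the receiving end: validate every tool call against a schema before executing it, and reject anything malformed rather than interpreting it. The tool registry and JSON shape below are invented for illustration.

```python
# Strict tool-call validation: exact JSON, known tool, required fields, no extras.
import json

TOOLS = {
    "get_weather": {"required": {"city": str}, "optional": {"unit": str}},
    "send_email":  {"required": {"to": str, "subject": str, "body": str}, "optional": {}},
}

def validate_call(raw: str) -> tuple[str, dict]:
    call = json.loads(raw)                                  # must be valid JSON, no prose
    name, args = call["name"], call["arguments"]
    spec = TOOLS[name]                                      # unknown tool -> KeyError
    for field, typ in spec["required"].items():
        if not isinstance(args.get(field), typ):
            raise ValueError(f"{name}: missing or mistyped required field '{field}'")
    extra = set(args) - set(spec["required"]) - set(spec["optional"])
    if extra:
        raise ValueError(f"{name}: unexpected fields {extra}")
    return name, args

print(validate_call('{"name": "get_weather", "arguments": {"city": "Pune"}}'))
```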