Mritunjay | AIwithMJ

@AIwithMJ

Followers: 55 · Following: 87K · Media: 1 · Statuses: 351

Learning AI daily — sharing what I learn in simple words. 🤖 Chai-fueled curiosity ☕ Automation → AI Engineering 🚀 Tools, roadmaps & real projects 📚

Bengaluru, India
Joined January 2018

@AIwithMJ · 10 days
🚀 Hi, I’m MJ! Learning AI in Public.
I’m transitioning from Automation → AI Engineering, and I’ll be sharing everything I learn along the way:
☕ Daily chai-fueled insights
🤖 LLMs, Agents & AI tools
📚 Roadmaps, notes & real projects
💡 Wins, mistakes & learning progress
If …

@AIwithMJ · 9 hours
Rewriting the query has fixed retrieval more often for me than changing the index.

@AIwithMJ · 9 hours
🧠 Day 10 — Your query shapes what retrieval can find
This one was unintuitive for me.
Retrieval doesn’t start at the vector DB. It starts at the query.
If the query is:
• vague
• underspecified
• phrased like a human thought
…the retriever has very little to work with.
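
A small way to see this: the same toy retriever scoring a vague query versus a rewritten one. This is my own illustration, not from the post; in a real pipeline the rewrite step is usually a quick LLM call, but here it is hard-coded so the example runs on its own.

```python
# Toy sketch: retrieval quality depends on the query you hand the retriever.
# Bag-of-words cosine stands in for real embeddings.
from collections import Counter
import math

DOCS = [
    "Export jobs time out when the report has more than 10,000 rows.",
    "Invoices fail to generate if the PDF template is missing a logo.",
    "Users can reset passwords from the account settings page.",
]

def bow(text: str) -> Counter:
    return Counter(text.lower().replace(",", "").replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> tuple[float, str]:
    q = bow(query)
    return max(((cosine(q, bow(d)), d) for d in DOCS), key=lambda x: x[0])

print(retrieve("it's slow sometimes"))
# score 0.0 -> a vague complaint gives the retriever nothing to match

print(retrieve("report export job times out over 10,000 rows"))
# high score -> the rewritten, specific query pulls the export-timeout doc
```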

@AIwithMJ · 1 day
Changing chunk size has fixed more RAG bugs for me than swapping models.

@AIwithMJ · 1 day
🧠 Day 9 — Chunking decides what your AI can see
This was another quiet “oh” moment.
Retrieval doesn’t happen on documents. It happens on chunks.
If chunks are:
• too big → you get noise
• too small → you lose context
• poorly split → you retrieve the wrong thing
No …
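
The mechanics of chunking are simple; choosing the size and overlap is the hard part. A minimal sketch of fixed-size chunking with overlap (the numbers are arbitrary examples):

```python
# Toy sketch: fixed-size chunking with overlap. The overlap keeps a sentence
# that straddles a boundary visible in at least one chunk.

def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "some long document text " * 200  # stand-in for a real document
pieces = chunk(doc, size=500, overlap=100)
print(len(pieces), "chunks of up to 500 characters each")
```

Most real pipelines split on sentences or headings rather than raw characters, but the size/overlap trade-off is the same.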

@AIwithMJ · 2 days
Most RAG issues I’ve debugged weren’t downstream — they were decided at retrieval time.

@AIwithMJ · 2 days
🧠 Day 8 — Why “similar” doesn’t always mean “useful”
This part surprised me.
Vector search doesn’t retrieve answers. It retrieves what’s mathematically close. That’s not the same thing.
Two chunks can be “similar” because:
• same keywords
• same topic
• similar phrasing
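
A tiny illustration of that gap (my own example; keyword overlap stands in for real embeddings here, though dense vectors can fail the same way): the chunk that merely mentions the right words can outscore the chunk that actually answers.

```python
# Toy sketch: "mathematically close" is not the same as "answers the question".
import re

query = "how do I reset my password"

answers_it   = "Go to Settings, open Security, then choose Reset password."
just_similar = "How do I pick a strong password? Do not reuse your old password."

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap(a: str, b: str) -> int:
    return len(tokens(a) & tokens(b))

print(overlap(query, answers_it))    # 2: shares "reset", "password"
print(overlap(query, just_similar))  # 4: shares "how", "do", "i", "password" - yet answers nothing
```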

@AIwithMJ · 3 days
Vector DBs clicked for me when I realized they’re not databases in the traditional sense — they’re search engines for meaning.

@AIwithMJ · 3 days
🧠 Day 7 — Vector databases (where embeddings actually live)
I used to think embeddings were the “smart part.” Turns out, storage + retrieval matter just as much.
Embeddings by themselves are just numbers. Vector databases are what make them useful.
What they actually do:
• …
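
Stripped of production features (approximate indexes, filters, persistence), the core of a vector database is small. A toy in-memory version, just to show the shape of add and search; the vectors are made up:

```python
# Toy sketch of what a vector store does: keep (text, vector) pairs and
# return the texts whose vectors are closest to a query vector.
import math

class TinyVectorStore:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str, vector: list[float]) -> None:
        self.items.append((text, vector))

    def search(self, query_vec: list[float], k: int = 3) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.items, key=lambda item: cosine(query_vec, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = TinyVectorStore()
store.add("chunk about refunds", [0.9, 0.1, 0.0])   # made-up 3-dim vectors
store.add("chunk about pricing", [0.1, 0.8, 0.2])
print(store.search([0.85, 0.15, 0.0], k=1))          # -> ['chunk about refunds']
```

Real vector databases add approximate nearest-neighbour indexes on top so this search stays fast over millions of vectors.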

@AIwithMJ · 4 days
What finally clicked for me was realizing RAG isn’t about making models smarter — it’s about making them less wrong. Curious how others think about this.

@AIwithMJ · 4 days
🧠 Day 6 — RAG (why embeddings alone aren’t enough)
I used to think embeddings = memory. They’re not.
Embeddings help AI understand similarity. RAG is what actually gives AI useful memory.
RAG, simply:
• embed your data
• retrieve only the most relevant pieces
• inject …
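
Those three steps fit in a few lines. A minimal sketch in plain Python (my illustration: keyword overlap stands in for embeddings, and the model call is left as a placeholder rather than any specific API):

```python
# Toy RAG loop: retrieve the most relevant chunks, then inject them into the prompt.

DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "The free tier allows 1,000 API calls per month.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
]

def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("how long do refunds take")
print(prompt)
# answer = call_llm(prompt)  # placeholder: plug in whichever model API you use
```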

@AIwithMJ · 5 days
It still blows my mind that AI represents meaning using pure math. Understanding embeddings makes everything else in AI click.

@AIwithMJ · 5 days
⭐ DAY 5 — Embeddings (Explained in Human Words) 🧠
If AI had a map of meaning, embeddings are the coordinates.
Here’s the simplest way to understand them:
• AI can’t “read” words the way we do
• So it turns text into vectors—lists of numbers that represent meaning
• Words …
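
To make "coordinates of meaning" concrete, here is a small sketch assuming the sentence-transformers library is installed (all-MiniLM-L6-v2 is just one small, free embedding model; any other would do):

```python
# Sketch: turn sentences into vectors, then compare them with cosine similarity.
# Assumes `pip install sentence-transformers`.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How do I reset my password?",
    "Steps to recover account access",
    "Best chai stalls in Bengaluru",
]
vectors = model.encode(sentences)  # one vector (a list of numbers) per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # higher: same meaning, different words
print(cosine(vectors[0], vectors[2]))  # lower: unrelated
```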

@AIwithMJ · 6 days
Bigger context isn’t just about size — it’s about better memory and better reasoning. The more I learn, the more this piece makes everything click.

@AIwithMJ · 6 days
⭐ DAY 4 — CONTEXT WINDOWS (Explained Simply) 🧠
What Is a Context Window in AI? (Explained Simply)
If parameters are the model’s long-term memory, the context window is its short-term memory — everything the AI can “see” at one time.
That includes:
• your prompt
• …
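
One way this shows up in code is a token budget: the system prompt, the conversation history, and anything you retrieve all have to fit inside the window. A rough sketch (my illustration, using a crude words-to-tokens estimate instead of a real tokenizer):

```python
# Sketch: trim chat history so prompt + history + reply fit the context window.
# The 1.3 tokens-per-word ratio is a rough estimate, not a real tokenizer.

CONTEXT_WINDOW = 8_000       # example window size, in tokens
RESERVED_FOR_REPLY = 1_000   # leave room for the model's answer

def rough_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)

def fit_history(system_prompt: str, history: list[str]) -> list[str]:
    budget = CONTEXT_WINDOW - RESERVED_FOR_REPLY - rough_tokens(system_prompt)
    kept: list[str] = []
    for message in reversed(history):      # newest messages first
        cost = rough_tokens(message)
        if cost > budget:
            break                          # older messages fall out of "memory"
        kept.append(message)
        budget -= cost
    return list(reversed(kept))
```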

@AIwithMJ · 7 days
Breaking it down day by day makes learning AI way less overwhelming. Next up tomorrow: context window.

@AIwithMJ · 7 days
⭐ DAY 3 POST — PARAMETERS EXPLAINED SIMPLY 🔧
What Are Parameters in AI? (Explained Simply)
If tokens are the words, parameters are the model’s “memory.” They’re tiny numerical values a model learns during training — and they decide how well the AI understands, reasons, and …
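
Counting them for a toy network makes the idea concrete. A small sketch with made-up layer sizes: a dense layer with n inputs and m outputs learns n*m weights plus m biases.

```python
# Sketch: parameter count of a tiny feed-forward stack.
# Each dense layer: (inputs * outputs) weights + outputs biases.

def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

layers = [(512, 2048), (2048, 2048), (2048, 512)]  # made-up layer sizes
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(f"{total:,} parameters")  # 6,296,064 for this toy stack
```

A "7B" model is the same idea scaled up: roughly seven billion of these learned numbers.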

@AIwithMJ · 8 days
Breaking things down one concept at a time. Tomorrow: parameters.

@AIwithMJ · 8 days
🔍 What Are Tokens in AI? (Explained Simply)
If you're learning AI like me, you’ve probably seen the word “tokens” everywhere. Here’s the simplest way to understand it 👇
A token is just a small piece of text. It can be:
• a word
• part of a word
• punctuation
• even a space
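
A quick way to see tokens for yourself, assuming the tiktoken library is installed (one real tokenizer; the exact splits vary between models):

```python
# Sketch: inspect how a sentence breaks into tokens.
# Assumes `pip install tiktoken`; cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Chai-fueled curiosity!"
token_ids = enc.encode(text)

print(len(token_ids), "tokens")
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))  # see how words split into pieces
```

As a rough rule of thumb, 100 tokens is about 75 English words, which is why model pricing and context limits are quoted in tokens.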

@AIwithMJ · 9 days
Staying consistent. Day 1 starts now.

@AIwithMJ · 9 days
🚀 Why I'm Learning AI in 2025 👇
AI isn’t slowing down — it’s accelerating.
I’m transitioning from automation → AI engineering, and here’s why:
1️⃣ AI is becoming a real skill advantage
People who understand LLMs, tools, and agents can build faster than ever.
2️⃣ AI is …