Manish Kumar Shah
@manishkumar_dev
Followers: 55K · Following: 48K · Media: 9K · Statuses: 49K
AI Enthusiast 🤖 | AI & Tech Content Creator 👨‍💻 | Sharing Latest AI Tools ⚡|350K+ LinkedIn & Instagram Community 🚀 | DM for Promotion 📩
Bhopal, Madhya Pradesh
Joined November 2023
Big news for devs and creators 🚀 https://t.co/xPNtbXzzrL just opened early access to GLM-4.6V, the next-generation multimodal model that finally connects vision to real execution. Built for real-world workflows where images, documents, video, and code work together seamlessly
56 · 44 · 86
I just tried the new AI video model Kling O1 and honestly it makes most other video apps feel outdated. Subjects stay consistent, edits feel natural, and scene transitions finally look smooth instead of stitched together. Here’s how it actually works with real examples 👇
56 · 43 · 106
🧠GlobalGPT has officially unlocked Black Friday. Gemini 3, GPT-5.1, Nano Banana Pro, Sora 2 pro—all live, all discounted. Creators now get unlimited AI access with up to 50% OFF. I don’t know if it can make you a genius in a day, but it can definitely make your work look
31 · 19 · 75
🚨 Doctly just reinvented how PDFs actually get used. No more broken OCR text. No more missing tables or scrambled layouts. No more manual copy-pasting from complex documents. Real documents now become clean, structured data – ready for workflows, databases, and AI systems.
64 · 49 · 141
LLaDA2.0 shows how powerful diffusion can really become at scale. A 100B model that runs faster, thinks deeper, and stays fully open for everyone. Feels like a true shift in how the next generation of AI will be built.
🧬 Introducing LLaDA2.0, scaled to 100B for the first time as a Discrete Diffusion LLM (dLLM)! Featuring 16B (mini) and 100B (flash) MoE versions. With 2.1x faster inference than AR models and superior performance in Code, Math, and Agentic tasks, we prove that at scale,
0 · 1 · 5
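For intuition on the quoted 2.1x inference claim: a discrete diffusion LLM denoises many masked positions in parallel per step, while an autoregressive model emits one token per step. A toy scheduling sketch — no real model here, the numbers are illustrative, and actual speedups also depend on per-step compute cost:

```python
import math

def autoregressive_decode_steps(seq_len: int) -> int:
    """An AR decoder emits exactly one token per sequential step."""
    return seq_len

def diffusion_decode_steps(seq_len: int, tokens_per_step: int) -> int:
    """A discrete diffusion decoder that unmasks `tokens_per_step`
    positions in parallel needs far fewer sequential steps."""
    return math.ceil(seq_len / tokens_per_step)

# A 256-token completion, unmasking 8 positions per denoising step:
print(autoregressive_decode_steps(256))  # 256 sequential steps
print(diffusion_decode_steps(256, 8))    # 32 sequential steps
```

The wall-clock ratio is smaller than the step ratio because each diffusion step re-scores the whole sequence, which is where reported end-to-end numbers like 2.1x come from.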
Inworld making their number-one-rated TTS free is a genuine gift to builders everywhere. If this is the holiday surprise, next week's "redefine number one" moment is going to be special.
We're making Inworld TTS free until the end of the year (!) We were feeling in the holiday spirit today, and after seeing the community rate our TTS at #1 on leaderboards and help us grow 100% week on week, we wanted to gift something and give every builder the chance to try
8 · 5 · 21
This feels like a real turning point for creators. If AI can decode viral structure and rebuild it in your own style, everything changes. Buzzy is shaping a future where virality becomes a repeatable system, not a lucky moment.
In 5 days, 9 industries will be killed. Why? For the first time, AI will learn what "virality" is. Traffic doesn't come from "better video quality"; it comes from the viral sense behind it. And the first step for AI is to understand the viral structure, then remix in your own
17 · 9 · 27
🚨 SciSpace just released an AI upgrade that truly understands biomedical research. No more stitching together PubMed, ClinVar, omics tools, CRISPR design platforms, PDF libraries, and diagram software. Biomedical research now happens in ONE place. Here’s why this is a total
59 · 43 · 121
We ordered 50 mascot shots. The studio delivered a bill for $50K and a Santa who looked like he’d been generated by every cousin on Earth. 💀 PixVerse V5.5 fixed the chaos in minutes. Here’s the exact same Santa… but actually iconic:👇
33 · 18 · 69
Kling 2.6 really leveled up 🚀 It leaves 2.5 Turbo behind with full audio-visual output in a single shot. The upgrade already feels powerful.
57 · 37 · 108
Meet GLM-4.6V by @Zai_org – the powerful multimodal model family built to see, reason, and execute, with native Function Calling support and a massive 128k-token context window. You show it an image, document, UI, or video → GLM-4.6V understands → reasons → takes action.
0 · 0 · 8
GLM-4.6V doesn’t just see content. It understands it, reasons through it, and acts on it. Vision becomes execution. If you’re building agents, research workflows, document automation, video analysis tools, or front-end systems, GLM-4.6V gives you one unified multimodal base to
1 · 0 · 6
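The see → understand → act loop above boils down to one request that carries both pixels and a callable tool. A minimal sketch of such a payload in the common OpenAI-style multimodal schema — the model id `glm-4.6v` and the `create_ticket` tool are placeholders for illustration, not Z.ai's documented API:

```python
import base64
import json

def build_request(image_bytes: bytes, question: str) -> dict:
    """Assemble a chat request pairing an image with a text prompt
    and one callable tool, so the model can both describe the image
    and act on it via a function call."""
    image_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": "glm-4.6v",  # placeholder model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "create_ticket",  # hypothetical tool
                "description": "File a ticket for an issue found in the image",
                "parameters": {
                    "type": "object",
                    "properties": {"summary": {"type": "string"}},
                    "required": ["summary"],
                },
            },
        }],
    }

req = build_request(b"\x89PNG...", "What UI bug do you see? File a ticket.")
print(sorted(req.keys()))  # prints ['messages', 'model', 'tools']
```

If the model decides the image warrants action, the response would carry a `create_ticket` tool call instead of plain text — that handoff is what "vision becomes execution" means in practice.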
5. UI Replication to Production Code Upload any UI screenshot or design mockup. GLM-4.6V recreates it as high-fidelity HTML, CSS, and JS with: • Accurate layouts and gradients • Dark-mode support • Modular components • Fully responsive behavior From screenshot →
1 · 0 · 7
4. Video Understanding for Real Learning Drop in tutorial or interview videos. GLM-4.6V: • Breaks content into chapters • Summarizes key insights • Extracts on-screen text and product mentions • Generates structured learning notes It also deconstructs storytelling and
1 · 0 · 8