Rishabh Srivastava
@rishdotblog
Followers
12K
Following
15K
Media
642
Statuses
4K
Co-Founder @factiq (YC W23)
Singapore
Joined September 2011
I genuinely think we built the best search engine for official economic data. Been working on this for 6 months. We spent ~$100k in tokens to structure economic data and make it easier to search. It answers economic data questions really well. From "What has been the actual impact of
Excited to launch FactIQ today! 🚀 We just indexed 7.4M+ official US data series to build the ultimate economic research agent. Visualize trends instantly. Verify every source. Export charts for your reports. Free for the next week - try it out at factiq[dot]com!
19
7
141
Vibe coded an analytics dashboard for FactIQ. Way prefer it to any third party tool - could just use my existing DB and logs - easy to set up custom features (custom funnels/filters/event replays) 30 mins to iterate end to end. SaaS internal tools are done for
2
0
9
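The custom-funnel claim above is easy to make concrete: a funnel over raw event logs really is a few dozen lines. A minimal sketch, assuming events arrive as `(user_id, event_name, timestamp)` rows (the event names and shapes here are hypothetical, not FactIQ's actual schema):

```python
from collections import defaultdict

def funnel(events, steps):
    """Count how many users reach each step of `steps`, in order.

    events: iterable of (user_id, event_name, timestamp) tuples.
    Returns a list of user counts, one per funnel step.
    """
    # Group each user's event names in timestamp order.
    by_user = defaultdict(list)
    for user, name, ts in sorted(events, key=lambda e: e[2]):
        by_user[user].append(name)

    counts = [0] * len(steps)
    for names in by_user.values():
        i = 0  # index of the next funnel step this user must hit
        for name in names:
            if i < len(steps) and name == steps[i]:
                i += 1
        # User reached steps 0..i-1, so increment each of those counts.
        for step_reached in range(i):
            counts[step_reached] += 1
    return counts

events = [
    ("u1", "visit", 1), ("u1", "signup", 2), ("u1", "query", 3),
    ("u2", "visit", 1), ("u2", "signup", 5),
    ("u3", "visit", 2),
]
print(funnel(events, ["visit", "signup", "query"]))  # [3, 2, 1]
```

Filters and event replays fall out of the same pattern: the raw rows are already in your own DB, so each "feature" is one more query or loop.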
In 2000, the S&P 500 had an elevated trailing P/E of 26, well above its long-term avg. of 14. For the next 11 years, the S&P 500 delivered zero returns! The current S&P 500 trailing P/E is 30 – even higher than 2000. History rhymes. For standardized performance visit our website.
0
19
118
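A quick sanity check on the multiple-compression arithmetic in that tweet: for prices to stay flat over 11 years while the trailing P/E falls from 26 back toward 14, earnings have to grow enough to exactly offset the compression, roughly 5.8% a year:

```python
# Price P = (P/E) * E. If P stays flat while the multiple compresses
# from pe_start to pe_end over `years`, EPS must grow by the inverse ratio.
pe_start, pe_end, years = 26, 14, 11
required_eps_growth = (pe_start / pe_end) ** (1 / years) - 1
print(f"{required_eps_growth:.1%}")  # ~5.8% annual EPS growth for flat prices
```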
Launching something new today. Thought I had everything covered and could have a chill launch week. Then, found tons of bugs and this happened
0
0
6
Been taking Opus 4.5 for a spin. Opus 4.5 + Claude Code is super worth it for planning, but I still prefer Codex for actual coding and reliability.
Opus +ves:
- Great at using web search to get info that it needs
- Thoroughly explores the codebase
- Creates fairly concise plans
2
2
19
Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow—not just the final weights, but the entire training journey. Best fully open 32B reasoning model & best 32B base model. 🧵
55
328
2K
I like the new Codex Max – but it's extremely emotionally challenged when writing frontend copy 😅 It's also meh at design. Very, very good at verifiable tasks (esp. on the backend) though!
2
0
2
Huh, it's somehow gone to shit in the last 30 minutes. Guess they're still figuring out how to handle more traffic w/o compromising quality
Gemini Pro 3 + Antigravity is very good. Antigravity still has janky UX – but its capabilities more than make up for it. Handles major refactors and large codebases extremely well. Gemini's long-context supremacy is really shining through here.
2
0
4
My (non-technical) cofounder, last month: "I can push features and fix bugs myself - this is incredible!" Today: "Sheesh Codex is so slow. I waste so much time just waiting for tokens" The AI hedonic treadmill is so real.
1
1
16
ChatGPT has (finally) started taking credit for the thankless work it does 😅
4
0
8
Added `grok-4-fast` to my agentic data analysis benchmark – super cheap, super fast, super good
Haiku 4.5 hits a sweet spot for agentic data analysis workflows. Super nice blend of low cost, low latency, and high-quality outputs. I found it better than gpt-5. Will try to publish proper evals if I can find the time!
2
0
5
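The kind of benchmark these model tweets refer to can be sketched as a task set with deterministic checks, so any model or agent gets scored the same way. Everything below is illustrative (the tasks and the `run_agent` stub are hypothetical, not the actual eval):

```python
# Minimal shape of a verifiable data-analysis eval: each task pairs a
# question with a deterministic check on the agent's answer string.
TASKS = [
    {"question": "mean of [1, 2, 3, 4]",
     "check": lambda ans: abs(float(ans) - 2.5) < 1e-9},
    {"question": "max of [7, 3, 9]",
     "check": lambda ans: float(ans) == 9},
]

def evaluate(run_agent, tasks=TASKS):
    """run_agent: callable(question) -> answer string. Returns pass rate."""
    passed = sum(1 for t in tasks if t["check"](run_agent(t["question"])))
    return passed / len(tasks)

# Stub agent standing in for a real model call:
stub = {"mean of [1, 2, 3, 4]": "2.5", "max of [7, 3, 9]": "9"}
print(evaluate(lambda q: stub[q]))  # 1.0
```

Because the checks are programmatic, swapping in `grok-4-fast`, Haiku 4.5, or gpt-5 is just a different `run_agent`, which is what makes cost/latency/quality comparisons across models apples-to-apples.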
You're doing yourself a disservice if you still haven't used Codex. It worked uninterrupted for 35 mins on a super complex task - and got it right first try. Quite nuts - it's already a much better programmer than me (for verifiable tasks).
96
35
917
Man OpenAI killed it this DevDay. Tons of startups will have to pivot as a result of this. "Ride the waves caused by constant churn" seems to be the only viable strategy for an early stage co moving forward 😅
2
1
10
Jfc, Sonnet 4.5 is so obnoxiously, confidently wrong – especially if you've gotten used to GPT-5. Was figuring out how to dump pgvector into a table via psycopg3, using a COPY BINARY method. It led me down a wild goose chase (link below); GPT-5 just gave the right answer.
1
0
3
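For anyone hitting the same pgvector + psycopg (v3) COPY question: the text COPY path sidesteps the binary wire-format pitfalls entirely, since pgvector accepts its `[x,y,z]` text literal as input. A hedged sketch, assuming a hypothetical `items (id, embedding vector)` table and using only psycopg 3's documented copy API (`cursor.copy` / `Copy.write_row`):

```python
# Bulk-loading a pgvector column via COPY in text format with psycopg 3.
# (COPY BINARY would require emitting pgvector's binary wire format;
# the text format just takes the '[...]' literal.)

def to_pgvector_literal(vec):
    """Render a list of floats as pgvector's text input, e.g. '[1.0,2.0]'."""
    return "[" + ",".join(repr(float(x)) for x in vec) + "]"

def bulk_load(conn, rows):
    """rows: iterable of (id, embedding_list) pairs.
    conn: an open psycopg connection. Table/column names are hypothetical."""
    with conn.cursor() as cur:
        with cur.copy("COPY items (id, embedding) FROM STDIN") as copy:
            for id_, vec in rows:
                copy.write_row((id_, to_pgvector_literal(vec)))

print(to_pgvector_literal([0.1, 0.2]))  # [0.1,0.2]
```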
Using gpt-5-nano was way faster though! Took just ~2 hours with high concurrency. The RTX5090 took ~22 hours
0
0
2
As more tasks become token-intensive, local LLMs on fast consumer GPUs start to make more sense (just for $ savings). Just finished a text extraction run over ~1 million docs. Cost ~$120 with gpt-5-nano, but only ~$2 (electricity) with similar perf using Qwen3-4B-FP8 on my RTX5090.
1
0
6
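The unit economics in this thread, spelled out with the numbers from the two tweets above (cost and wall-clock figures are the approximate ones quoted, not independent measurements):

```python
docs = 1_000_000
api_cost, api_hours = 120.0, 2    # gpt-5-nano with high concurrency
gpu_cost, gpu_hours = 2.0, 22     # Qwen3-4B-FP8 on an RTX5090, electricity only

print(f"API: ${api_cost / docs * 1000:.2f} per 1k docs, {docs / api_hours:,.0f} docs/hr")
print(f"GPU: ${gpu_cost / docs * 1000:.4f} per 1k docs, {docs / gpu_hours:,.0f} docs/hr")
print(f"Local is {api_cost / gpu_cost:.0f}x cheaper, but ~{gpu_hours / api_hours:.0f}x slower")
```

Which is the whole trade-off: the API buys ~11x the throughput, the local GPU buys a 60x cost reduction; whichever is scarcer (time or money) decides.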