Explore tweets tagged as #ProductionCoding
@jeremychone
Jeremy Chone
3 months
AIPack `pro@coder` 0.2.20 released!
+ Plan-based dev support, with dev/plan-todo.md and dev/plan-done.md templates
+ `max_files_size_kb: ...` support (defaults to 5MB), to prevent sending too much data
+ Tunings
aip install pro@coder
@AIPackAI #AICoding #ProductionCoding
@jeremychone
Jeremy Chone
5 months
Gemini 2.5 Pro vs. OpenAI o3: Gemini is twice as expensive, and here is why: #ProductionCoding #RustCoding
@jeremychone
Jeremy Chone
7 months
Guess when AIPACK `pro@coder` hit the ground running for me: @rustlang #ProductionCoding @AipackAI
@jeremychone
Jeremy Chone
5 months
Using #AIPACK pro@coder to code AIPACK. New TUI in the building ... (AIPACK is now about 35k LOC, but context control makes all the difference) #ProductionAI #ProductionCoding #BuiltInRust #rustlang
@jeremychone
Jeremy Chone
5 months
Gemini 2.5 Pro vs. #OpenAI o3: actual #Coding costs. Gemini 2.5 Pro is often 2.3x or more expensive than OpenAI o3. OpenAI o3 uses very little reasoning by default, which makes all the difference in cost. Below: the exact same coding task (git restore . + replay). #ProductionCoding
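A rough sketch of the arithmetic behind that gap (the per-token prices and token counts here are hypothetical placeholders, not the figures from the task above): reasoning tokens are billed at the output rate, so a model that reasons heavily by default pays that rate on every call.

$$
\text{cost} = \frac{T_{\text{in}}\, p_{\text{in}} + \big(T_{\text{out}} + T_{\text{reason}}\big)\, p_{\text{out}}}{10^{6}}
$$

With a hypothetical output rate of $10 per million tokens, an extra 20,000 reasoning tokens add about $0.20 to a single call, which can easily dominate the cost of a short coding task.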
@jeremychone
Jeremy Chone
6 months
Live Production Rust Coding Now: Sat 31 May, 4PM PT https://t.co/zNxZ7qA9Yt #RustLang #RustProgramming #ProductionCoding #YesWithAI
@jeremychone
Jeremy Chone
15 days
This seems obvious, but unfortunately some devs still skip basic best practices and Git discipline (especially when using AI). We do not have to be Git experts to use Git well. #ProductionCoding #programming https://t.co/FEhptq39K8
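A minimal sketch of the kind of Git discipline this points at, reusing the "git restore . + replay" pattern quoted in the cost-comparison post above; the commit message is only an illustration.

```sh
# Commit a known-good state before letting the AI touch the tree
git add -A && git commit -m "checkpoint before AI pass"

# Review exactly what the AI changed (its edits are unstaged at this point)
git status
git diff

# If the result is off, discard the working-tree changes and replay the prompt
git restore .
```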
@jeremychone
Jeremy Chone
19 days
For anyone interested in a quick pricing comparison, my current production coding models are:
- Bigger work ➜ Gemini 3 (backup: 5.1-codex)
- Simpler work ➜ 5.1-codex-mini (backup: flash-latest)
#AICoding #ProductionCoding #Gemini3 @openai
@jeremychone
Jeremy Chone
8 days
Big release of #AIPack 0.8.10 and pro@coder 0.2.28, with attachment support (image and PDF). See the demo below, turning a static image into an interactive chart. #vibecoding to #productioncoding @AIPackAI https://t.co/Qk9AoyOj1R
@jeremychone
Jeremy Chone
5 days
Gemini Flash is the most underrated coding model. Surprisingly good, decently fast, huge context window, and very cheap. Pair it with another "big coder" model and you are all set. Can't wait for gemini-3-flash @GoogleAI @OfficialLoganK #AICoding #ProductionCoding
@jeremychone
Jeremy Chone
2 months
👇My new coding pair👇
gpt-5-codex AND gemini-flash-latest
Why:
➜ gpt-5-codex: slow, but the best coder.
➜ gemini-flash-latest (09-2025): fast, cheap, and pretty good!!
Why not #Sonnet 4.5:
➜ doesn't listen well and is expensive.
#genai #AICoding #ProductionCoding
@jeremychone
Jeremy Chone
5 months
GitHub Copilot AI PRs take much longer to review. The noise-to-content ratio is high, especially compared to a PR from a good developer. Developers should use AI, but they should take ownership of their PRs. #MicromanageAI #UseAI #WithControl #ProductionCoding
@jeremychone
Jeremy Chone
2 months
@sdrzn Our take for #ProductionCoding is to go:
1) API (per usage)
2) Provider unlocked - use tools that are model-provider agnostic. Try #AIPack with pro@coder. #AIProductionCoding
A little bit more work, but so much more control.
@jeremychone
Jeremy Chone
6 months
JC Rust Live Coding ➜ Today 10:30am PT GenAI Library - Adding Multi-Content to Streaming https://t.co/n7UgD53Fw2 #LiveCoding #RealCoding #ProductionCoding #rustlang
@jeremychone
Jeremy Chone
7 months
Master pricing with AI coding: craft a sharp prompt for Flash 2.5 for simple coding tasks. Input/output is virtually free. Even with a few reasoning tokens at $3.50 per million, it is still dirt cheap. Gemini 2.5 Pro & Flash are the perfect pair for #ProductionCoding @OfficialLoganK @GoogleAI
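To put the $3.50 figure in perspective (assuming it is the per-million-token rate for thinking output, and using a hypothetical task size):

$$
2{,}000 \text{ reasoning tokens} \times \frac{\$3.50}{10^{6} \text{ tokens}} \approx \$0.007
$$

so even a reasoning-heavy Flash call on a simple task stays well under a cent.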
@jeremychone
Jeremy Chone
4 months
gpt-5 is awesome at coding Rust #genai #rustlang @OpenAI #ProductionCoding
@jeremychone
Jeremy Chone
4 months
Ok, GLM 4.5 is pretty good at coding, and dirt cheap compared to Gemini 2.5 Pro. But for production coding, it comes up short sometimes. Gemini 2.5 Pro is still the big boy there. #genai #AICoding #ProductionCoding cc @OfficialLoganK
@jeremychone
Jeremy Chone
2 months
It's sad that #gpt5 has become so slow for #coding (via API). 3x to 5x slower (gpt-5, -mini, -codex).
#Alternatives:
gpt-5-mini ➜ gemini-flash-latest
gpt-5-codex ➜ ~ #Sonnet 4.5
@OpenAI #GiveSomeCodingSpeed #Please #genai #AICoding #ProductionCoding @AnthropicAI @GoogleAI
@jeremychone
Jeremy Chone
7 months
@OfficialLoganK Awesomeness!!! Gemini 2.5 Pro + Gemini 2.5 Flash are the perfect #AICoding duo. #ProductionCoding