nanomader
@nanomader
Followers
18
Following
432
Media
43
Statuses
384
Joined August 2020
@embirico @pvncher @EyalToledano for what it's worth, I did ask GPT-5.2 xhigh about it. I assume GPT-5.2 still loves XML!
Is this GPT-5 coding cheatsheet still relevant for GPT-5.2, @embirico? @pvncher @EyalToledano are you also using it? https://t.co/5g3sYHhpVW
Doesn’t matter. They will switch to GPT-OSS-120B hosted locally and keep carrying out their malicious activity.
The level of automation and speed we observed have direct implications for how state‑linked and organized cybercriminals will operate moving forward. We are sharing our response so others can strengthen their defenses. https://t.co/y2yeY2jp1r
Because teams can move 10x faster, architecture is even more important now
Another example of how AI agents are changing work in unexpected ways: we will start to see new bottlenecks emerge that we didn’t predict before. In a world where AI agents make engineers 2-3X more productive, the new bottleneck will be deciding what to build and designing
Hey @pvncher, this tweet ( https://t.co/IseHYdktro) is the best possible angle to show RepoPrompt, in my opinion, because it shows how easily you can talk to "the Oracle", GPT-5-Pro, with all the relevant code. It's amazing! No. 1 feature IMHO
This is @RepoPrompt 1.5. The new Context Builder connects to your agent of choice, using your existing subscriptions, to fully automate the process of building the perfect context for a given token budget. All of Repo Prompt's power, fully automated, with 1 click
GPT-5-Codex in Codex CLI vs GPT-5-Codex in Cursor. Opus in Claude Code vs Opus in Cursor. They all behave differently; it’s not the same model. Codex performs the best in Codex CLI; GPT-5 is better in Cursor
Working with cheetah got me thinking that the bottleneck is me and my brain reading cheetah's output instead of another cheetah doing it. Wait, I have an idea!
If magic.dev releases an LLM with a 100M context window, this will be the end! And the beginning of something completely new, magical and groundbreaking! https://t.co/msjVcgt5J2
magic.dev
Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
GPT-5 works well with XML-like instructions, but what about GPT-5-Codex? Is it worth using XML for prompts? cc @pakrym @dylan__hurd @thsottiaux For GPT-5-high I tend to use your prompt optimizer and then ask GPT-5 itself to rewrite my prompt: "Help me refine my prompt for GPT
Use GPT-5-high in Cursor, but GPT-5-Codex always on medium
@mattshumer_ I recommend running gpt-5-codex on medium
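As a rough sketch of what "on medium" could look like in the Codex CLI, assuming its `config.toml` supports a model and a reasoning-effort key (check your installed version's docs; the exact key names here are an assumption on my part):

```toml
# ~/.codex/config.toml -- hypothetical sketch, verify against your CLI version
model = "gpt-5-codex"
model_reasoning_effort = "medium"
```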
I really believe that Cursor is playing chess while we play checkers: they are taking all the input (prompts) from their product and training an LLM to do the work better and faster, and the ultimate goal is to reduce/cut out the middleman, aka SWEs, aka us.
I am fully aware of how easy it is to criticize a product on the Internet. With that in mind, I just want to say I have bad memories with CC. And I really hope it's a skill issue on my side, but I get better (more importantly: valid) results from other tools. When I paste the same