NOTBAD AI (@notbadai) · 104 Followers · 52 Following · 5 Media · 32 Statuses
We've open-sourced our internal AI coding IDE. We built it to help with coding and to experiment with custom AI workflows. It's based on a flexible extension system, making it easy to develop, test, and tweak new ideas quickly. Each extension is a Python script that runs…
4/ Follow @notbadai for updates. Editor: https://t.co/o27MdaPLJ6 Basic Extensions: https://t.co/lqECrkWrEV Discord: discord.com
3/ You can quickly switch between different chat extensions and models using a dropdown menu. Our default chat extension uses @morphllm. For complex code where the chat doesn't work as well, you can use extensions that suggest inline autocompletions (multiline). We also use…
2/ Extensions have access to project files, terminal content, cursor position, open tabs, uncommitted changes, etc. Extensions can run in the chat, be triggered by menu items/shortcuts, provide inline completions, and suggest edits. Extensions are very simple to write; for…
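To make the thread concrete, here is a minimal sketch of what such an extension script could look like. The `api` object and every field and method on it are hypothetical stand-ins, not the IDE's documented interface; the Basic Extensions repo linked in 4/ has real examples.

```python
# Hypothetical sketch of an IDE extension; the `api` object and all of its
# fields/methods are assumed names, not the IDE's actual interface.

def extension(api):
    """Chat extension: suggest a commit message from uncommitted changes."""
    diff = api.uncommitted_changes   # assumed: the current git diff
    active_tab = api.open_tabs[0]    # assumed: path of the active editor tab

    prompt = (
        f"Active file: {active_tab}\n"
        f"Write a one-line commit message for this diff:\n{diff}"
    )
    # Stream the model's reply into the chat panel (assumed streaming call).
    for token in api.model.stream(prompt):
        api.chat.write(token)
```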
Because we started from Quiet-STaR, we had developed and have been using a slightly different version of GRPO. We believe it may be a lower-variance gradient estimator and could therefore improve training stability. We will write this up later, but wanted to share it now so other people can use it.
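For context, standard GRPO replaces a learned critic with group-relative advantages: sample a group of completions per prompt and standardize their rewards within the group. The sketch below shows only that standard baseline; the variant mentioned above is unpublished, so its exact estimator is not shown here.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages as in standard GRPO.

    rewards: shape (G,), one scalar reward per sampled completion of a
    single prompt. Standardizing within the group removes the need for a
    learned value function (critic).
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 4 completions of one math prompt, scored 1 if the answer verifies.
print(grpo_advantages(np.array([1.0, 0.0, 0.0, 1.0])))  # [ 1. -1. -1.  1.]
```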
The new training also improved GPQA from 64.2% to 67.3% and MMLU Pro from 64.2% to 67.3%. This model was also trained with the same reasoning datasets we used to train the v1.0 model. We mixed more general instruction data with answers sampled from the…
We are releasing an updated reasoning model with a much improved IFEval score: 77.9%, up from our previous model's 51.4%. 👇 Links to try the model and to download the weights below
Multi-node NVIDIA HGX B200-accelerated clusters are available NOW, on-demand through Lambda 1-Click Clusters.
Innovate faster with self-serve, on-demand access to multi-node NVIDIA HGX B200-accelerated clusters.
There's no structural reason why academia & universities can't get back to being attractive places to do AI research and push the state of the art. Academia + Open research + infra investments = 🔥🔥🔥 Let's go!
Observation (for those not following): Universities in China have become more competitive at attracting junior academic researchers recently through facilitating corporate GPU use.
Python functions reasoning dataset: https://t.co/Hv2oBQ8iem Our reasoning model: huggingface.co
We just released a Python coding reasoning dataset with 200k samples on @huggingface. This was generated by our RL-based self-improved Mistral 24B 2501 model. This dataset was used to train Notbad v1.0 Mistral 24B. 🤗 Links in replies 👇
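If the dataset follows the usual Hugging Face layout, loading it is one call. The repo id below is a guess inferred from the account name, since the tweet's t.co link hides the real path; check the actual link for the correct id.

```python
from datasets import load_dataset

# Repo id is an assumption; the tweet's shortened link hides the real path.
ds = load_dataset("notbadai/python-functions-reasoning", split="train")
print(len(ds))   # expected on the order of 200k samples
print(ds[0])     # inspect the fields of one reasoning sample
```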
Uploaded the dataset of 270k math reasoning samples that we used to finetune Notbad v1.0 Mistral 24B (MATH-500 = 77.52%, GSM8k Platinum = 97.55%) to @huggingface (link in reply). Follow @notbadai for updates
We're open-sourcing a math reasoning dataset with 270k samples, generated by our RL-based self-improved Mistral 24B 2501 model and used to train Notbad v1.0 Mistral 24B. Available on Hugging Face:
📢 We are excited to announce Notbad v1.0 Mistral 24B, a new reasoning model for math and Python coding. This model is built upon @MistralAI Small 24B 2501 and has been further trained with reinforcement learning on math and coding.
We uploaded the model Notbad v1.0 Mistral 24B to @huggingface
https://t.co/lB64miC9qI
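A standard transformers loading sketch for a model of this kind. The repo id is a placeholder guessed from the account name (the real one is behind the shortened link above), and a 24B model needs multiple GPUs or quantization in practice.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; the real path is behind the t.co link above.
repo = "notbadai/notbad-v1-0-mistral-24b"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Solve: what is 13 * 17?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```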
Thanks to @LambdaAPI and @deepinfra for providing compute resources for our research and the training of this model.