NOTBAD AI Profile
NOTBAD AI

@notbadai

Followers 104 · Following 52 · Media 5 · Statuses 32

Joined July 2024
@notbadai
NOTBAD AI
1 month
We've open-sourced our internal AI coding IDE. We built this IDE to help with coding and to experiment with custom AI workflows. It's based on a flexible extension system, making it easy to develop, test, and tweak new ideas quickly. Each extension is a Python script that runs
@notbadai
NOTBAD AI
1 month
3/ You can quickly switch between different chat extensions and models using a dropdown menu. Our default chat extension uses @morphllm. For complex code where the chat doesn't work as well, you can use extensions that suggest inline autocompletions (multiline). We also use
@notbadai
NOTBAD AI
1 month
2/ Extensions have access to project files, terminal content, cursor position, open tabs, uncommitted changes, etc. Extensions can run in the chat, be triggered by menu items/shortcuts, provide inline completions, and suggest edits. Extensions are very simple to write; for
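The capabilities described above (access to project files, terminal content, cursor position, open tabs, uncommitted changes) suggest a very small per-extension surface. The sketch below is a hypothetical illustration only; the `run` entry point and the `context` field names are assumptions, not the actual API of the open-sourced IDE.

```python
# Hypothetical sketch of an IDE extension as a single Python script.
# The `context` schema and `run` entry point are assumptions, not the
# real NOTBAD AI IDE extension API.

def run(context: dict) -> dict:
    """Entry point the IDE might call when the extension is triggered."""
    # Context the tweet says extensions can access:
    current_file = context["open_tabs"][context["active_tab"]]
    cursor_line, _cursor_col = context["cursor"]
    _terminal_output = context["terminal"]            # e.g. last command output
    _diff = context["uncommitted_changes"]            # e.g. unified diff text

    # Build a prompt from the code above the cursor and return a suggestion.
    lines = current_file["content"].splitlines()
    window = "\n".join(lines[max(0, cursor_line - 20):cursor_line])
    prompt = f"Complete the following code:\n{window}"
    return {"type": "inline_completion", "text": prompt}  # placeholder result
```

Because each extension is just a script receiving a context dict, swapping models or prompting strategies is a matter of editing one file, which matches the "easy to develop, test, and tweak" framing above.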
@gharik
Georges Harik
7 months
Because we started from Quiet-STaR, we had developed and have been using a slightly different version of GRPO. We believe it may be a lower-variance gradient estimator and therefore possibly increase stability. We will write this up later but wanted to share it now so other people can use
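For context only (the variant mentioned above is unpublished, so this is the standard formulation, not the authors'): GRPO samples a group of $G$ completions per prompt, computes a group-normalized advantage, and optimizes a clipped surrogate with a KL penalty:

```latex
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)},
\qquad i = 1,\dots,G
\]
\[
\mathcal{J}(\theta) = \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G}
\min\!\big(\rho_i \hat{A}_i,\ \operatorname{clip}(\rho_i,\,1-\epsilon,\,1+\epsilon)\,\hat{A}_i\big)\right]
- \beta\, D_{\mathrm{KL}}\!\left(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right),
\qquad \rho_i = \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)}
```

The group-normalized baseline $\hat{A}_i$ is the natural place a variant could reduce gradient variance, e.g. by changing how the baseline or normalization term is computed.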
@vpj
vpj
8 months
The new training also improved GPQA from 64.2% to 67.3% and MMLU Pro from 64.2% to 67.3%. This model was also trained with the same reasoning datasets we used to train the v1.0 model. We mixed more general instruction data with answers sampled from the
@notbadai
NOTBAD AI
8 months
We are releasing an updated reasoning model with a much improved IFEval score: 77.9%, up from only 51.4% for our previous model. 👇 Links to try the model and to download weights below
@notbadai
NOTBAD AI
8 months
💬 Try it for free: https://t.co/8jpnOqlYNq 🤗 Download weights:
@LambdaAPI
Lambda
9 months
Multi-node NVIDIA HGX B200-accelerated clusters are available NOW, on-demand through Lambda 1-Click Clusters.
@LambdaAPI
Lambda
9 months
Innovate faster with self-serve, on-demand access to multi-node NVIDIA HGX B200-accelerated clusters.
@ClementDelangue
clem 🤗
8 months
There's no structural reason why academia & universities can't get back to being an attractive place to do AI research and push the state of the art. Academia + Open research + infra investments = 🔥🔥🔥 Let's go!
@srush_nlp
Sasha Rush
8 months
Observation (for those not following): Universities in China have become more competitive at attracting junior academic researchers recently through facilitating corporate GPU use.
@notbadai
NOTBAD AI
8 months
Python functions reasoning dataset: https://t.co/Hv2oBQ8iem Our reasoning model:
@notbadai
NOTBAD AI
8 months
We just released a Python coding reasoning dataset with 200k samples on @huggingface. It was generated by our RL-based self-improved Mistral 24B 2501 model and was used to train Notbad v1.0 Mistral 24B. 🤗 Links in replies 👇
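Model-generated coding datasets like the one announced above are typically filtered by actually executing each candidate function against test cases and keeping only the samples that pass. The sketch below illustrates that general idea; the sample schema (`solution`/`tests` fields) and helper name are assumptions, not the released dataset's format.

```python
# Hypothetical sketch of execution-based filtering for model-generated
# Python functions; the sample schema is an assumption, not the actual
# format of the released dataset.

def passes_tests(solution_code: str, test_code: str) -> bool:
    """Run a candidate solution against its tests; keep the sample only
    if the solution defines cleanly and every assertion passes."""
    namespace: dict = {}
    try:
        exec(solution_code, namespace)  # define the candidate function
        exec(test_code, namespace)      # run the assertions against it
    except Exception:
        return False
    return True

samples = [
    {"solution": "def add(a, b):\n    return a + b",
     "tests": "assert add(2, 3) == 5"},
    {"solution": "def add(a, b):\n    return a - b",  # incorrect candidate
     "tests": "assert add(2, 3) == 5"},
]
kept = [s for s in samples if passes_tests(s["solution"], s["tests"])]
```

In a real pipeline this would also need sandboxing and timeouts, since generated code is untrusted; `exec` here is only to keep the illustration self-contained.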
@vpj
vpj
8 months
Uploaded the dataset of 270k math reasoning samples that we used to finetune Notbad v1.0 Mistral 24B (MATH-500 = 77.52%, GSM8K Platinum = 97.55%) to @huggingface (link in reply). Follow @notbadai for updates
@vpj
vpj
8 months
Uploaded Notbad v1.0 Mistral 24B to @huggingface https://t.co/WNgvlJRxsr
@notbadai
NOTBAD AI
8 months
We're open-sourcing a math reasoning dataset with 270k samples, generated by our RL-based self-improved Mistral 24B 2501 model and used to train Notbad v1.0 Mistral 24B. Available on Hugging Face:
@notbadai
NOTBAD AI
9 months
📢 We are excited to announce Notbad v1.0 Mistral 24B, a new reasoning model trained on math and Python coding. This model is built on @MistralAI Small 24B 2501 and has been further trained with reinforcement learning on math and coding.
@notbadai
NOTBAD AI
8 months
We uploaded the model Notbad v1.0 Mistral 24B to @huggingface https://t.co/lB64miC9qI
@notbadai
NOTBAD AI
8 months
Thanks to @LambdaAPI and @deepinfra for providing help with compute resources for our research and training this model.