Datawizz AI

@datawizzai

Followers: 103 · Following: 5 · Media: 11 · Statuses: 24

Datawizz helps companies transition to Specialized Language Models

Joined February 2025
@iddogino
Iddo Gino
1 month
After taking some time off post-Rapid, I'm excited to share what I’ve been up to since: @datawizzai! We’ve raised a $12.5M Seed led by @humancapital to make AI 10x cheaper, 2x more accurate and 15x faster by transitioning from LLMs to SLMs. AI is eating the world. But unit
@datawizzai
Datawizz AI
7 months
Are OpenAI's newest models hallucinating more than before? Hallucinations have always been one of the biggest issues plaguing AI deployment. It now seems that this problem is getting worse - not better - with newer AI models - especially powerful reasoning models. The reality
@datawizzai
Datawizz AI
7 months
We built Prompt Debloat to help visualize which tokens (words / parts of words) have the most (and least) impact on the LLM answers. We use a technique called Token Ablation. How does it work? At every step we remove a token, re-run the prompt and check how the model
promptdebloat.datawizz.ai
Make your AI prompts more efficient by analyzing token importance
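The ablation loop described in the tweet can be sketched in a few lines. Note that `score_answer` below is a hypothetical stand-in for the real step (re-running the ablated prompt through an LLM and measuring how far its answer drifts from the baseline answer); the scoring logic here is purely illustrative.

```python
# Token Ablation sketch: drop one token at a time, re-score the prompt,
# and attribute the score change to that token. `score_answer` is a
# hypothetical stand-in -- a real version would call an LLM with the
# ablated prompt and compare its answer against the baseline answer.

def score_answer(prompt_tokens):
    # Illustrative scorer: pretend only non-filler tokens affect the answer.
    fillers = {"please", "kindly"}
    return sum(1 for t in prompt_tokens if t not in fillers)

def token_importance(tokens):
    baseline = score_answer(tokens)
    importance = {}
    for i, tok in enumerate(tokens):
        ablated = tokens[:i] + tokens[i + 1:]  # prompt with one token removed
        importance[tok] = baseline - score_answer(ablated)
    return importance

tokens = ["please", "summarize", "this", "report", "kindly"]
imp = token_importance(tokens)
# Filler tokens come out with zero importance; content tokens score higher.
```

Each ablation requires one extra model run, so the cost of a full pass grows linearly with prompt length.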
@datawizzai
Datawizz AI
7 months
How much of the average LLM prompt is just bloat that doesn't impact results? More than 20% it turns out! We built a free tool to help visualize redundant tokens in LLM prompts! Link & examples below!
@datawizzai
Datawizz AI
7 months
🚨 Big announcements from OpenAI, Anthropic, Google and Meta this week. Multiple new SOTA model drops in just a few days, including GPT-4.1, o3, o4-mini, Gemini 2.5's crazy 2M context window and the Llama 4 family. Check out our new model cheat-sheet:
@datawizzai
Datawizz AI
7 months
9/9 Selecting the best model is going to be harder than just always defaulting to the newest models. You’ll have to spend more time evaluating models around specific use-cases, and often leverage different models for different use cases inside one application.
@datawizzai
Datawizz AI
7 months
8/9 Our prediction is that we’ll quickly see other leading labs releasing more specialized models. Think:
- Coding Models
- Agent Models (function calling)
- Vision / Extraction Models
- Large Context Models
- Human-sounding Models (EQ)
Etc…
@datawizzai
Datawizz AI
7 months
7/9 This is a logical evolution - new SOTA models are increasingly large and expensive to train and run. Scaling them further while optimizing for a wide array of use cases is much harder. It’s easier to focus on optimizing models around more specialized use cases.
@datawizzai
Datawizz AI
7 months
6/9 In the past, OpenAI’s new models were positioned as more generic: you could generally assume every new release was an upgrade over the previous version for most/all use cases (GPT-3 < GPT-3.5 < GPT-4, etc.). New models seem to be more "specialized".
@datawizzai
Datawizz AI
7 months
5/9 Even before that, GPT-4o was initially released as a specialized multi-modality model, and the o1 and o3 families are specialized for heavy reasoning use cases.
@datawizzai
Datawizz AI
7 months
4/9 This isn’t the first time a new OpenAI release is marketed as a more ‘specialized’ model. GPT-4.5 was positioned as a specialized high-EQ / human-sounding model. Sam signaled it “isn’t for every use case”, but rather for applications where the higher “EQ” matters.
@datawizzai
Datawizz AI
7 months
3/9 The official benchmarks back up this messaging - it crushes all other OpenAI models on coding, but lags behind them on other benchmarks like instruction following and conversational evaluations.
@datawizzai
Datawizz AI
7 months
2/9 You’ll notice that GPT-4.1 is being heavily marketed for its coding ability - everyone is reporting on its coding strengths, and it seems like something OpenAI is happily pushing. All case studies in the release are coding related.
@datawizzai
Datawizz AI
7 months
OpenAI changing strategy with GPT-4.1? OpenAI just released their newest flagship model - GPT-4.1. They notably focused this model on coding, positioning GPT-4.1 as a specialized coding model. Is this a new trend of new OpenAI models being more specialized? 🧵 1/9...
@datawizzai
Datawizz AI
8 months
Using @langfuse? You can train a Specialized Language Model directly from your Langfuse logs using Datawizz! Find out how --
@uiuxadrian
Adrian
9 months
Our friends from @datawizzai sent us pics from a live event with the marketing collateral we designed. Love to see our work in action 💪
@datawizzai
Datawizz AI
9 months
The reality is most tasks done with LLMs today can be solved more efficiently - and accurately - with small specialized models. Read our full take here:
datawizz.ai
New fine tuned version of the Deepseek-R1-Distilled-Qwen-1.5B by Berkeley research team surpasses OpenAI’s frontier o1 model in math problem solving, at 1/1000th of the size.
@datawizzai
Datawizz AI
9 months
This goes to show just how powerful small, mission-specialized models can be. Remember that while OpenAI o1 costs ~$60/1M tokens, you can easily run this 1.5B-parameter model for ~$0.15/1M tokens. A 400x cost savings. 4/5
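The 400x figure is simple arithmetic on the per-token prices quoted above (a sketch; actual pricing varies by provider and by input/output token split):

```python
# Back-of-the-envelope check of the cost comparison in the tweet above.
o1_cost = 60.00   # ~$60 per 1M tokens for OpenAI o1, as quoted
slm_cost = 0.15   # ~$0.15 per 1M tokens for a 1.5B-parameter model, as quoted

savings = o1_cost / slm_cost  # ratio of the two per-token prices, ~400x
```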
@datawizzai
Datawizz AI
9 months
This model ended up beating o1 in multiple math evaluations - insane for a model 3 orders of magnitude smaller. 3/5