Epoch AI (@EpochAIResearch)
Epoch AI is a research institute investigating the trajectory of AI for the benefit of society.
Joined May 2022 · 9,104 followers · 24 following · 63 media · 293 statuses

Pinned · Epoch AI · 5 months
2023 was a great year for Epoch! We just published our annual impact report, listing our achievements in the past year and our plans for the coming year. Here’s a summary 🧵:

Epoch AI · 3 months
How many large AI models are out there, who developed them, and for what applications? To answer this question, we present a new dataset tracking every AI model we could find trained with over 10^23 FLOP. Highlights in thread 🧵

Epoch AI · 26 days
1/ How quickly are state-of-the-art AI models growing? The amount of compute used in AI training is a critical driver of progress in AI. Our analysis of over 300 machine learning systems reveals that the amount of compute used in training is consistently being scaled up at a rate of roughly 4x per year.

Epoch AI · 6 months
🧵 Epoch is tracking the most compute-intensive models developed to date. Learn more below!

Epoch AI · 19 days
Are we running out of data to train language models? State-of-the-art LLMs use datasets with tens of trillions of words, and dataset sizes grow 2-3x per year. Our new ICML paper estimates when we might exhaust all text data on the internet. 1/12

Epoch AI · 2 months
OpenAI claims that the newest version of GPT-4 Turbo is “majorly improved” from the previous version. We found that it performs noticeably better on @idavidrein et al.’s GPQA benchmark, though still short of Claude 3 Opus.

Epoch AI · 3 months
1/ New Epoch post! In our newest article, researcher @EgeErdil2 argues that we should expect training and inference budgets to end up being roughly equal in size.

Epoch AI · 19 days
Given all this, when will we exhaust the web's text? Training a compute-optimal dense model on ~100T tokens for 4 epochs would take ~5e28 FLOP (around 3 OOMs above GPT-4). At historical growth rates, we'll reach this level by 2028. 7/12

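The ~5e28 FLOP figure can be sanity-checked with the standard "training FLOP ≈ 6·N·D" approximation and the Chinchilla-optimal ratio D ≈ 20·N. This is a back-of-envelope sketch, not Epoch's exact methodology, and the ~2e25 FLOP value for GPT-4 is an outside rough estimate:

```python
import math

# Back-of-envelope check of the ~5e28 FLOP figure above.
# Assumptions: training FLOP ≈ 6*N*D and the Chinchilla-optimal
# ratio D ≈ 20*N; GPT-4 at ~2e25 FLOP is a rough public estimate.
tokens = 100e12            # ~100T tokens of usable web text
epochs = 4
D = tokens * epochs        # total tokens seen during training
N = D / 20                 # compute-optimal parameter count
C = 6 * N * D              # training compute in FLOP
print(f"{C:.1e} FLOP")                                 # ~4.8e28
print(f"{math.log10(C / 2e25):.1f} OOMs above GPT-4")  # ~3.4
```
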
Epoch AI · 21 days
How much does it cost to develop frontier AI models? Our study (w/ @StanfordHAI) finds hardware costs grow 2.4x/year, with training alone in the tens of millions for models like GPT-4. Full development costs (when amortized) are over $100M for the most advanced models. 🧵

Epoch AI · 2 years
📈 @OurWorldInData just released their new topic page on AI. It features our research on trends in AI, including:
- Training compute
- GPU price performance
- Number of parameters
- Number of data points for training

↳ Quoting Fin Moorhouse (@finmoorhouse) · 2 years
For instance, here's a nice presentation of training compute used for major models over time, from @Jsevillamol et al. (2022)

Epoch AI · 26 days
2/ Over two years ago, we found a 4x/year compute growth rate for notable ML models. Now, with updated and 3x more data, this trend holds: we estimate a growth rate of 4.1x/year (90% CI: 3.7x to 4.6x). If this continues, the scale-performance relationship suggests major AI capability gains in the coming years.

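For readers curious how a figure like 4.1x/year is derived from a model database, the natural approach is a log-linear least-squares fit over (publication date, training FLOP) pairs. The data points below are hypothetical illustrations; Epoch's actual estimate uses their full notable-models dataset and reports a bootstrapped confidence interval:

```python
import numpy as np

# Hypothetical (year, training FLOP) pairs for illustration only;
# Epoch's estimate is fit on their full notable-models dataset.
years = np.array([2020.0, 2021.0, 2022.0, 2023.0, 2024.0])
flop = np.array([3e23, 1.2e24, 5e24, 2e25, 8e25])

# Fit log10(FLOP) ~ slope*year + intercept; slope is OOMs per year.
slope, intercept = np.polyfit(years, np.log10(flop), 1)
print(f"growth rate: {10**slope:.1f}x per year")  # ~4x for these values
```
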
Epoch AI · 1 month
Could increasing returns to software R&D lead to explosive tech progress? Our new paper surveys estimation methods and finds evidence of increasing returns to scale in software R&D.

Epoch AI · 2 months
1/ We’ve been researching trends in machine learning models, in collaboration with AI Index (@indexingai) and @StanfordHAI. Here’s what we found 🧵

Epoch AI · 13 days
Imagine if climate change forecasting were just based on vibes, instead of data or scientific models. This is the current state of AI discourse. Our goal at Epoch AI is to change this and introduce rigor into AI forecasting.

Epoch AI · 19 days
However, models are likely to be overtrained (trained on more data than is Chinchilla optimal). Overtrained models, such as Llama-3, are more efficient and run faster. If so, data might become a bottleneck sooner: 2027 with 5x overtraining and 2025 with 100x overtraining. 8/12

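The dates in this tweet can be roughly reproduced from the thread's own numbers: for a fixed data budget, an overtraining factor k shrinks the parameter count, and hence the compute, by k, so the data-limited compute level is reached log₄(k) years earlier at 4x/year growth. A sketch, assuming tweet 7/12's 400T-token budget and the same 6·N·D / Chinchilla approximations as before:

```python
import math

# Overtraining by factor k means N = D / (20 * k) for a fixed data
# budget D, so compute C = 6*N*D shrinks by k, and that compute level
# is reached log_4(k) years earlier at 4x/year growth (base: 2028).
D = 400e12  # 100T tokens x 4 epochs, from tweet 7/12
for k in (1, 5, 100):
    C = 6 * (D / (20 * k)) * D
    year = 2028 - math.log(k, 4)
    print(f"{k:>3}x overtrained: {C:.1e} FLOP, reached ~{year:.0f}")
# ->   1x: 4.8e28, ~2028    5x: 9.6e27, ~2027    100x: 4.8e26, ~2025
```
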
Epoch AI · 1 year
We are launching the Epoch and Forecasting Research Institute mentorship program for women, non-binary people, and trans people of all genders who want to contribute to the field of AI forecasting. Apply now!

Epoch AI · 2 months
Google DeepMind has released AlphaFold 3, the latest in their family of groundbreaking biological prediction models. How does it stack up against previous AI models in the domain of biology?

Epoch AI · 5 months
In 2023, we published almost 20 pieces of research. We covered topics such as ML hardware trends, biological sequence models, algorithmic improvements, AI automation of labor, microprocessor energy efficiency, compute requirements for transformative AI, and more.

Epoch AI · 3 months
43 out of 81 compute-intensive models were developed by organizations based in the United States, followed by 19 in China and 6 in the UK. The proportion from China has grown in recent years.

Epoch AI · 3 months
Most compute-intensive models are language models, as might be expected. The largest models in our dataset, GPT-4 and Gemini Ultra, are both multimodal vision-language models. Excluding language, the next most common domain was video/image generation.

Epoch AI · 26 days
How much does it cost to train frontier AI models? Join us next Wednesday for a webinar on the rising costs of training cutting-edge AI models and the implications for AI research, development, and policymaking. Click below to add this to your calendar!

Epoch AI · 26 days
3/ What about models at the frontier? We look at models in the running top 10 by compute. The growth up to ~2018 was faster than the overall trend, perhaps reflecting labs entering the AI race. After 2018, the growth of frontier models slowed, converging with the overall trend.

Epoch AI · 26 days
6/ Lastly, we find that the 4-5x growth rate allows us to retrodict the scale of the largest models today, such as GPT-4 and Gemini Ultra. This suggests that scaling is to an extent predictable, and we might be able to anticipate future growth at the leading edge of AI.

Epoch AI · 26 days
10/ You can read our full analysis and see interactive versions of our charts below! And follow us for further updates and research. This article was authored by @jsevillamol and Eduardo Roldán.

Epoch AI · 14 days
We've refreshed our data trends dashboard! We've updated the figures on compute trends, data trends, and training costs to reflect our recent research on these topics, as well as new model releases.

Epoch AI · 26 days
8/ 4-5x/year is an incredibly fast pace of growth, and maintaining it will require overcoming significant engineering and scientific challenges. Training will soon involve managing clusters of hundreds of thousands of GPUs, and using them to efficiently train ever larger models.

Epoch AI · 2 months
There’s been a lot of interest in Epoch’s training cost estimates, featured in Stanford’s 2024 AI Index Report. Our estimates show the skyrocketing costs of training large AI models. Want to learn more about how we calculated these costs? 🧵

↳ Quoting Chief AI Officer (@chiefaioffice) · 2 months
AI training cost estimates from the Stanford 2024 AI Index Report:
- Original transformer model: $930
- GPT-3: $4.3M
- GPT-4: $78.4M
- Gemini Ultra: $191.4M

Epoch AI · 3 months
The leading developers of confirmed compute-intensive models are Google, Meta, DeepMind, Hugging Face, and OpenAI. If including unconfirmed compute-intensive models, then Anthropic and Alibaba are also near the top.

Epoch AI · 19 days
E.g. if "chair" is in 0.2% of CC pages & Google reports 40B "chair" results, it suggests an index size of 40B/0.002 = 20T pages. We estimate Google's index at ~270B pages. At ~1.9KB plain text/page, this suggests ~500T tokens on the indexed web, 5x larger than the CC. 4/12

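The word-frequency estimator in this tweet is simple enough to write down directly. The numbers below are the tweet's own illustrative "chair" figures, not real measurements; the full method averages this calculation over many words of varying frequency:

```python
# Word-frequency method for estimating a search index's size,
# using the illustrative "chair" numbers from the tweet above.
cc_page_fraction = 0.002  # "chair" appears on 0.2% of Common Crawl pages
reported_hits = 40e9      # search engine reports 40B pages with "chair"

index_pages = reported_hits / cc_page_fraction
print(f"implied index size: {index_pages:.0e} pages")  # 2e13 = 20T
```
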
Epoch AI · 19 days
How much text is available for training? The Common Crawl (CC), a widely-used repository of crawled data, contains more than 100 trillion tokens, roughly 10x the size of the largest datasets. But CC is not the whole web. 2/12

Epoch AI · 19 days
To maintain the current rate of progress beyond 2028 (or 5e28 FLOP), developing or refining alternative data sources (like synthetic data) seems crucial. Though challenges remain, these could enable continued ML scaling beyond the limits of public text. 12/12

Epoch AI · 2 years
Epoch is hiring! We are looking for a Research Data Analyst to help us develop datasets that inform us about key trends in machine learning. This role is fully remote and we can hire in most countries. Apply before December 14th!

Epoch AI · 26 days
5/ Leading AI labs, including OpenAI, Google DeepMind, and Meta AI, have been scaling their models at rates broadly consistent with the overall trend, with an average growth of 5-7x per year.

Epoch AI · 26 days
4/ Language models are among the most important models today. Frontier LLMs saw fast compute growth in the last decade, slowing to 5x/year after GPT-3 (2020), reflecting how they grew in scale until catching up with the overall frontier.

Epoch AI · 19 days
Filtering low-quality content reduces the usable web text stock to ~100T tokens. Training for multiple epochs can increase the "effective stock" by ~4x before steep diminishing returns (Muennighoff et al.). More headroom, but still finite. 6/12

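Combining this tweet with the ~500T-token indexed-web estimate from tweet 4/12 above gives the chain of arithmetic behind the "effective stock". A sketch, assuming ~20% retention after quality filtering (consistent with the 70-90% removal figure in tweet 5/12, which appears further down this feed):

```python
# Rough chain of estimates from tweets 4/12 through 6/12.
indexed_web = 500e12     # ~500T tokens on the indexed web (tweet 4/12)
retention = 0.2          # assume ~20% survives quality filtering (5/12)
usable = indexed_web * retention  # ~100T usable tokens (tweet 6/12)
effective = usable * 4            # ~4x headroom from multi-epoch training
print(f"effective stock: {effective:.0e} tokens")  # ~4e14
```
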
Epoch AI · 19 days
What about the non-indexed "deep" web & private data? It's about 10x larger—we estimate ~3 quadrillion tokens across social media & messaging apps like Facebook, Instagram & WhatsApp, based on usage stats, post lengths & share of global web traffic. 9/12

Epoch AI · 3 months
This illustrates how quickly the frontier of AI development advances. At the start of 2024, anyone could download and run a model that would have been at the frontier in 2022.

Epoch AI · 26 days
9/ Given the strong relationship between compute and performance that we have observed in many contexts (the so-called scaling laws), this is reason to expect AI performance to continue improving in the near future beyond today’s capabilities.

Epoch AI · 19 days
The web contains a lot of low-quality text. Typically, a significant portion of Common Crawl data is removed to produce a suitable dataset for LLM training: between 70% (as in RedPajama-v2) and 90% (as in RefinedWeb). 5/12

Epoch AI · 3 months
Training compute has been a key driver of AI progress, and AI systems trained with the most compute tend to have the most advanced and general capabilities.

Epoch AI · 1 month
1/ The first Interim International Scientific Report on the Safety of Advanced AI is out! Authored by 75 global experts, including contributors from @EpochAIResearch, it gives a comprehensive look at the state of general-purpose AI (GPAI), its capabilities and risks, and …

Epoch AI · 3 months
For this reason, we’ve been thoroughly searching for models trained using over 10^23 floating-point operations (FLOP). This corresponds to a training cost of hundreds of thousands of dollars or more. We’ll call these models “compute-intensive” in this thread.

Epoch AI · 3 months
Through our search, we’ve identified 81 models above this threshold, and another 86 that we believe likely exceed this threshold, but do not have publicly available training details.

Epoch AI · 3 months
The first model trained on over 10^23 FLOP, AlphaGo Master, was published in 2017, but the large majority of compute-intensive models were released after 2021.

Epoch AI · 3 months
Just under half of these models have downloadable model weights. The downloadable model with the most training compute to date is Falcon-180B, which was trained on 3.8*10^24 FLOP, or roughly ⅕ as much compute as GPT-4.

Epoch AI · 26 days
8/ If training compute has grown 4-5x/year so far, what are the implications? First and foremost, it is a reasonable baseline for future predictions before considering reasons to expect a slowdown or speed-up.

Epoch AI · 19 days
Will the "data wall" result in a halt in progress from scaling? Not a halt, perhaps a slowdown. After 2028, we could continue training bigger models (more parameters) on the existing slowly growing stock of data, yielding performance gains, but more slowly. 11/12

Epoch AI · 19 days
To estimate the indexed web's size, we use a method commonly used to study search engines. First, we calculate word frequencies using a web corpus like CC. Then, we search for words with varying frequencies on Google & record the number of pages for each word. 3/12

Epoch AI · 1 month
We recently published a replication study of the well-known Chinchilla scaling law. In doing so, we unified their scaling laws.

Epoch AI · 2 months
GPQA is a set of very difficult questions that are designed to be “Google-proof”. Experts who hold or are pursuing PhDs in the relevant domains answer only 65% of the questions correctly.

Epoch AI · 7 months
We’ve updated our AI Trends dashboard! Most significantly, we have updated the hardware section to reflect the results of our most recent work. Check it out!

Epoch AI · 3 months
However, there are limits to our methods, especially when some developers do not publish details about how they trained their models. We found over 80 models that may have been trained on over 10^23 FLOP, but with unconfirmed training details. But there could be more that we missed.

Epoch AI · 19 days
In practice, using all web text data is hard: crawling everything is complex & costly. Private data from social media & messaging may be off-limits due to privacy & legal concerns, though some labs (e.g. Meta w/ FB & IG) seem to be exploring this avenue. 10/12

Epoch AI · 2 months
8/ We found that the most expensive models now cost tens of millions of dollars to train. The original Transformer from 2017 cost only $900 to train, while GPT-4 cost $78 million, and Gemini Ultra cost nearly $200 million! (Based on cloud compute rental prices)

Epoch AI · 9 months
Model sizes in machine learning increased by 0.1 orders of magnitude per year for several decades, until ~2018. However, between 2018 and 2022, model sizes increased by 1 order of magnitude per year – ten times faster than before!

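Several tweets in this feed quote growth in "orders of magnitude (OOM) per year". The conversion to an annual multiplier is a one-liner, shown here for the rates cited in this feed:

```python
# "OOM per year" converts to an annual multiplier as 10**oom.
for oom in (0.1, 0.5, 0.6, 1.0):
    print(f"{oom} OOM/year = {10**oom:.2f}x per year")
# 0.1 -> 1.26x (model sizes, pre-2018)
# 0.5 -> 3.16x ("more than tripled": training costs, below)
# 0.6 -> 3.98x (SOTA training compute over the last decade, below)
# 1.0 -> 10x   (model sizes, 2018-2022)
```
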
Epoch AI · 2 months
We are hiring a researcher to investigate the economics of AI deployment and automation. We'll accept applications on a rolling basis. This role is remote, and we can hire in many countries!

Epoch AI · 3 months
We’ve updated our Machine Learning Trends dashboard to reflect our most recent research, as well as new developments in the field of AI!

Epoch AI · 2 months
We attempted to replicate the Chinchilla paper's parametric scaling law, and we found some issues.

↳ Quoting Tamay Besiroglu (@tamaybes) · 2 months
The Chinchilla scaling paper by Hoffmann et al. has been highly influential in the language modeling community. We tried to replicate a key part of their work and discovered discrepancies. Here's what we found. (1/9)

Epoch AI · 21 days
So do AI labs spend more on compute than on staff? Yes, but staff costs may still be very significant. The combined cost of hardware makes up 47–67% of the total in our breakdown, whereas R&D staff costs are 29–49% (including equity).

Epoch AI · 13 days
ICYMI: we did a deep dive into the cost to develop and train frontier AI models.

↳ Quoting Epoch AI (@EpochAIResearch) · 21 days
How much does it cost to develop frontier AI models? Our study (w/ @StanfordHAI) finds hardware costs grow 2.4x/year, with training alone in the tens of millions for models like GPT-4. Full development costs (when amortized) are over $100M for the most advanced models. 🧵

Epoch AI · 3 months
We are hiring a Software Engineer! This person will own and maintain our website and develop an interactive data visualisation library, among other responsibilities. The role is remote and we can hire in many countries. Apply before April 13th.

Epoch AI · 2 months
The new GPT-4 Turbo scored 46.5%, which is around 3 percentage points better than the previous version, and over 10 points better than the original GPT-4! It's also partway between Claude 3 Sonnet and Opus. This suggests that the model is a meaningful improvement.

Epoch AI · 3 months
6/ We will be investigating the validity of this argument with more realistic assumptions in future work. Read our short article now at the link below, and follow us to be notified of future work!

Epoch AI · 9 months
One of the largest costs in AI development is the monetary cost of computation to train a system. Over the past decade, this cost has more than tripled each year (0.5 orders of magnitude per year) for notable systems.

Epoch AI · 8 months
We're pleased to see our data inform the current discussions about the recent executive order on compute, and broader discussions about frontier model regulation.

↳ Quoting Lennart Heim (@ohlennart) · 8 months
Putting the EO's training compute threshold into perspective: No publicly known model currently exceeds this. The threshold of 1e26 FLOP is roughly 5x that of GPT-4 by estimates, which suggests the training compute cost alone could be around ≈$250M.

Epoch AI · 3 months
Note that while we previously estimated that AI developers will run out of high-quality text data by 2026, we'll soon be releasing an updated analysis indicating that there may be more room to scale AI systems before developers run into data limits.

Epoch AI · 2 months
2/ Before 2014, most notable (meaning highly-cited, widely-used, or state-of-the-art) ML models were released by academia, but industry has taken the lead since then. In 2023, there were 51 notable models developed by industry, compared to just 15 from academia.

Epoch AI · 3 months
How will AI training costs grow over time? This new Fortune article features our research into this question, including quotes from our director @Jsevillamol and a graph of our (extrapolated) projection of training costs.

↳ Quoting FORTUNE (@FortuneMagazine) · 3 months
The cost of training AI could soon become too much to bear. Here's why.

Epoch AI · 2 months
Meanwhile, under the US executive order on AI safety, AI systems that are trained primarily on biological sequence data are subject to reporting requirements if their training compute exceeds 1E23 operations. 4E22 is just under half of that threshold.

Epoch AI · 18 days
Our research on training data bottlenecks was also featured in the Associated Press today! Check out the article below, which includes several quotes from our associate director @tamaybes.

Epoch AI · 2 months
3/ The US leads in number of notable models, followed by China, France and Germany.

Epoch AI · 2 months
4/ The number of parameters in ML models has grown exponentially over time. Some models today have over a trillion parameters.

Epoch AI · 18 days
Check out TIME’s great profile of our director @Jsevillamol, and Epoch AI’s history and mission! We want to introduce rigor into AI discourse, to improve decision-making about AI, and to encourage people to take seriously this technology’s transformative potential. (link below)

Epoch AI · 2 months
So AlphaFold 3 is probably still under the 1E23 reporting threshold. However, this requirement will become increasingly relevant as biological models continue to improve and scale.

Epoch AI · 2 months
5/ Training compute, or the amount of computation required to train a system, has also been growing exponentially, crossing many orders of magnitude over decades. The most compute-heavy systems are now approaching 10^26 floating point operations (FLOP), or 100B petaFLOP.

Epoch AI · 21 days
If costs continue to grow at the current trend, the largest training runs will cost over $1 billion by 2027. Only a few labs would be able to fund this, and this level of spending would fund models at a much larger scale than today's.

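As a sketch of the extrapolation, compounding a training-cost anchor at the reported 2.4x/year growth rate crosses $1B within about three years. The anchor below (GPT-4's ~$78M in 2023) is an assumption for illustration; the actual projection is more careful:

```python
# Compounding training costs at the reported 2.4x/year growth rate,
# anchored (as an assumption) to the ~$78M GPT-4 estimate in 2023.
base_cost, base_year, growth = 78e6, 2023, 2.4
for year in range(2024, 2028):
    cost = base_cost * growth ** (year - base_year)
    print(year, f"${cost / 1e6:,.0f}M")
# 2024 $187M, 2025 $449M, 2026 $1,078M, 2027 $2,588M
```
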
Epoch AI · 4 months
Check out this new paper on compute governance, which extensively cites and features our data!

↳ Quoting Lennart Heim (@ohlennart) · 4 months
Today, we’re publishing “Computing Power and the Governance of AI”. We argue that compute is a particularly promising node for AI governance. We also highlight the risks of compute governance and offer suggestions for how to mitigate them. 1/

Epoch AI · 3 months
This New York Times article features our data on training dataset sizes, and cites our research on when available training data might be exhausted.

Epoch AI · 6 months
Epoch is hiring! We’re looking for a Product and Data Visualization Designer to own, maintain and expand the design and visual communication of our research. Apply by this Friday, December 15!

Epoch AI · 3 months
We tried to make our search as comprehensive as possible. We found these models using benchmark leaderboards, model repositories, and extensive searches in 15 different languages.

Epoch AI · 1 month
Beyond Stockfish, we also explored computer vision, SAT solvers, linear programming, and reinforcement learning. Despite data limitations, our estimates suggest the returns to R&D could indeed exceed 1, though the evidence remains inconclusive.

Epoch AI · 2 months
11/ These escalating training costs have contributed to industry dominance of the field. If trends continue, the cost of training ML models will continue to grow rapidly, and become increasingly economically significant.

Epoch AI · 1 month
Why does this matter? In an AI-automated R&D world, the 'returns to research effort' parameter is crucial. If AI systems improve AI software such that a doubling of AI inputs more than doubles the rate of innovation, it could lead to explosive tech growth.

Epoch AI · 2 months
10/ Naturally, since these costs are driven by growing compute, there is a close correlation between training cost and training compute.

Epoch AI · 7 months
We are hiring a Product and Data Visualization Designer to own the design and visual communication of our research. This work will shape how our research is perceived and understood globally. The role is remote, and we can hire in many countries! Apply by December 15.

Epoch AI · 2 months
7/ Most developers do not publish their training expenses, but we often have enough details about the training process, such as the chips used and the duration of training, to produce reasonable estimates of training costs.

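A minimal version of the calculation this tweet describes multiplies chip count, training duration, and a rental price. All values below are hypothetical placeholders; Epoch's actual estimates also account for amortized hardware, energy, interconnect, and more:

```python
# Minimal hardware-rental cost estimate from published training details.
# All values are hypothetical placeholders for illustration.
num_gpus = 10_000       # chips used for the training run
train_hours = 90 * 24   # ~90 days of training
usd_per_gpu_hour = 2.0  # assumed cloud rental price per GPU-hour

cost = num_gpus * train_hours * usd_per_gpu_hour
print(f"estimated training cost: ${cost / 1e6:.0f}M")  # ~$43M
```
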
Epoch AI · 9 months
🧵 Revisiting past work by Epoch: “A time-invariant version of Laplace's rule” by @Jsevillamol and @EgeErdil2

Epoch AI · 10 months
The amount of compute used to train state-of-the-art machine learning systems has grown 0.6 orders of magnitude per year over the last decade.

Epoch AI · 1 year
Epoch now has a newsletter. Subscribe here!

Epoch AI · 2 months
9/ These training costs have also grown exponentially over time.

Epoch AI · 2 months
12/ If you’d like to learn more, check out the full AI Index report below!

Epoch AI · 11 months
We are hiring an ML Hardware Researcher. As our go-to specialist in computing hardware, your work will be crucial for improving our understanding of the future and impact of AI. The role is remote, and we can hire in many countries! Apply by August 20th.

Epoch AI · 5 months
We hired senior researcher David Owen (@everysum) to help with research and management, and data scientist Robi Rahman (@robi_rahman) and data contractors to scale up our data efforts. We updated the style of our website and visualizations to better reflect the quality of our work.

Epoch AI · 1 year
In the latest 80,000 Hours podcast, Tom Davidson explores AI's exponential growth potential. Epoch played a crucial role in the work discussed.

Epoch AI · 8 months
Just released: new report by Epoch on ML hardware trends.

↳ Quoting Marius Hobbhahn (@MariusHobbhahn) · 8 months
We took a careful look at trends in ML hardware. Main finding: lower-precision formats like FP16 & INT8 combined with specialized tensor cores increase computational performance by up to 10x on average. NVIDIA's H100 sees 30x speedup with INT8 vs FP32. 1/
