Hassan Shapourian

@Hasan_Shap

Followers: 440 · Following: 1K · Media: 19 · Statuses: 200

AI and Quantum. currently @ZyphraAI. ex-Cisco, ex-Microsoft. Former postdoc at Harvard, MIT. Former student at Princeton, UIUC, and U.Chicago.

Joined October 2012
@Hasan_Shap
Hassan Shapourian
2 years
Very much enjoyed working with and learning from a distinguished group of physicists turned ML scientists. Happy to announce my very first ML paper. Let the journey begin… https://t.co/gw47r3QMKl
huggingface.co
@danintheory
Dan Roberts
2 years
Do LLMs really need to be so LLarge? That's a rejected title for a new paper w/ @Andr3yGR, @kushal_tirumala, @Hasan_Shap, @PaoloGlorioso1 on pruning open-weight LLMs: we can remove up to *half* the layers of Llama-2 70B w/ essentially no impact on performance on QA benchmarks. 1/
0
2
17
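The pruning result in the tweet above can be sketched in miniature: roughly, one measures how similar the hidden state entering layer l is to the state n layers deeper, and drops the contiguous block whose removal perturbs the representation least. A minimal pure-Python sketch with toy vectors standing in for averaged hidden states (all names and data here are illustrative, not the paper's actual code):

```python
import math

def cosine_distance(u, v):
    # 1 - cos(angle) between two vectors; small means "nearly the same state"
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return 1.0 - dot / (nu * nv)

def best_block_to_prune(layer_inputs, n):
    """layer_inputs[l] is the (averaged) hidden state entering layer l;
    layer_inputs[L] is the final output of the stack of L layers.
    Returns the start index of the depth-n block whose removal
    changes the representation least."""
    L = len(layer_inputs) - 1
    return min(range(L - n + 1),
               key=lambda l: cosine_distance(layer_inputs[l], layer_inputs[l + n]))

# Toy example: layers 0 and 1 barely change the state, layer 2 rotates it,
# so the cheapest depth-2 block to drop starts at index 0.
states = [[1.0, 0.0], [1.0, 0.01], [1.0, 0.02], [0.0, 1.0]]
start = best_block_to_prune(states, 2)
```

In a real model one would then delete those layers from the module list and optionally do a small amount of healing fine-tuning, but the block-selection logic is the core idea.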
@ZyphraAI
Zyphra
19 days
In collaboration with @AMD and @IBM, we @ZyphraAI are sharing ZAYA1-base, the first large-scale model trained on an integrated AMD hardware, software, and networking stack. ZAYA1 uses Zyphra’s novel MoE architecture with 760M active and 8.3B total params. Tech paper and more below👇
3
50
280
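The "760M active / 8.3B total" phrasing above reflects how mixture-of-experts models work: each token is routed to only a few experts, so the parameters touched per token (active) are far fewer than those stored (total). A toy accounting sketch, with made-up component sizes that are not ZAYA1's real configuration:

```python
def moe_param_counts(shared_params, params_per_expert, num_experts, top_k):
    """Total vs. per-token-active parameter counts for a simple MoE stack.

    shared_params:     attention/embedding/etc. params every token uses
    params_per_expert: size of one expert MLP
    num_experts:       experts stored in the model
    top_k:             experts each token is actually routed to
    """
    total = shared_params + num_experts * params_per_expert
    active = shared_params + top_k * params_per_expert
    return total, active

# Illustrative numbers only (in billions of params):
total, active = moe_param_counts(shared_params=0.5, params_per_expert=1.0,
                                 num_experts=8, top_k=1)
```

Taking the tweet's own figures, 0.76B active out of 8.3B total means only about 9% of the weights participate in any single forward pass.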
@Alibaba_Qwen
Qwen
2 months
Introducing the compact, dense versions of Qwen3-VL — now available in 4B and 8B pairs, each with both Instruct and Thinking variants. ✅ Lower VRAM usage ✅ Full Qwen3-VL capabilities retained ✅ Strong performance across the board Despite their size, they outperform models
73
231
1K
@Hasan_Shap
Hassan Shapourian
2 months
Check out the new paper from @ZyphraAI
0
0
2
@gabriberton
Gabriele Berton
5 months
A year ago Ross Girshick (the object detection GOAT) gave a talk on "real tasks" (the end goal of an ML system) vs "fake tasks" (the intermediate tasks we created to achieve the real tasks). Most vision tasks, like classification and detection, are fake. (1/5)
@sainingxie
Saining Xie
5 months
@gabriberton slam = object detection, iykyk
9
37
401
@polynoamial
Noam Brown
5 months
Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM—under the same time limits as humans, without tools. As remarkable as that sounds, it’s even more significant than the headline 🧵
@alexwei_
Alexander Wei
5 months
1/N I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).
142
530
5K
@karpathy
Andrej Karpathy
6 months
Products with extensive/rich UIs: lots of sliders, switches, menus, with no scripting support, and built on opaque, custom, binary formats are ngmi in the era of heavy human+AI collaboration. If an LLM can't read the underlying representations and manipulate them and all of the
334
581
6K
@ZyphraAI
Zyphra
8 months
Zyphra is releasing our first reasoning model, ZR1-1.5B. This small but powerful reasoning model excels at both math and code, making it one of the best models in these categories for its size. It also uses 60% fewer reasoning tokens than comparable models. 🆓Apache 2.0 license.
15
63
503
@condensed_the
Condensed Matter Theory Center
9 months
~10K people are working in US quantum tech; this is ~5 years' worth of US physics PhD production (not all are physics PhDs). Estimated total expense/yr ~ $5 billion; income ~ zero. How long can this continue? If/when a quantum winter comes, what would happen to these QC workers?
8
7
68
@ZyphraAI
Zyphra
10 months
Today, we're excited to announce a beta release of Zonos, a highly expressive TTS model with high fidelity voice cloning. We release both transformer and SSM-hybrid models under an Apache 2.0 license. Zonos performs well vs leading TTS providers in quality and expressiveness.
139
441
3K
@ZyphraAI
Zyphra
1 year
We @Zyphra previously described our preliminary RAG system that achieved SOTA performance on the HashHop long-context task. We are now excited to share our paper presenting a newer version of this RAG system that achieves SOTA results across multiple long-context benchmarks.
2
12
57
@letonyo
Anthony Leverrier
1 year
Oh this is great! Who said quantum computers had to work with qubits? Turns out it's possible to design a quantum algorithm for factoring that only requires 3 quantum oscillators and a single qubit! (1/4)
6
12
123
@ZyphraAI
Zyphra
1 year
We’ve been hard at work with @AMD to optimize training for AMD GPUs. Today, we’re sharing a critical milestone towards this goal: FlashAttention-2 (FA2) and Mamba-2 backward kernels on AMD MI300X that surpass NVIDIA H100. We @ZyphraAI are the first to achieve this.
8
27
141
@QuantumAephraim
Aephraim Steinberg
1 year
Damned impressive, if taken at their word. "Logical computation demonstrated with a neutral atom quantum processor" [claiming up to 28 log. qubits in 256 phys., implementing real error-corrected algorithms] https://t.co/SGkg8YVQs1 #Quantum #QuantumComputing #AtomComputing
arxiv.org
Quantum computing experiments are transitioning from running on physical qubits to using encoded, logical qubits. Fault-tolerant computation can identify and correct errors, and has the potential...
0
9
35
@karpathy
Andrej Karpathy
1 year
Remember exercise pages from textbooks? Large-scale collection of these across all realms of knowledge now moves billions of dollars. Textbooks written primarily for LLMs, compressed to weights, emergent solutions served to humans, or (over time) directly enacted for automation.
116
345
4K
@ZyphraAI
Zyphra
1 year
Did you know that a leading open LLM dataset, DCLM, is ~80% duplicates? We discovered this while making Zyda2. Although performance seems fine on evals, downstream effects are less clear. So here are 750B deduped, high-quality tokens from DCLM: https://t.co/WZIDfLoo7l
huggingface.co
3
26
160
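Exact-duplicate detection of the kind that surfaces an "~80% duplicates" figure can be done by hashing a normalized form of each document (near-duplicate detection typically layers MinHash/LSH on top). A minimal exact-dedup sketch; the normalization rule here is an assumption for illustration, not Zyda2's actual pipeline:

```python
import hashlib

def dedup(docs):
    """Keep the first occurrence of each document, comparing a
    whitespace- and case-normalized fingerprint."""
    seen, unique = set(), []
    for d in docs:
        key = hashlib.sha256(" ".join(d.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(d)
    return unique

# "Hello world" and "hello   world" normalize to the same fingerprint.
kept = dedup(["Hello world", "hello   world", "unique doc"])
```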
@MBarkeshli
Maissam Barkeshli
1 year
The Nobel Committee recognizes profound contributions from Physics to ML / AI. There's a lot more where that came from. We are in an era where an increasing number of physicists are making important contributions to ML / AI, and even more are needed going forward.
@NobelPrize
The Nobel Prize
1 year
BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
2
3
24
@PedramRoushan
Pedram Roushan
1 year
A major step in error correction @GoogleQuantumAI: Pushing the surface code to the next level. Below threshold: Distance-7 logical qubit, 0.0014 error per cycle, >2x better than physical qubits. https://t.co/N0UKUFtjWn
7
21
158
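The "below threshold" claim above has a standard quantitative form: once physical error rates are below the code's threshold, the logical error rate per cycle is suppressed exponentially in code distance, roughly p_L ≈ A·Λ^(-(d+1)/2), so a suppression factor Λ of 2 means each step from distance d to d+2 halves the error. A quick numeric sketch; A and Λ here are illustrative, not the experiment's fitted values:

```python
def logical_error_per_cycle(d, A=0.1, Lam=2.0):
    """Standard below-threshold scaling ansatz for a distance-d surface code:
    logical error per cycle ~ A * Lam**(-(d + 1) / 2)."""
    return A * Lam ** (-(d + 1) / 2)

# With Lam = 2, going from distance 5 to distance 7 halves the logical error.
ratio = logical_error_per_cycle(7) / logical_error_per_cycle(5)
```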
@zlatko_minev
Zlatko Minev
1 year
What do graph theory, many-body physics, the golden ratio, and Fibonacci anyons have in common? In our experiment, arXiv link below, I’m excited how a very fundamental graph problem – https://t.co/YzzlTXMrce 1/...
2
23
106