Mark Saroufim

@marksaroufim

Followers: 8,862 · Following: 653 · Media: 194 · Statuses: 1,640

@pytorch dev broadly interested in performance

github.com/msaroufim
Joined April 2009
@marksaroufim
Mark Saroufim
2 years
I've literally met 0 people quitting ML to go into crypto
@cdixon
cdixon.eth
2 years
"Silicon Valley is no longer the edgy tech frontier as workers flee Google and Amazon for crypto and Web3 startups, recruiters say" 👇👇
74
326
2K
67
69
2K
@marksaroufim
Mark Saroufim
3 years
A career in Machine Learning is about implementing convolutional neural networks in a new framework until you die.
25
108
1K
@marksaroufim
Mark Saroufim
6 months
Very pumped that this blog is finally out: 8x perf improvements over SAM in native PyTorch, no C++ needed. The blog is fantastic as a case study of the different optimizations that matter 1. torch.compile and how to rewrite your graph breaks away 2. Write…
Tweet media one
17
135
884
@marksaroufim
Mark Saroufim
6 months
This is a good question; it gets to the root of the tradeoff between performance and flexibility. So how do PyTorch folks think about this? Long answer: if we're in a world where a single base model can be fine-tuned over all tasks and we're fairly certain that this base model…
@yacineMTB
kache (dingboard.com)
6 months
why use pytorch/jax at all? why don't people just write CUDA programs?
52
4
305
11
86
603
@marksaroufim
Mark Saroufim
9 months
PyTorch is already recursively self-improving
Tweet media one
10
53
544
@marksaroufim
Mark Saroufim
4 months
Cuda kernels in google colab!
Tweet media one
13
46
498
@marksaroufim
Mark Saroufim
3 years
GPU shortages lmao 1. Open Google Colab 2. !apt-get install tmate 3. !ssh-keygen 4. !tmate 5. ssh into the URL for a free CPU/GPU/TPU machine 6. Invite your friends to the same machine and vibe together @theshawwn what's the coolest thing I can do here?
Tweet media one
10
81
501
@marksaroufim
Mark Saroufim
8 months
Gave a talk on why Llama 13B won't fit on my 4090 - it's an overview of all the main sources of memory overhead and how to reduce each of them. Simple for those at the frontier but it will help the newbs among us estimate back-of-the-envelope VRAM requirements fast
17
72
484
@marksaroufim
Mark Saroufim
3 months
I've often heard "I wish PyTorch had more dev internals documentation" when in reality the problem is we have too much. PyTorch is a deep project and it touches on pretty much all aspects of computer science so here are my favorite references Intro @tarantulae for an overview of…
4
64
454
@marksaroufim
Mark Saroufim
3 years
With the release of Needham's new book on Visual Differential Geometry and Forms, I can't help but remember fondly the beautifully clear visual math books I've loved. A thread
5
87
442
@marksaroufim
Mark Saroufim
3 years
It's only 2021 and @huggingface ships faster than any ML company, #EleutherAI is a Discord group rivaling the best research labs, @ykilcher is a conference and the best teachers code live
4
42
349
@marksaroufim
Mark Saroufim
1 year
If you've been benchmarking PyTorch 2.0 over the weekend I would highly recommend you 1. Get an A10G, which is a cost-effective A100, so you can use tensor cores 2. Add torch.backends.cuda.matmul.allow_tf32 = True to your code, otherwise you won't be using your tensor cores
11
29
324
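A minimal sketch of the flag mentioned in the tweet; the cuDNN counterpart is my own addition for completeness:

```python
import torch

# Enable TF32 for fp32 matmuls so Ampere-class GPUs (A100 / A10G)
# can route them through tensor cores; off by default in recent PyTorch.
torch.backends.cuda.matmul.allow_tf32 = True

# The analogous switch for cuDNN convolutions (this one defaults to True).
torch.backends.cudnn.allow_tf32 = True
```

TF32 trades a few mantissa bits for a large matmul speedup, which is usually a safe trade for training.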
@marksaroufim
Mark Saroufim
3 years
@zacharylipton The problem with R is that it limits you to 5 figure salaries but Python supports 6 figure salaries
7
12
292
@marksaroufim
Mark Saroufim
2 years
Distributed ML is probably more useful to learn today than ML. Running 1 model on 1 GPU is a solved problem. The highest-paid people in bigtech are distributed systems experts see . Pivot to any software job if ML suddenly collapses.
@slashML
/MachineLearning
2 years
How useful is knowledge of parallel programming in ML?
0
3
40
11
35
289
@marksaroufim
Mark Saroufim
1 year
Kinda wild that inductor, the default backend compiler for torch.compile(), is about 16K lines of Python code. There's never been a better time to become an ML compiler hacker. So what kind of optimizations does an ML compiler need to do?
Tweet media one
6
26
261
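For context, a minimal sketch of how inductor gets invoked: it is the default backend of torch.compile, so no extra arguments are needed (the toy function is my own illustration):

```python
import torch

def f(x):
    # A chain of pointwise ops: exactly the kind of graph
    # inductor will fuse into a single generated kernel.
    return torch.relu(x) * 2 + 1

compiled_f = torch.compile(f)  # inductor is the default backend

x = torch.randn(8)
# The first call to compiled_f(x) traces the graph and generates code;
# subsequent calls reuse the compiled artifact.
```

Kernel fusion like this is one answer to the question above; others include memory planning, layout selection, and autotuning tile sizes.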
@marksaroufim
Mark Saroufim
8 months
Here's to the crazy ones, wires dangling all over their apartments, 2MW fire hazard PSUs, the 2 bit quantizers, the anons, the weight mergers, the dataset creators, the discord rebels While some may see them as the GPU poor ones, we see genius because the people who are crazy…
@hardmaru
hardmaru
8 months
I prefer to operate in “GPU-Poor” mode. I don’t agree with the take from the semianalysis piece. Creative breakthroughs often occur under constraints—new systems, models, and methods that can better take advantage of even larger-scale compute
Tweet media one
72
135
1K
3
27
240
@marksaroufim
Mark Saroufim
2 years
In this blog @min_jean_cho and I explain from first principles, with looots of profiles and pictures, how to run @pytorch fast on @intel CPUs and apply our lessons to torchserve TL;DR: Avoid bottlenecked GEMMs and non-uniform memory access (NUMA) via core pinning
Tweet media one
4
46
225
@marksaroufim
Mark Saroufim
1 year
One cool side effect of language models is that they might blur the distinction between an interpreted and a compiled language Here's a VS Code extension I made that guesses the shapes of PyTorch tensors without running my Python script
Tweet media one
4
30
224
@marksaroufim
Mark Saroufim
2 years
I'm gonna do an 8h speedrun of @rasbt 's new 800-page book on scikit-learn and PyTorch in one sitting on Twitch starting at noon PST on Saturday, March 19 Legit meal prepping for this to work out, @rasbt will be there for emotional support
Tweet media one
7
18
206
@marksaroufim
Mark Saroufim
2 years
I've become insanely one-dimensional when it comes to solving ML problems: accuracy too low? Make the model faster so you can run more experiments. Model has bugs? Make the model faster so you can run more tests. CI taking too much time? Make the model faster.
8
14
206
@marksaroufim
Mark Saroufim
3 years
New post on MLOps The existing narrative has been: model sizes have increased exponentially, ML can solve everything, there are lots of startups, modeling is the easy part, all followed by a contact-us form. What are the incentives that govern this space?
4
30
202
@marksaroufim
Mark Saroufim
2 years
Can someone at @OpenAI confirm whether DALL-E 2 can generate images viewed from a specific angle? Because if it can, we could use a fast NeRF implementation to use DALL-E 2 to generate arbitrary 3D assets for games
7
11
187
@marksaroufim
Mark Saroufim
2 years
People aren't paying enough attention to how big a deal nbdev is. Notebook as a service providers historically have failed because of small margins competing vs cloud providers and because there is no easy way to go from experiment to prod. So some notebook providers provide 1/n
4
31
186
@marksaroufim
Mark Saroufim
3 years
Watching the @huggingface infinity talks on how they got 1ms BERT GPU latency and 3ms CPU latency They estimate that it takes 2-3 engineers about 2 months to get less than 20ms latency, sounds about right
3
22
184
@marksaroufim
Mark Saroufim
5 months
Packed room for the LLM fine-tuning competition
Tweet media one
5
21
169
@marksaroufim
Mark Saroufim
3 years
Just watched Karpathy's CVPR talk - my favorite ideas Simplifying your input data modalities simplifies your org structure. A simple ResNet for each camera, aggregated over time in a single base transformer, branching into multiple trunks for each team 1/n
Tweet media one
1
23
169
@marksaroufim
Mark Saroufim
1 year
It’s out! The best introduction to distributed systems applied to ML I’ve read. All in easy to understand Python by my favorite textbook author @maxpumperla
Tweet media one
2
18
154
@marksaroufim
Mark Saroufim
2 years
I am proud to be part of an ML community that has open sourced its code and also what matters most: its governance. PyTorch is existentially important to Meta but also to the many brilliant engineers working on improving it daily. They all deserve a say.
0
11
140
@marksaroufim
Mark Saroufim
17 days
Got a sneak peek, best Triton tutorial I've read so far. Grokked the differences between the Triton & CUDA programming models. Gentler than the official Triton docs and goes into advanced topics like swizzling by the end Tomorrow Saturday April 13 at noon PST
@UmerHAdil
Umer Adil
18 days
On Saturday I'll hold this week's lecture in the cuda mode group (cc @marksaroufim @neurosp1ke ) "A Practitioner's Guide to Triton" Join via Discord: I'll cover: why & when to use, programming model, real examples, debugging, benchmarking, tuning
Tweet media one
4
18
102
1
16
139
@marksaroufim
Mark Saroufim
2 years
Super slick example of how to invert a neural network using fx by @jamesr66a here. So now, assuming I can find a nice example of an invertible attention network and train a neural renderer, I can get photogrammetry for free 🤯
2
14
133
@marksaroufim
Mark Saroufim
3 years
Goodbye FAANG welcome MAAAM
12
9
132
@marksaroufim
Mark Saroufim
3 years
If companies were valued by the repercussions they'd have if they suddenly disappeared, what would be the most valuable company in the world? My guess is TSMC.
25
6
126
@marksaroufim
Mark Saroufim
3 years
My team, PyTorch Partner Engineering, is hiring If you're interested in contributing to PyTorch all day, having most of your code and specs be open source and meeting the best of the PyTorch community you should apply! DM me if you have any questions
1
24
117
@marksaroufim
Mark Saroufim
3 years
This book by @mli65 et al is the best performance tuning and benchmarking guide I've read for Deep Learning. It uses TVM but lessons apply to any framework.
1
19
113
@marksaroufim
Mark Saroufim
1 year
I haven't seen many people complaining that torch.compile() is crashing and there's a reason for that! The minifier by @cHHillee and @anijain2305 is the silent star of 2.0. I've used it to turn crashing 1000+ line models into 10 lines. I view it as a revolution in customer support.
3
10
118
@marksaroufim
Mark Saroufim
1 year
"Cramming: Training a Language Model on a Single GPU in One Day" reads like a refactor of the BERT literature: dropping attention and linear biases, dropout and the nonlinear head, while keeping lindy optimizers like Adam.
Tweet media one
2
28
117
@marksaroufim
Mark Saroufim
3 years
If you're building something cool with PyTorch please consider applying. Reach out to me if you need help refining your pitch.
@PyTorch
PyTorch
3 years
We're excited to announce PyTorch Developer Day 2021! Application and Call for Content are now open. Learn more: #PTD2
14
49
277
0
17
113
@marksaroufim
Mark Saroufim
1 year
I’ve been dunking on ML-models-as-a-service companies for years but OpenAI has proven me hilariously wrong. Every model I’ve tried has been a galaxy apart from anything before it.
4
5
110
@marksaroufim
Mark Saroufim
1 year
@karpathy NVIDIA docs have a nice table for this Good rule of thumb is 128 / dtype but seems like that heuristic changed slightly for A100
Tweet media one
1
7
112
@marksaroufim
Mark Saroufim
4 months
Feel free to DM me if interested. We’re gonna try to keep the group small for now until we figure things out.
@neurosp1ke
Andreas Köpf
4 months
Would you be interested in joining a CUDA reading group on discord to learn more about writing high-performance kernels?
27
15
135
4
6
101
@marksaroufim
Mark Saroufim
3 years
AutoML should be rethought as: what’s the minimum acceptable training accuracy to hit some maximum target inference latency?
6
7
99
@marksaroufim
Mark Saroufim
5 months
Ok I’m at NeurIPS to talk about our in-person competition workshop on LLM efficiency on Friday Dec 15 between 1:30 - 4:30 pm CST Competitors had to fine-tune 1 LLM in 1 day on 1 GPU and the reception was incredible. This was one of the most popular ML competitions of the year.…
Tweet media one
1
21
94
@marksaroufim
Mark Saroufim
2 years
Evergreen tweet
Tweet media one
Tweet media two
3
5
90
@marksaroufim
Mark Saroufim
2 years
I don’t think I’ve ever seen any feature this widely asked for. Take a look at one of the many GitHub issues about this. Congrats to the team for shipping!
@PyTorch
PyTorch
2 years
We’re excited to announce support for GPU-accelerated PyTorch training on Mac! Now you can take advantage of Apple silicon GPUs to perform ML workflows like prototyping and fine-tuning. Learn more:
Tweet media one
79
711
3K
3
11
88
@marksaroufim
Mark Saroufim
2 years
Excited to have participated in torchdata 0.4 release Focus was support for remote filesystems like @awscloud S3, @huggingface datasets, fsspec and ez load to new prototype DataLoaderv2 optimized for streaming. Even comes with @TensorFlow record support.
0
10
89
@marksaroufim
Mark Saroufim
3 years
Dear @OReillyMedia this could be us
Tweet media one
3
6
86
@marksaroufim
Mark Saroufim
4 months
On the subject of codegen I also wanna plug from torch.utils.cpp_extension import load_inline: pass it a CUDA kernel as a string and it'll generate the right build scripts for you
Tweet media one
@johnowhitaker
Jonathan Whitaker
4 months
Very neat hack from @marksaroufim in the first CUDA_MODE lecture: use torch.compile to get triton code as a starting point for a custom kernel!
Tweet media one
4
24
192
1
6
86
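A minimal sketch of the load_inline flow the tweet describes; the elementwise-square kernel is a made-up example, and actually compiling it requires a CUDA toolchain, hence the availability guard:

```python
import torch
from torch.utils.cpp_extension import load_inline

# A toy CUDA kernel plus a C++ launcher, passed as a plain string.
cuda_src = r"""
__global__ void square_kernel(const float* x, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = x[i] * x[i];
}

torch::Tensor square(torch::Tensor x) {
    auto out = torch::empty_like(x);
    int n = x.numel();
    int threads = 256;
    square_kernel<<<(n + threads - 1) / threads, threads>>>(
        x.data_ptr<float>(), out.data_ptr<float>(), n);
    return out;
}
"""

# Declaration only; load_inline generates the pybind boilerplate.
cpp_src = "torch::Tensor square(torch::Tensor x);"

if torch.cuda.is_available():
    ext = load_inline(name="square_ext", cpp_sources=cpp_src,
                      cuda_sources=cuda_src, functions=["square"])
    x = torch.randn(1024, device="cuda")
    assert torch.allclose(ext.square(x), x * x)
```

The first call compiles with nvcc and caches the build, so iteration stays inside one Python file instead of a CMake project.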
@marksaroufim
Mark Saroufim
2 years
GitHub Actions have been legit more useful to me than any cloud service: testing, benchmarking, linting, binary building, and publishing. Such an incredible force multiplier for small teams.
4
6
86
@marksaroufim
Mark Saroufim
2 years
One of the benefits of insisting on reading papers only 2 years after they get published: I can get mind blown independently of when everyone else is getting mind blown. E.g: NeRF Paper: PyTorch implementation:
Tweet media one
5
7
85
@marksaroufim
Mark Saroufim
2 years
C++ tutorials are either here's the syntax for a for loop or what do you think "static const volatile unsigned long long int x" means
6
1
85
@marksaroufim
Mark Saroufim
2 years
Alright, let's see how this works on stream right now. Will try to run PyTorch in a browser or maybe write a tiny Jupyter extension idk, let's see what happens.
@anacondainc
Anaconda
2 years
📢 Did you hear the news from PyCon!? We are thrilled to introduce PyScript, a framework that allows users to create rich Python applications IN THE BROWSER using a mix of Python with standard HTML! Head to for more information. 🧠 💥
45
751
3K
3
7
84
@marksaroufim
Mark Saroufim
2 years
Really enjoyed reading this thread by @wightmanr from 2019 on his tips and tricks to remove data loading bottlenecks Feels like it aged perfectly, maybe we have more new tools like ffcv and DALI but overall nothing major has changed.
1
13
81
@marksaroufim
Mark Saroufim
2 years
Me when someone asks me: "How do you know that this feature will have impact?"
Tweet media one
1
2
80
@marksaroufim
Mark Saroufim
3 years
A curious neural network on Google Colab has access to its own environment, SSHs into thousands of free CI instances and notebooks, mines Bitcoin, signs up for AWS, finetunes itself with deepspeed, recruits grad students to fix bugs and create community, burns atmosphere, gg
3
2
75
@marksaroufim
Mark Saroufim
3 months
@typedfemale There was a profile of this on It used to be worse! We introduced lazy imports to make it somewhat manageable The remaining issue is mostly registrations to the dispatcher; if more people complain loudly enough we might fix it!
2
3
74
@marksaroufim
Mark Saroufim
2 years
Fx has been the funnest PyTorch feature I’ve ever worked with. I am so bad at C++ but can now do things like auto distillation, model runtime export, feature extraction, auto shape inference, layer splitting. Model is function is data 🌀. James and team make me feel smart.
@jamesr66a
James Reed
2 years
Interested in learning the design principles and technical decisions that went into PyTorch's new `torch.fx` program transformation framework? Learn all that and more from our new paper on arXiv
8
47
223
2
5
74
@marksaroufim
Mark Saroufim
2 years
Reinforcement Learning is a Game Design Problem A 2-pager describing how I'd like the field to move towards environments that are differentiable, multi-agent, compositional, multi-modal, continuous and self-supervised
2
5
73
@marksaroufim
Mark Saroufim
3 years
@tszzl My undergrad engineering school valued the SAT equally to my bad 10th and 11th grade grades. 12th grade grades didn't matter. Got myself into detention for a couple weeks to study for the SAT, did great, got into my school of choice.
0
1
64
@marksaroufim
Mark Saroufim
2 years
Anyone interested in my take on web3? Something nuanced in between grumpy old man and fanatic pumper.
14
1
66
@marksaroufim
Mark Saroufim
2 years
Just a regular day clocking in and out of Github
Tweet media one
2
3
69
@marksaroufim
Mark Saroufim
2 months
If you're looking to influence PyTorch's roadmap for lower precision dtypes, quantization and sparsity algorithms please leave some feedback on This is from the team that brought you the sam-fast and gpt-fast quantization kernels
1
8
69
@marksaroufim
Mark Saroufim
3 years
In the same way that the deep learning boom made scientists and statisticians wealthy, the metaverse will make game developers wealthy. I’m betting far more wealthy.
7
1
63
@marksaroufim
Mark Saroufim
2 years
@andrew_n_carr I've seen lots of people quit Amazon/Meta/Google/Microsoft/Stripe to go work at Amazon/Meta/Google/Microsoft/Stripe
1
0
64
@marksaroufim
Mark Saroufim
3 years
In 1933 Max Planck wrote a brilliant brief summary for how to influence a doctrinal system. Change never comes from within, it needs tons of external pressure in the form of an alternative with strong theoretical and experimental results. ie: not “critiques”
Tweet media one
Tweet media two
2
10
64
@marksaroufim
Mark Saroufim
2 years
The LeCun vs Marcus feud is not going to get settled in a debate, it'll only get settled in hand-to-hand combat in a steel cage at the sold-out arena in New Orleans Convention Center. So who are you betting on?
Yann "Convolver" Lecun
466
Gary "The Symbol" Marcus
137
12
8
63
@marksaroufim
Mark Saroufim
1 year
On-prem deployments won't be a thing for ML, too expensive to throw out your hardware if you mess up your CUDA installation
9
2
63
@marksaroufim
Mark Saroufim
2 years
The best engineers I've ever met are full-stack: not iOS + node, but doing everything. They find interesting customers at scale (sales), prioritize problems (PM), solve in one-off ways (SA), scale a solution with code (dev) or process (EM) & engage with the community all the time (DA)
2
5
59
@marksaroufim
Mark Saroufim
2 years
If you're so smart why are you studying the Amazon leadership principles and your friend is retired with a portfolio of JPGs?
2
6
61
@marksaroufim
Mark Saroufim
2 years
This is what happens when not wanting to learn Kubernetes becomes your identity But seriously I think we may have built one of the easiest to use 100% open source cloud launchers for distributed ML training
Data scientist != infra engineer. Thanks @marksaroufim for joining our Ray Meetup last week and sharing how to make it easier to train large-scale #ML jobs in #opensource . If you missed it, you can watch the recording here:
1
14
83
1
6
57
@marksaroufim
Mark Saroufim
2 years
Kinda hooked on dataclasses and abstractmethod in Python now. Command line arguments? That's a dataclass. Artifacts? Also a dataclass. With hash/frozen=True everything is serializable. abstractmethod takes a dataclass and returns a dataclass. 1 layer inheritance, ez modularity.
4
4
58
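A small sketch of the pattern described above; all the names (TrainArgs, Artifact, Step) are invented for illustration:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen dataclasses are hashable
class TrainArgs:          # "command line arguments? that's a dataclass"
    lr: float = 1e-3
    epochs: int = 10

@dataclass(frozen=True)
class Artifact:           # "artifacts? also a dataclass"
    path: str
    accuracy: float

class Step(ABC):
    # abstractmethod takes a dataclass and returns a dataclass
    @abstractmethod
    def run(self, args: TrainArgs) -> Artifact: ...

class Train(Step):        # 1 layer of inheritance
    def run(self, args: TrainArgs) -> Artifact:
        return Artifact(path="model.pt", accuracy=0.9)

artifact = Train().run(TrainArgs(lr=3e-4))
```

Because every boundary is a frozen dataclass, steps compose and cache cleanly: equal inputs hash equally.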
@marksaroufim
Mark Saroufim
3 years
Hard to imagine a world where Flask style routing doesn't become the norm
Tweet media one
6
4
58
@marksaroufim
Mark Saroufim
2 years
C++ 23 is wild. Modules, automatic parallelization, pattern matching. If conan picks up as the popular package manager, does Rust lose its value proposition? From
Tweet media one
7
2
55
@marksaroufim
Mark Saroufim
10 months
Our goal with this competition is to publicize techniques that make fine-tuning reproducible and affordable Starting with a base model, you can finetune it however you like as long as it takes less than 24h on either a 4090 or an A100
@MSFTResearch
Microsoft Research
10 months
1 LLM + 1 GPU + 1 Day...The NeurIPS Large Language Model Efficiency Challenge aims to democratize access to state-of-the-art LLMs. Participate in the challenge here:
Tweet media one
12
100
417
4
2
51
@marksaroufim
Mark Saroufim
2 years
Python multiple dispatch in a tweet

handlers = {float: handle_float, int: handle_int, Half: handle_half}

def dispatch(obj):
    func = handlers.get(type(obj))
    if func:
        return func(obj)
    raise RuntimeError(f"No handler for {obj}")
4
1
52
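The stdlib ships essentially this pattern as functools.singledispatch, which dispatches on the type of the first argument; a minimal sketch with made-up handlers:

```python
from functools import singledispatch

@singledispatch
def handle(obj):
    # Fallback when no registered handler matches, like the tweet's RuntimeError
    raise RuntimeError(f"No handler for {obj}")

@handle.register
def _(obj: float):       # registered via the type annotation (Python 3.7+)
    return f"float: {obj}"

@handle.register
def _(obj: int):
    return f"int: {obj}"
```

For example, handle(2) routes to the int handler and handle(1.5) to the float one, with no manual dict lookup.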
@marksaroufim
Mark Saroufim
2 months
Weird that we haven't found better naming conventions for quantization algorithms; "int4" is vague. That's the weight dtype but it's only applied to some layers or parts of them, accumulation is always in fp32, and the gradient, optimizer and activation dtypes are all different too
2
4
53
@marksaroufim
Mark Saroufim
3 years
If ML startup founders were designing Operating systems we'd have Ubuntu for Healthcare, Ubuntu for Retail and Ubuntu for Finance
4
1
49
@marksaroufim
Mark Saroufim
3 years
Democratizing ML for the working class
Tweet media one
@LSTMeow
Ariel Biller
3 years
I'm thinking October will be "Full Stack Working Class" Meme Month? WDYT?
0
0
5
5
3
46
@marksaroufim
Mark Saroufim
3 years
DevOps is, as of now, as pleasant as going to the DMV. You fill out a giant YAML form, setup takes a while, progress is opaque, a single character mistake can be very costly, new processes never save you time, every failure is novel and non-reproducible and no-one seems to care
3
1
49
@marksaroufim
Mark Saroufim
2 years
New post "All Hail the Cloud King". AWS has been building the infrastructure (5G networks, currencies, and robot fleet management) to become the meta-country with APIs for anyone to create their own country. The world is about to get very weird
1
4
43
@marksaroufim
Mark Saroufim
3 years
GitHub is AngelList 2.0. The README is the marketing plan. Issues are the CRM. Pull requests are the engineering roadmap. Release notes are the product roadmap. GitHub Actions is QA and distribution. The contributor graph is founder/product fit. GitHub stars are product/market fit.
3
7
46
@marksaroufim
Mark Saroufim
3 years
GitHub README is the new landing page
2
0
45
@marksaroufim
Mark Saroufim
2 years
You: add types to code to make it robust Me: remove types from code make it antifragile We are not the same
2
4
45
@marksaroufim
Mark Saroufim
2 years
Very exciting PEP 646 on Variadic Generics, which will be released with Python 3.11, so you can verify the correctness of tensor programs statically PEP: Talk: Mailing list where I learnt about it: .
Tweet media one
2
7
46
@marksaroufim
Mark Saroufim
2 years
@jacobmbuckman My takeaway from the bitter lesson is that environments are a more interesting research direction for RL than algorithms
2
2
43
@marksaroufim
Mark Saroufim
3 years
@harphies Everything is a variant of matrix multiplication
1
3
44
@marksaroufim
Mark Saroufim
2 years
Been binging profiling libraries past few days and wrote down most of the interesting ones I found. Did I miss anything? How do you profile?
3
7
42
@marksaroufim
Mark Saroufim
2 years
Found a great way to test out GitHub Actions in a test environment 1. Fork the original repo 2. Make the workflow run on pushes to master 3. Iterate until you get your workflow right 4. Make a PR to the original repo Example:
6
1
42
@marksaroufim
Mark Saroufim
3 years
This blog post by Morgan @huggingface is fantastic. No Twitter account for me to thank directly but if you could kindly relay a question. numactl is the best tool I've never heard of. I'm curious how you discovered it and what motivated you to try it?
3
7
43
@marksaroufim
Mark Saroufim
3 years
@nntaleb @EconTalker Likewise real estate, cash, stocks, bonds, jobs, twitter accounts and blogs are all not assets; they depend on the maintenance of governments or institutions.
5
0
42
@marksaroufim
Mark Saroufim
1 year
Would like to one day see an open source project that progressively discloses its complexity in the same way that good games do tutorials. You have the educational version of the library until you make a few simple contributions and slowly uncover the real prod version over time
3
0
42
@marksaroufim
Mark Saroufim
2 years
@AndrewLBeam @jamesr66a I derived most of my gradients incorrectly but networks still converged reasonably well. Kids these days treat their gradients like gospel.
3
0
42
@marksaroufim
Mark Saroufim
3 years
Summer is ending, you're sitting at the corner of the bar listening to the crashing waves. You look around and your eyes meet. You smile, she smiles back. You close your eyes and hear her say: "Hand crafted kernels, only for the most sophisticated deep learner"
3
0
39
@marksaroufim
Mark Saroufim
3 years
I just got access to GitHub Copilot. It's kinda wild so far. Gonna stream what it's like unfiltered no cherry picking right now on Happy to take suggestions so bring your confusing prompts and let's see who wins: OpenAI or Twitch chat
3
7
40
@marksaroufim
Mark Saroufim
5 months
Pretty happy to have played a teeny tiny role in getting this out. It's a uniquely well designed competition and if you've ever had strong opinions about which optimizer is the best this is probably the definitive way to go about proving your beliefs
@AIatMeta
AI at Meta
5 months
Today the @MLCommons AlgoPerf working group, including researchers from Meta, are introducing a standardized & competitive benchmark designed to provide objective comparisons & quantify progress in the development of new training algorithms. Details ➡️
Tweet media one
7
33
151
0
1
40