Explore tweets tagged as #encoding
@statusfailed
statusfailed
12 days
I have a theory on what makes LLM slop tweets distinctive: low information content but high encoding complexity. See graph⬇️
33
159
2K
@avrldotdev
avrl ☘
6 days
The best guide for anyone starting to learn network programming. This article will help you learn: 0. Networking protocols (TCP/UDP) 1. HTTP under the hood 2. Chunked encoding 3. Building state machines 4. Writing your own parser 5. Concurrency basics (multithreading)
7
66
488
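Item 2 in that list, chunked encoding, is compact enough to sketch directly. This is a minimal decoder for an HTTP/1.1 chunked transfer body (my own illustration, not code from the linked article): each chunk is a hex size line, CRLF, the data, CRLF, and a zero-size chunk ends the stream.

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body."""
    out = bytearray()
    pos = 0
    while True:
        # Chunk-size line: hex digits, optionally followed by ";extensions".
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:          # last-chunk marker terminates the stream
            break
        out += body[pos:pos + size]
        pos += size + 2        # skip chunk data and its trailing CRLF
    return bytes(out)
```

Feeding it the classic two-chunk example `b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"` yields `b"Wikipedia"`. A real parser would also handle trailers and incremental input, which is where the state machines in item 3 come in.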
@amitiitbhu
Amit Shekhar
10 days
My recent 7 articles on X: - KV Cache in LLMs - Paged Attention in LLMs - Causal Masking in Attention - Byte Pair Encoding in LLMs - Harness Engineering in AI - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention X is a knowledge sharing platform.
4
38
328
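Of the topics listed, Byte Pair Encoding is the easiest to show in a few lines. This is a toy single-word sketch of the core idea only (repeatedly merge the most frequent adjacent symbol pair), not how production tokenizer libraries implement it:

```python
from collections import Counter

def bpe_merges(word: str, num_merges: int):
    """Greedy BPE on one word: repeatedly fuse the most
    frequent adjacent pair of symbols into a new symbol."""
    symbols = list(word)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats; further merges are pointless
        merges.append(a + b)
        # Rewrite the symbol list, replacing each (a, b) occurrence.
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges
```

On `"aaabdaaabac"` the first two learned merges are `"aa"` then `"aaa"`; LLM tokenizers run the same loop over a whole corpus at the byte level.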
@CompSciFact
Computer Science
8 days
Pythagorean triples, one's complement, and run-length encoding
0
3
10
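Run-length encoding, the third topic mentioned, fits in a few lines. A minimal sketch (my own, not from the linked post): collapse each run of repeated characters into a (character, count) pair, and expand the pairs back to invert it.

```python
def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode: each run of a repeated character
    becomes one (char, count) pair."""
    runs: list[tuple[str, int]] = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Inverse: expand each (char, count) pair."""
    return "".join(ch * n for ch, n in runs)
```

For example `rle_encode("aaabcc")` gives `[("a", 3), ("b", 1), ("c", 2)]`, and decoding any encoding round-trips to the original string.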
@momika233
张惠倩
14 days
CloudFront WAF sets a 403 interception rule for the `/actuator` path, but you can use URL encoding `/%61%63%74%75%61%74%6f%72` (that is, each character of `/actuator` percent-encoded as hex) to bypass the WAF and directly access Spring Boot #BugHunter #BugBounty #BugBountyTips
7
62
425
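The encoding step in that bypass is easy to reproduce. A small sketch (illustrative only, not an exploit harness): percent-encode every character of the path segment, which a rule matching the literal string `/actuator` misses, while a server that decodes before routing still sees the original path.

```python
from urllib.parse import unquote

def percent_encode_all(segment: str) -> str:
    """Percent-encode EVERY character of a path segment,
    not just the reserved ones."""
    return "".join(f"%{ord(c):02x}" for c in segment)

encoded = "/" + percent_encode_all("actuator")
# A backend that percent-decodes before routing recovers /actuator:
decoded = unquote(encoded)
```

Here `percent_encode_all("actuator")` produces exactly the `%61%63%74%75%61%74%6f%72` string from the tweet, and `unquote` confirms it decodes back to `actuator`.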
@RemoteSens_MDPI
Remote Sensing MDPI
9 days
🛳️🚢 BurgsVO: Burgs-Associated Vertex Offset Encoding #Scheme for #Detecting Rotated #Ships in #SAR Images ✍️ Mingjin Zhang et al. 🔗 https://t.co/RYgkvAC3qe
0
4
17
@amitiitbhu
Amit Shekhar
8 days
My recent 9 articles on X: - KV Cache in LLMs - Paged Attention in LLMs - Causal Masking in Attention - Byte Pair Encoding in LLMs - Harness Engineering in AI - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention - Math Behind Backpropagation -
2
36
237
@AlphaSignalAI
AlphaSignal AI
4 days
AI coding agents are fast but reckless. They skip specs, tests, and security. Google engineer just open-sourced a fix. Agent Skills is a free repo that brings 19 engineering skills and 7 slash commands to any AI coding agent. It works by encoding what senior engineers
5
12
80
@amitiitbhu
Amit Shekhar
13 days
My recent 5 articles on X: - KV Cache in LLMs - Paged Attention in LLMs - Causal Masking in Attention - Byte Pair Encoding in LLMs - Harness Engineering in AI X is a knowledge sharing platform.
0
41
257
@DivyanshT91162
divyansh tiwari
7 days
🚨 BREAKING: AI memory just got flipped… by a video file. No vector DB. No infra headaches. Just a single .mp4. Someone built Memvid — and it stores millions of embeddings inside a video file using encoding tricks. Sounds crazy. It works. Here’s why this is wild: → Entire
2
9
24
@NullSecurityX
NullSecurityX
5 days
Burp Suite's "Decoder" is great for URL encoding - but there is a much faster way to do it!
1
6
78
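The tweet doesn't say which faster way it means; one candidate, assuming a Python interpreter is at hand, is the standard library's `urllib.parse`, which handles both directions in a one-liner:

```python
from urllib.parse import quote, unquote

# quote() percent-encodes unsafe characters; safe="" forces
# even "/" to be encoded (by default it is left alone).
encoded = quote("a b/c", safe="")
original = unquote("a%20b%2Fc")
```

`quote("a b/c", safe="")` returns `"a%20b%2Fc"`, and `unquote` inverts it, so a round trip through an encoder/decoder GUI becomes a single function call.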
@HowToAI_
How To AI
11 days
🚨 BREAKING: Vector databases for AI memory just got replaced by MP4 files. Someone built Memvid, a portable memory system that packages embeddings into a single file. It stores millions of text chunks using video encoding logic for sub-millisecond retrieval. → Replace
79
214
2K
@amitiitbhu
Amit Shekhar
6 days
My recent 10 articles on X: - KV Cache in LLMs - Paged Attention in LLMs - Causal Masking in Attention - Byte Pair Encoding in LLMs - Harness Engineering in AI - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention - Math Behind Backpropagation -
5
36
241
@amitiitbhu
Amit Shekhar
2 days
[LLM Internals] Just published an article: Feed-Forward Networks My recent 12 articles on X: - KV Cache - Paged Attention - Causal Masking - Byte Pair Encoding - Harness Engineering - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention - Math
2
36
109
@cap
Cap
12 days
Cap 0.4.82 is here! Edit by transcript. Exports up to 97% smaller with optimized encoding option. Cursor-only ProRes export. Faster on-device transcriptions with Parakeet. Smoother cursor and zoom animations, and a lot of performance and bug fixes.
4
1
34
@amitiitbhu
Amit Shekhar
9 days
My recent 8 articles on X: - KV Cache in LLMs - Paged Attention in LLMs - Causal Masking in Attention - Byte Pair Encoding in LLMs - Harness Engineering in AI - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention - Math Behind Backpropagation X
1
25
180
@UHD4k
Ultra HD 4k news 📺
14 days
Following Netflix’s latest optimization round, which introduced Film Grain Synthesis (FGS) in their AV1 encoding pipeline, the service streams some movies at bitrates as low as 200 kbps: https://t.co/hrB7D9s9TY via Fabio @Sonnati
9
18
250
@VictorTaelin
Taelin
7 days
Any local LLM nerds around? I'm trying to run speculative decoding on Gemma 26B A4B. I'm a newbie at running that stuff locally, got it to 200 B tokens/s on B200, I wonder if I could make it much faster?
36
4
197
@amitiitbhu
Amit Shekhar
4 days
Just published an article: Decoding Flash Attention My recent 11 articles on X: - KV Cache - Paged Attention - Causal Masking - Byte Pair Encoding - Harness Engineering - Math behind Attention - Q, K, and V - Math behind √dₖ Scaling Factor in Attention - Math Behind
1
19
132