Devansh (⚡, 🥷)

@0xAsm0d3us

Followers: 16K
Following: 2K
Media: 494
Statuses: 2K

Pwn, Security Research & Math ⚡ Views are personal

Joined December 2019
@0xAsm0d3us
Devansh (⚡, 🥷)
2 days
Weekend reading: Huli's blog is a gem, and you should start reading it if you want to excel in frontend/client-side security https://t.co/lEpBsMBely
@0xAsm0d3us
Devansh (⚡, 🥷)
21 days
Basic Integer Overflows by blexim https://t.co/tZtJh2Z1LV
@rithlanka
Charith Lanka
6 days
Damien and I have been getting a lot of DMs from VCs lately asking about @alignoAI. Honestly wasn't expecting this much interest this early, but it's been incredible connecting with investors who get the vision. If you're building in the AI/product space, would love to connect
@0xAsm0d3us
Devansh (⚡, 🥷)
22 days
Exploiting Logic Bugs in JavaScript JIT Engines - by saelo https://t.co/izBaPz4X6I
@0xAsm0d3us
Devansh (⚡, 🥷)
22 days
What I stated is just the bare minimum; it would be a waste to just cram the knowledge without applying it anywhere. The mind isn't engineered to retain information, but it can recall it very well, which is why reading is important: it builds intuition.
@0xAsm0d3us
Devansh (⚡, 🥷)
23 days
"Hacking the mind for fun and profit" https://t.co/ENSgkclVwk
@0xAsm0d3us
Devansh (⚡, 🥷)
23 days
If you're new to CTFs, just start by watching old streams on @gynvael's YT and observe his approach while solving challs. The streams are packed with so much value. Gynvael is one of the best teachers/educators in the community.
@0xAsm0d3us
Devansh (⚡, 🥷)
23 days
Never miss a challenge if it's by one of the following authors; I learned so much from their challs: Orange Tsai, zardus, theKidOfArcrania, terjanq, qazbnm456, ptr-yudai, pasten, p4, justCatTheFish, hxp, FD, Chivato, AngelBoy, bestone, g0blin, ... (the list goes on..)
@0xAsm0d3us
Devansh (⚡, 🥷)
23 days
When I say CTFs, I mean the following:
DEF CON CTF
Google CTF
Plaid CTF
HITCON CTF
SECCON CTF
Dragon Sector CTF
hxp CTF
ASIS CTF
InCTFi
CSAW CTF
Facebook/Meta CTF
justCTF
(and similar...)
AVOID LOW QUALITY CTFs.
@0xAsm0d3us
Devansh (⚡, 🥷)
23 days
If you start reading just 5 CTF write-ups a day, by the end of this year, you'll be 10x more skilled than you are today. Just start, rn! That's 150 write-ups in a month. Assuming you learn 30–40 unique techniques from them, no course can match that. You'll be unstoppable.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Some rough thoughts on why LLMs can’t do novel vulnerability research (yet):
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
If asked “Find an RCE in Product X” with no known flaw, the model will most probably synthesize a realistic but fake report, because in its learned distribution, realistic-looking output is rewarded, not factual accuracy.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Finally, coming to the real cause of hallucinations: why do language models sometimes hallucinate - that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
They might report a race-condition pattern as dangerous even if the lock ordering prevents it. The same goes for reentrancy in a smart contract, even when a reentrancy guard (mutex) is properly in place.
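A minimal Python sketch of the kind of false positive meant here (all code hypothetical, not from any real target): a check-then-act sequence that textually matches a race pattern but is safe because every access goes through the same lock.

import threading

# Hypothetical example: "if balance >= amount: balance -= amount" looks like a
# TOCTOU race in isolation, but the surrounding locking discipline makes the
# imagined interleaving impossible.
class Account:
    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()   # single lock guarding every balance access

    def withdraw(self, amount: int) -> bool:
        with self._lock:                # lock held across check AND act
            if self.balance >= amount:  # check
                self.balance -= amount  # act
                return True
            return False

# A pattern-matching reviewer flags the check-then-act pair anyway, because it
# matches a known-dangerous shape, without reasoning about the lock.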
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Bugs often depend on causal chains in code execution. LLMs model correlations, not causation: P(bug ∣ pattern) ≠ P(bug is real)
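A toy calculation (numbers invented) of why conditioning on a pattern says little about whether a specific finding is real: the same textual idiom appears in both exploitable and safely guarded code, and only the causal context separates them.

# Made-up labelled snippets: 'pattern' = matches a risky idiom textually,
# 'real' = actually exploitable once guards and reachability are considered.
samples = [
    {"pattern": True,  "real": True},   # unguarded copy into a fixed-size buffer
    {"pattern": True,  "real": False},  # same idiom, length validated upstream
    {"pattern": True,  "real": False},  # same idiom, input is a constant
    {"pattern": False, "real": False},
]

with_pattern = [s for s in samples if s["pattern"]]
p_bug_given_pattern = sum(s["real"] for s in with_pattern) / len(with_pattern)
print(f"P(bug | pattern) = {p_bug_given_pattern:.2f}")
# A correlational model reports every pattern match with roughly this confidence,
# but whether one particular finding is real depends on the causal chain
# (guards, reachable inputs, execution order) that the pattern alone ignores.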
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Interpolation ≠ innovation. True novelty requires exploring code paths never before described, something outside LLMs' learned space.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
And obviously, for "novel" stuff, interpolation bias is always there. For an unseen vulnerability v′, the model guesses based on similar training examples: v′ ≈ interpolate(v1, v2, …, vn)
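A short sketch of that interpolation, with invented names and vectors: the guess for an unseen target is a similarity-weighted blend of known vulnerabilities, so it can only resemble things already in the training set.

import numpy as np

# Toy embeddings (made up) for known vulnerability descriptions.
train = {
    "heap overflow in image parser":       np.array([0.9, 0.1, 0.0]),
    "use-after-free in websocket handler": np.array([0.2, 0.8, 0.1]),
    "integer overflow in length check":    np.array([0.6, 0.3, 0.2]),
}

# Embedding of the unseen target v' (a hypothetical "Product X").
v_prime = np.array([0.7, 0.2, 0.1])

# v' ≈ interpolate(v1, ..., vn): weight each known bug by its similarity to v'.
sims = {desc: float(vec @ v_prime) for desc, vec in train.items()}
total = sum(sims.values())
for desc, s in sorted(sims.items(), key=lambda kv: -kv[1]):
    print(f"{s / total:.2f}  {desc}")
# The "prediction" is always a mixture of known bug classes; a genuinely novel
# bug lies outside this span and cannot come out of the interpolation.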
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Where E(v) returns 1 if the exploit works, 0 if not. LLMs can’t run this code; they only simulate reasoning in text. Without E(v), there’s no feedback to separate real from imagined exploits.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
There is no real-world execution loop (a very good orchestration might actually solve this one), but verifiability is a major issue. I'll give you an example: to confirm a new bug, you must run: E(v) ∈ {0,1}
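A minimal sketch of the orchestration loop hinted at here; every helper below is a stub standing in for an LLM call and a real sandbox, nothing is a real API. The point is only that E(v) comes from execution, not from the model's own text.

# Hypothetical propose-then-verify loop. Stubs only; not a real API.

def propose_exploit(target: str, feedback: str) -> str:
    # Stand-in for an LLM call that drafts a candidate exploit.
    return f"payload for {target} ({feedback or 'first attempt'})"

def run_in_sandbox(candidate: str) -> bool:
    # Stand-in for real execution (VM, emulator, test harness). Always fails here.
    return False

def E(candidate: str) -> int:
    """Execution oracle: 1 if the exploit demonstrably works, 0 otherwise."""
    return 1 if run_in_sandbox(candidate) else 0

def search(target: str, max_attempts: int = 10) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = propose_exploit(target, feedback)
        if E(candidate) == 1:
            return candidate                   # success grounded in execution, not prose
        feedback = "previous attempt failed"   # real signal fed back to the model
    return None

print(search("Product X"))   # -> None: nothing was ever actually verified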
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
Example: If “Buffer overflow in XYZ library” appears in similar contexts during training, the model may output it, even if the XYZ library doesn’t actually have that flaw.
@0xAsm0d3us
Devansh (⚡, 🥷)
1 month
LLMs learn a probability distribution, say P(token ∣ context), and when you ask about a vulnerability, they return the most statistically likely sequence of tokens, not the objectively correct answer.
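A toy decode step with an invented next-token distribution for the context "The vulnerability in Product X is a ...": picking the likeliest token is the whole procedure, and nothing in it consults Product X.

# Invented P(token | context) for illustration only.
p_next = {
    "buffer":  0.41,   # frequent continuation in security writing
    "use":     0.23,   # as in "use-after-free"
    "SQL":     0.19,
    "(none)":  0.17,   # "there is no known flaw" is rarely how reports read
}

best = max(p_next, key=p_next.get)
print(best)   # -> "buffer": the statistically likely answer, not the verified one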