moo

@moo_hax

Followers: 3K · Following: 8K · Media: 219 · Statuses: 3K

ceo @dreadnode

Joined March 2015
@moo_hax
moo
3 days
Process list research in 2018, the rest is history. About to run a fine-tune entirely on our own MLOps platform, using data we collected on the same platform. Basically a decade in the making.
@USWREMichael
Under Secretary of War Emil Michael
3 days
To maintain technological dominance, we have to lead in AI. The @DeptofWar is embracing @USOPM’s new initiative to recruit America’s top AI engineers, data scientists, and technology leaders. We need the nation’s best minds in government to drive mass AI adoption. Apply today:
0
1
5
@ACSAC_Conf
ACSAC
6 days
The #ACSAC2025 best case study award goes to: "Systematic Probing of AI Risks: Methods and Real-World Case Study" by Raja Sekhar Rao Dheekonda. Congratulations 👏👏👏
2
2
4
@charliermarsh
Charlie Marsh
5 days
uv!
@ivanfioravanti
Ivan Fioravanti ᯅ
5 days
I converted 100% to uv and when I have to use anything else I feel lost.
2
1
137
@moo_hax
moo
5 days
Many such cases
@corbtt
Kyle Corbitt
5 days
Everyone is sleeping on agentic browsers since they sucked 12-18 months ago. But they're starting to get pretty useful.
0
0
1
@moo_hax
moo
6 days
ML code be like that.
0
0
7
@moo_hax
moo
6 days
@AISecHub
AISecHub
6 days
GenAI Red Teaming Training - https://t.co/A1pWToNwGA
What’s inside:
- 8 modules / 40 notebooks / 29 theory docs; answers included for every lab
- Prompt injection & jailbreaking, evasion (FGSM/PGD/C&W), transfer attacks
- Data extraction, membership inference, model inversion;
0
4
22
@moo_hax
moo
7 days
Small experiment to show “jailbreak factory”. Core of our tech is evals, which roll up nicely to execute on any goal, cyber or AIRT.
@AISecHub
AISecHub
7 days
186 Jailbreaks: Applying MLOps to AI Red Teaming
0
0
4
@moo_hax
moo
8 days
We would have a different story. Which is why more Security people should be doing these evals. Not every offensive team is built the same or has the same SOPs or TTPs. DN has a particular take on offense which aligns with our experience from previous roles and orgs. Evals can
@Irregular
Irregular
8 days
Frontier models are starting to display a shift in capabilities in offensive security. Over the past few weeks, we have seen growing evidence of a change: publicly available frontier models are now reliably solving complex, well-defined offensive-security tasks.
0
1
12
@moo_hax
moo
8 days
Oh no, not the mandatory infosec training.
0
0
3
@moo_hax
moo
8 days
Playing with nano for slides. Brand is obviously off, but some of the angles would take me 10 years to do.
0
0
4
By partnering with The Bugcrowd Academic Program, universities can shape how cybersecurity is discovered, taught, and advanced. Request a demo today to see how Bugcrowd can elevate cybersecurity at your university.  https://t.co/NON54G6PMa
1
3
7
@moo_hax
moo
11 days
I left NV bc models were good enough to do all sorts of security tasks, even back then. Was all about the right harness and managing inference. Scaffolds are a spectrum and kind of melt away as models/tech improve. So, do CTFs with AI if you need to convince yourself.
1
2
15
@moo_hax
moo
15 days
For you @Microsoft and my old team. An LLM as an AMSI provider. Could probably use it to locally detect prompt injection into Bing, Copilot, or the "Agentic OS". AMSI already works with text, so really nothing else required. Layer it with Defender. Proud of the team for pushing
@dreadnode
dreadnode
15 days
"Offense and defense aren't peers. Defense is offense's child." - @JohnLaTwC We built an LLM-powered AMSI provider and paired it against a red team agent. Then, @0xdab0 wrote a blog about it: https://t.co/jnCNIlYBII A few observations from the experiment: >>> To advance, we
1
6
15
@moo_hax
moo
26 days
@GoogleDeepMind has been at the front of AI for a long time. You have to be a Kool-aid drinker to work at DN, and everyone here watches the AlphaGo documentary at some point -- here's one of my favorite excerpts from it. https://t.co/c8sr1KIgsK
0
0
2
@moo_hax
moo
26 days
Stoked we get to do this type of work. Evaluations are the basis of progress and capability. I believe that many offensive teams are capable of this work. From this and other work we built our Strikes product, which is basically AI Infrastructure for Security. Where our vision
@dreadnode
dreadnode
27 days
Congrats to the @GoogleDeepMind team on the launch of #Gemini3. Proud to have had a part in this release, evaluating the model for cybersecurity capabilities. Models continue to improve across multiple domains, especially cyber. Check out their post on why Google is leaning into
1
1
15
@moo_hax
moo
30 days
Anthropic report. Attackers finding AI fit for purpose. I suspect many of you are. Jailbreaks are interesting because they seem pretty weak and more like providing context. Idk, we don’t have issues with refusals. We spend a lot of time (if not all of it) evaluating models
1
3
10
@moo_hax
moo
1 month
Coming to a prod near you. Team has been cooking on collaboration features. Additional repos are coming soon.
0
1
4
@shncldwll
shane
2 months
New blog - Offsec Evals: Growing Up In The Dark Forest. Caught up in the fervor of greenfield research at @OffensiveAIcon, we all agreed we were going to put out evals and benchmarks and push the field forward. On day two of the con, I got a question I've been thinking about
3
4
21