Jason Stanley

@jstanl

Followers
302
Following
186
Media
55
Statuses
533

Head of AI Research Deployment @ServiceNow working on building trustworthy, secure, reliable AI

Canada
Joined October 2014
@MassCaccia
Massimo Caccia
2 months
🎉 Our paper “How to Train Your LLM Web Agent: A Statistical Diagnosis” got an oral at next week’s ICML Workshop on Computer Use Agents! 🖥️🧠 We present the first large-scale
6
50
211
@joanrod_ai
Juan A. Rodríguez 💫
4 months
Thanks @_akhaliq for sharing our work! Excited to present our next generation of SVG models, now using Reinforcement Learning from Rendering Feedback (RLRF). 🧠 We think we cracked SVG generalization with this one. Go read the paper! https://t.co/Oa6lJrsjnX More details on
@_akhaliq
AK
4 months
Rendering-Aware Reinforcement Learning for Vector Graphics Generation RLRF significantly outperforms supervised fine-tuning, addressing common failure modes and enabling precise, high-quality SVG generation with strong structural understanding and generalization
3
41
124
@tscholak
Torsten Scholak
4 months
🚨🤯 Today Jensen Huang announced SLAM Lab's newest model on the @HelloKnowledge stage: Apriel-Nemotron-15B-Thinker 🚨 A lean, mean reasoning machine punching way above its weight class 👊 Built by SLAM × NVIDIA. Smaller models, bigger impact. 🧵👇
2
22
47
@GabrielHuang9
Gabriel Huang
5 months
1/ How do we evaluate agent vulnerabilities in situ, in dynamic environments, under realistic threat models? We present 🔥 DoomArena 🔥 — a plug-in framework for grounded security testing of AI agents. ✨ Project: https://t.co/yOsZize8V1 📄 Paper:
arxiv.org
We present DoomArena, a security evaluation framework for AI agents. DoomArena is designed on three principles: 1) It is a plug-in framework and integrates easily into realistic agentic frameworks...
8
16
37
@DjDvij
Krishnamurthy (Dj) Dvijotham
5 months
1/n Wish you could evaluate AI agents for security vulnerabilities in a realistic setting? Wish no more - today we release DoomArena, a framework that plugs in to YOUR agentic benchmark and enables injecting attacks consistent with any threat model YOU specify
1
7
27
@tscholak
Torsten Scholak
5 months
🚨 SLAM Labs presents Apriel-5B! And it lands right in the green zone 🚨 Speed ⚡ + Accuracy 📈 + Efficiency 💸 This model punches above its weight, beating bigger LLMs while training on a fraction of the compute. Built with Fast-LLM, our in-house training stack. 🧵👇
5
49
133
@dem_fier
Gaurav Sahu 🇮🇳
6 months
🚀 Exciting news! Our work LitLLM has been accepted in TMLR! LitLLM helps researchers write literature reviews by combining keyword+embedding-based search and LLM-powered reasoning to find relevant papers and generate high-quality reviews. https://t.co/ledPN4jEmP 🧵 (1/5)
9
33
81
@AbhayPuri98
Abhay Puri
6 months
🚀 Struggling with literature reviews? LitLLM can help! This AI-powered tool retrieves relevant papers, ranks them using LLMs, and structures comprehensive reviews in no time. Just input your abstract and let AI streamline your research! #LitLLM #AIforResearch
@dem_fier
Gaurav Sahu 🇮🇳
6 months
🚀 Exciting news! Our work LitLLM has been accepted in TMLR! LitLLM helps researchers write literature reviews by combining keyword+embedding-based search and LLM-powered reasoning to find relevant papers and generate high-quality reviews. https://t.co/ledPN4jEmP 🧵 (1/5)
2
8
17
@DjDvij
Krishnamurthy (Dj) Dvijotham
6 months
Can your AI keep up with dynamic attackers? In a paper to appear at #AISTATS2025 with @avibose22 @LaurentLessard and Maryam Fazel, we study the robustness of learning algorithms to dynamic data poisoning attacks that adapt while observing the progress of learning
4
7
14
@jstanl
Jason Stanley
9 months
Internship opportunities in safe, secure and trustworthy AI at @ServiceNowRSRCH. For more context, check out this recent post about our Reliable & Secure AI Research Team: https://t.co/qg03W7hQ3v To apply and/or see more details:
lnkd.in
1
0
2
@jstanl
Jason Stanley
10 months
Come multiply and amplify research talent at @ServiceNowRSRCH. The team here is curious, driven, fun, diverse and a bit off the wall. #ArtificialIntelligence #AIAgents https://t.co/3wgTvMJLH7
0
0
0
@jstanl
Jason Stanley
10 months
New AI transparency and traceability framework from @linuxfoundation. Useful for folks working on #ResponsibleAI #trustworthyai
linuxfoundation.org
Implementing AI Bill of Materials (AI BOM) with SPDX 3.0
0
0
0
@jstanl
Jason Stanley
10 months
Launch of the Canadian #AI Safety Institute. It joins the US, UK and several other safety institutes working on the challenges of evaluation, assurance, etc. #aisafety #trustworthyai
2
0
1
@jstanl
Jason Stanley
1 year
Good post about trustworthy AI and the risks of AI by @PhilMercure in @LP_LaPresse this morning.
3
0
0
@jstanl
Jason Stanley
1 year
Debating AI risks at a rustic camp steps from the beach with researchers from leading frontier labs and academic institutions. What an incredible experience it was. https://t.co/Rc9ISDw3NS
1
0
0
@jstanl
Jason Stanley
1 year
Helpful overview of the broad array of AI standards initiatives that exist, lack of consistency in how they are applied, etc. #ArtificialIntelligence #AI #policy
techpolicy.press
Arpit Gupta surveys the current landscape for AI standards and delves into the limitations of voluntary standards and the importance of government-backed regulations.
3
0
0
@jstanl
Jason Stanley
1 year
Be the fire. Wish for the wind. That's Taleb on antifragility. Our AI systems need that in the form of adversarial and exploratory stressors so we learn how and where to adapt and thrive.
3
0
1
@jstanl
Jason Stanley
1 year
Transparency is a key #trustworthyai and #responsibleai principle, but it can also create security risks -- e.g., being open about model confidence and explainability makes model inversion and evasion easier. Managing that tradeoff is tough.
1
0
0
@jstanl
Jason Stanley
1 year
What's the gap in performance and risks between major foundation models and fine-tuned versions of those models? We talk a lot about evals on foundation models but less about evals of the fine-tuned, instruction-tuned versions in wide use. Both are key, but we need intel on the difference.
4
0
0
@jstanl
Jason Stanley
1 year
MLCommons released a new #genAI #safety benchmark and taxonomy recently. Good contribution, but like other benchmarks it creates an illusion of holistic eval. These eval tools remain far too simple to give a good read on overall security and safety. https://t.co/I5oHQwCG11
3
0
3