
Jason Stanley
@jstanl
Followers
302
Following
186
Media
55
Statuses
533
Head of AI Research Deployment @ServiceNow working on building trustworthy, secure, reliable AI
Canada
Joined October 2014
Our paper "How to Train Your LLM Web Agent: A Statistical Diagnosis" got an Oral at next week's ICML Workshop on Computer Use Agents! 🖥️🧠 We present the first large-scale…
6
50
211
Thanks @_akhaliq for sharing our work! Excited to present our next generation of SVG models, now using Reinforcement Learning from Rendering Feedback (RLRF). 🧠 We think we cracked SVG generalization with this one. Go read the paper! https://t.co/Oa6lJrsjnX More details on…
Rendering-Aware Reinforcement Learning for Vector Graphics Generation RLRF significantly outperforms supervised fine-tuning, addressing common failure modes and enabling precise, high-quality SVG generation with strong structural understanding and generalization
3
41
124
🚨🤯 Today Jensen Huang announced SLAM Lab's newest model on the @HelloKnowledge stage: Apriel-Nemotron-15B-Thinker 🚨 A lean, mean reasoning machine punching way above its weight class. Built by SLAM × NVIDIA. Smaller models, bigger impact. 🧵
2
22
47
1/ How do we evaluate agent vulnerabilities in situ, in dynamic environments, under realistic threat models? We present 🔥 DoomArena 🔥, a plug-in framework for grounded security testing of AI agents. ✨ Project: https://t.co/yOsZize8V1 Paper:
arxiv.org
We present DoomArena, a security evaluation framework for AI agents. DoomArena is designed on three principles: 1) It is a plug-in framework and integrates easily into realistic agentic frameworks...
8
16
37
1/n Wish you could evaluate AI agents for security vulnerabilities in a realistic setting? Wish no more: today we release DoomArena, a framework that plugs into YOUR agentic benchmark and lets you inject attacks consistent with any threat model YOU specify.
1
7
27
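DoomArena's real interfaces aren't shown in these tweets, so the following is only an illustrative sketch of the plug-in idea: a threat model declares which channels the attacker controls, and a wrapper injects attacks into an existing benchmark's observations. All names here are hypothetical.

```python
# Hypothetical plug-in attack layer in the spirit of DoomArena; the class
# and function names are illustrative, not the framework's actual API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ThreatModel:
    """Which observation channels the attacker controls, and the attack."""
    attacker_controls: set[str]       # e.g. {"web_page", "tool_output"}
    inject: Callable[[str], str]      # transformation applied to content


def prompt_injection(payload: str) -> Callable[[str], str]:
    """A classic attack: smuggle instructions into attacker-controlled text."""
    return lambda content: content + "\n<!-- " + payload + " -->"


def wrap_environment(env_step, threat: ThreatModel):
    """Wrap an existing benchmark's step function (assumed to return a dict
    of named observation channels) so that attacker-controlled channels are
    tampered with before the agent sees them."""
    def attacked_step(action):
        obs = env_step(action)
        for channel in threat.attacker_controls:
            if channel in obs:
                obs[channel] = threat.inject(obs[channel])
        return obs
    return attacked_step


# Usage sketch:
# attacked = wrap_environment(
#     env.step,
#     ThreatModel({"web_page"}, prompt_injection("ignore all prior instructions")))
```

The point of the plug-in design is that the benchmark and the threat model vary independently: the same attack can be replayed against any agentic environment that exposes its observations.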
🚨 SLAM Labs presents Apriel-5B! And it lands right in the green zone 🚨 Speed ⚡ + Accuracy + Efficiency 💸 This model punches above its weight, beating bigger LLMs while training on a fraction of the compute. Built with Fast-LLM, our in-house training stack. 🧵
5
49
133
Exciting news! Our work LitLLM has been accepted to TMLR! LitLLM helps researchers write literature reviews by combining keyword- and embedding-based search with LLM-powered reasoning to find relevant papers and generate high-quality reviews. https://t.co/ledPN4jEmP 🧵 (1/5)
9
33
81
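The pipeline described here has two stages: hybrid retrieval, then LLM-powered ranking and drafting. A rough sketch of the retrieval stage under deliberate simplifications (bag-of-words keyword overlap plus cosine similarity over precomputed abstract embeddings); LitLLM's actual scoring is likely more refined:

```python
# Simplified hybrid search: keyword overlap + embedding cosine similarity.
# The additive combination and the bag-of-words keyword score are our
# simplifications, not LitLLM's actual formula.
import numpy as np


def hybrid_search(query: str, query_vec: np.ndarray, papers: list[dict],
                  paper_vecs: np.ndarray, top_k: int = 20) -> list[dict]:
    """Return the top_k papers by combined keyword + embedding score.

    Each paper is a dict with at least an "abstract" field; paper_vecs
    holds one embedding row per paper."""
    q_terms = set(query.lower().split())
    scores = []
    for paper, vec in zip(papers, paper_vecs):
        keyword = len(q_terms & set(paper["abstract"].lower().split()))
        cosine = float(vec @ query_vec /
                       (np.linalg.norm(vec) * np.linalg.norm(query_vec)))
        scores.append(keyword + cosine)  # naive additive combination
    order = np.argsort(scores)[::-1][:top_k]
    return [papers[i] for i in order]
```

The shortlist would then go to an LLM for reranking and for drafting the structured review itself.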
Struggling with literature reviews? LitLLM can help! This AI-powered tool retrieves relevant papers, ranks them using LLMs, and structures comprehensive reviews in no time. Just input your abstract and let AI streamline your research! #LitLLM #AIforResearch
2
8
17
Can your AI keep up with dynamic attackers? In a paper to appear at #AISTATS2025 with @avibose22, @LaurentLessard, and Maryam Fazel, we study the robustness of learning algorithms to dynamic data poisoning attacks that adapt while observing the progress of learning.
4
7
14
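What separates this from static poisoning is that the attacker is in the loop. A toy illustration of the threat model as described in the tweet, not the paper's construction: a least-squares learner trains while an attacker recomputes its poisoned point from the current parameters at every step.

```python
# Toy dynamic data poisoning against a linear least-squares learner.
# Illustrates the threat model only; the paper's setting is more general.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)                                  # model being trained
clean_X = rng.normal(size=(100, 2))
clean_y = clean_X @ np.array([1.0, -1.0])        # true weights (1, -1)


def craft_poison(w_current: np.ndarray,
                 target: np.ndarray = np.array([5.0, 5.0])):
    """A *dynamic* attacker re-optimizes per step: it places the poison
    along the direction where the current model differs most from the
    attacker's target model, labeled as the target model would label it."""
    direction = target - w_current
    norm = np.linalg.norm(direction)
    x = direction / norm if norm > 1e-8 else rng.normal(size=2)
    return x, float(x @ target)


for step in range(200):
    x_p, y_p = craft_poison(w)                   # adapts to current w
    X = np.vstack([clean_X, x_p])
    y = np.append(clean_y, y_p)
    grad = 2 * X.T @ (X @ w - y) / len(y)        # least-squares gradient
    w -= 0.01 * grad                             # w is dragged off (1, -1)
```

A static attack fixes its poison before training starts; the question studied is what robustness a learner can retain once the poison is allowed to adapt like this.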
Internship opportunities in safe, secure and trustworthy AI at @ServiceNowRSRCH. For more context, check out this recent post about our Reliable & Secure AI Research Team: https://t.co/qg03W7hQ3v To apply and/or see more details:
lnkd.in
1
0
2
Come multiply and amplify research talent at @ServiceNowRSRCH. The team here is curious, driven, fun, diverse, and a bit off the wall. #ArtificialIntelligence #AIAgents
https://t.co/3wgTvMJLH7
0
0
0
New AI transparency and traceability framework from @linuxfoundation. Useful for folks working on #ResponsibleAI #trustworthyai
linuxfoundation.org
Implementing AI Bill of Materials (AI BOM) with SPDX 3.0
0
0
0
Launch of the Canadian #AI Safety Institute. It joins the US, UK, and several other safety institutes working on the challenges of evaluation, assurance, etc. #aisafety #trustworthyai
2
0
1
Good post about trustworthy AI and the risks of AI by @PhilMercure in @LP_LaPresse this morning.
3
0
0
Debating AI risks at a rustic camp steps from the beach with researchers from the leading frontier labs and academic institutions. What an incredible experience it was. https://t.co/Rc9ISDw3NS
1
0
0
Helpful overview of the broad array of AI standards initiatives, the inconsistency in how they are applied, etc. #ArtificialIntelligence #AI #policy
techpolicy.press
Arpit Gupta surveys the current landscape for AI standards and delves into the limitations of voluntary standards and the importance of government-backed regulations.
3
0
0
Be the fire. Wish for the wind. That's Taleb on antifragility. Our AI systems need that in the form of adversarial and exploratory stressors, so we learn how and where to adapt and thrive.
3
0
1
Transparency is a key #trustworthyai and #responsibleai principle, but it can also create security risks -- e.g., exposing model confidence scores and explanations makes model inversion and evasion attacks easier. Managing that tradeoff is tough.
1
0
0
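One concrete version of that risk, as a standard illustration rather than anything product-specific: exposed confidence scores alone already support a crude membership-inference attack, because models tend to be more confident on examples they were trained on.

```python
# Toy membership-inference heuristic using only exposed confidences.
# A textbook illustration of the transparency/security tension; real
# attacks (e.g. shadow models) are considerably stronger.
import numpy as np


def membership_guess(confidences: np.ndarray,
                     threshold: float = 0.95) -> np.ndarray:
    """Flag records the model is suspiciously confident about as likely
    training-set members. The more detail an API exposes (top-k
    probabilities, explanations), the sharper such attacks get."""
    return confidences > threshold


# membership_guess(np.array([0.99, 0.61, 0.97]))  ->  [True, False, True]
```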
What's the gap in performance and risk between major foundation models and fine-tuned versions of those models? We talk a lot about evals of foundation models but less about evals of the fine-tuned, instruction-tuned versions in wide use. Both are key, but we need intel on the difference.
4
0
0
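A sketch of the "intel on the diff" idea: run one eval suite over both the base and the fine-tuned checkpoint and report per-task deltas. `evaluate_model` is a stand-in for whatever harness is already in use; nothing here is a specific tool's API.

```python
# Hypothetical eval-diff harness: same tasks, two checkpoints, per-task deltas.
from typing import Callable


def eval_diff(base_model, tuned_model, tasks: list[str],
              evaluate_model: Callable) -> dict:
    """Return {task: {base, tuned, delta}} so regressions introduced by
    fine-tuning are visible, not just the headline foundation-model score."""
    report = {}
    for task in tasks:
        base = evaluate_model(base_model, task)
        tuned = evaluate_model(tuned_model, task)
        report[task] = {"base": base, "tuned": tuned, "delta": tuned - base}
    return report
```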
MLCommons released a new #genAI #safety benchmark and taxonomy recently. Good contribution, but like other benchmarks it creates an illusion of holistic eval. These eval tools remain far too simple to give a good read on overall security and safety. https://t.co/I5oHQwCG11
3
0
3