Antonio Cinà Profile
Antonio Cinà

@cinofix

Followers
167
Following
577
Media
15
Statuses
187

Assistant Professor (RTD-A) @ University of Genoa, Italy | Working on Trustworthy AI and ML for industries and security applications.

Joined March 2016
@cinofix
Antonio Cinà
2 months
Exciting news! Together with @LorenzoCazz, our tutorial Towards Adversarially Robust ML in The Age of The AI Act has been accepted at ECAI 2025! Learn how to secure AI in high-risk settings & meet the new EU rules. 📅 Bologna, Oct 25–30. 🔗 #ECAI2025 #TrustworthyAI
sites.google.com
Introduction Artificial Intelligence (AI) has rapidly expanded into critical domains such as cybersecurity, natural language processing, and medicine. However, AI systems often prioritize predictive...
@cinofix
Antonio Cinà
1 year
🌟 AttackBench is open-source, allowing researchers to contribute and update the leaderboard of existing attacks. 📜 Paper: 🧑‍💻 GitHub: 🏆 Online LeaderBoard: #OpenSource #ResearchCommunity #Robustness.
attackbench.github.io
AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples.
@cinofix
Antonio Cinà
1 year
5. Key Ingredients: Successful attacks often use normalization or linear approximation on the gradient, with adaptive step size schedules proving beneficial. Gradient descent is the predominant optimizer, occasionally supplemented by Adam or momentum variants.
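The ingredients above (gradient normalization as a linear approximation, plus an adaptive step size) can be sketched in a few lines. This is a minimal illustrative sketch, not APGD's exact update rule: the sign normalization is the standard ℓ∞ choice, and the cosine schedule is one example of an adaptive decay — the function names are ours.

```python
import numpy as np

def pgd_step(x, grad, step_size, x_orig, eps):
    """One PGD-style step under an l-inf constraint: the sign operation
    normalizes the gradient (a linear approximation of the loss), then
    the iterate is projected back into the eps-ball and the valid range."""
    x_new = x + step_size * np.sign(grad)               # normalized ascent step
    x_new = np.clip(x_new, x_orig - eps, x_orig + eps)  # project into eps-ball
    return np.clip(x_new, 0.0, 1.0)                     # keep valid pixel range

def cosine_step_size(step0, t, total):
    """One possible adaptive step-size schedule (cosine decay); APGD uses
    its own halving rule, this is just an illustrative choice."""
    return step0 * 0.5 * (1.0 + np.cos(np.pi * t / total))
```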
@cinofix
Antonio Cinà
1 year
4. Library Impact: High-ranking attacks often come from AdvLib and Foolbox. In contrast, implementations from Art, Cleverhans, and Deeprobust generally underperform. For example, APGD's optimality drops from 90.9% to 26% when implemented in Art due to a key parameter difference.
@cinofix
Antonio Cinà
1 year
3. Efficiency vs. Optimality: Attacks that rank highest in global optimality (GO) are not always the most efficient. For instance, on ImageNet, APGD's optimality is slightly better than PDPGD's, but APGD is 3× slower.
@cinofix
Antonio Cinà
1 year
2. No Perfect Attack: None of the tested attacks reaches 100% of global optimality (GO), suggesting that combining attacks can still enhance robustness evaluations obtained with single (non-optimal) attacks.
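Combining attacks, as suggested above, amounts to a per-sample worst case: a sample counts as robust only if every attack fails on it (the strategy used by AutoAttack-style ensembles). A minimal sketch, with our own function name:

```python
def ensemble_robust_accuracy(success_matrix):
    """success_matrix[i][j] is True if attack i fooled the model on
    sample j. A sample survives the ensemble only if *no* attack
    succeeds on it, so the combined evaluation is a per-sample worst
    case and can only lower the measured robust accuracy."""
    n_samples = len(success_matrix[0])
    survived = [not any(attack[j] for attack in success_matrix)
                for j in range(n_samples)]
    return sum(survived) / n_samples
```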
@cinofix
Antonio Cinà
1 year
AttackBench reveals some fascinating trends: 1. APGD Versatility: APGD, typically a FixedBudget attack, shows excellent results as a MinNorm attack. This suggests that combining APGD with the search strategy in Sect. IV-A can efficiently find small perturbations.
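The general idea of turning a fixed-budget attack into a minimum-norm one can be sketched as a binary search over the budget. This is a generic hedged sketch, not necessarily the exact search strategy of Sect. IV-A; `attack_succeeds` is a hypothetical callback standing in for a full attack run.

```python
def min_norm_via_budget_search(attack_succeeds, eps_hi, n_steps=20):
    """Binary-search the smallest perturbation budget at which a
    fixed-budget attack still succeeds. `attack_succeeds(eps)` should
    run the attack (e.g. APGD with budget eps) and report whether it
    found an adversarial example."""
    if not attack_succeeds(eps_hi):
        return None  # no success even at the largest budget considered
    lo, hi = 0.0, eps_hi
    for _ in range(n_steps):
        mid = (lo + hi) / 2.0
        if attack_succeeds(mid):
            hi = mid  # success at mid: try an even smaller budget
        else:
            lo = mid  # failure at mid: the threshold lies above mid
    return hi  # smallest budget found that still yields success
```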
@cinofix
Antonio Cinà
1 year
⚔️ Evaluating Defense Mechanisms: AttackBench is your go-to benchmark for identifying the most promising attacks to use when testing defenses. Avoid relying on buggy or suboptimal attacks and ensure fair, transparent evaluations in cybersecurity. #CyberSecurity #AttackBench
@cinofix
Antonio Cinà
1 year
🔄 Shared Environment: With a shared and replicable evaluation environment, AttackBench ensures consistency in assessing attack strategies, making it easier for researchers to conduct reliable evaluations.
@cinofix
Antonio Cinà
1 year
🔍 Query Tracking: AttackBench includes query tracking to enhance evaluation transparency, allowing fair comparisons by standardizing the number of queries each attack can leverage. #AdversarialAttacks
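Query tracking as described above can be done with a thin wrapper that counts every forward pass and optionally enforces a budget. A minimal sketch (class name and budget semantics are ours, not AttackBench's API):

```python
class QueryCountingModel:
    """Wraps a model and counts every forward query, so different
    attacks can be compared under a standardized query budget."""

    def __init__(self, model, budget=None):
        self.model = model
        self.budget = budget
        self.queries = 0

    def __call__(self, x):
        if self.budget is not None and self.queries >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        return self.model(x)
```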
@cinofix
Antonio Cinà
1 year
🏆 Optimality Metric: We introduce a novel optimality metric, offering a fair and effective way to rank adversarial attacks based on the quality of the generated adversarial examples across entire security evaluation curves. #CyberSecurity #Benchmark
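To make the curve-level idea concrete, here is a hypothetical sketch of such a score — not AttackBench's exact formula — that compares one attack's security-evaluation curve (robust accuracy vs. perturbation budget) against the pointwise-best envelope over all known attacks:

```python
import numpy as np

def optimality_score(attack_curve, all_curves):
    """Hypothetical sketch of a curve-level optimality score.
    Lower robust accuracy means a stronger attack, so a score of 1.0
    means the attack matches the best known result at every budget,
    and 0.0 means it traces the weakest curve everywhere."""
    attack_curve = np.asarray(attack_curve, dtype=float)
    curves = np.asarray(all_curves, dtype=float)
    best = curves.min(axis=0)   # strongest (lowest) curve per budget
    worst = curves.max(axis=0)  # weakest (highest) curve per budget
    span = float(np.sum(worst - best))
    if span == 0.0:
        return 1.0  # all attacks tie everywhere
    gap = float(np.sum(attack_curve - best))
    return 1.0 - gap / span
```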
@cinofix
Antonio Cinà
1 year
📊 Attack Categorization: AttackBench provides a unified framework for categorizing adversarial attacks, simplifying the comparison and understanding of different attack strategies. #AttackBench #AdversarialAttacks #AdversarialExample #AISecurity
@cinofix
Antonio Cinà
1 year
🎯 What's new in AttackBench? It brings a unified framework for categorizing attacks, the novel optimality metric for comprehensive evaluation, and extensive testing of 102 attacks across 800 configurations. Say goodbye to biased evaluations and hello to reliable benchmarks!
@cinofix
Antonio Cinà
1 year
🔍 Current state-of-the-art limitations:
- Overfitting of metrics within individual studies
- Difficulty in comparing attacks with different perturbation budgets
- Inconsistencies in attack implementations and results
- Need for a more comprehensive and reliable evaluation framework
@cinofix
Antonio Cinà
1 year
🚨 New research alert! AttackBench introduces a fair comparison benchmark for gradient-based attacks, addressing limitations in current evaluation methods. 📜Paper: 🏆LeaderBoard: #MLSecurity #AdversarialAttacks #AI #adversarial
@cinofix
Antonio Cinà
1 year
RT @piraxtor: (1/5) Super excited that I will be presenting Conning the Crypto Conman: End-to-End Analysis of Cryptocurrency-based Technica….
@cinofix
Antonio Cinà
1 year
RT @maurapintor: 📢 Call for Papers: Workshop on "Human Aligned AI: Towards Algorithms that Humans Can Trust." Discuss trustworthiness in AI….
@cinofix
Antonio Cinà
1 year
🚀 Excited to share that our paper, Machine Learning Security Against Data Poisoning: Are We There Yet?, has been accepted for the #TrustworthyAI special issue of #IEEE Computer. We tackle #data #poisoning attacks and defenses, exploring their limits and future research directions.