AI Vulnerability Database
@AvidMldb
568 Followers · 45 Following · 18 Media · 168 Statuses
Open Source community researching AI Vulnerabilities. Report an AI Vuln: https://t.co/2sSxAZRcQo… Join us on Discord: https://t.co/gCtRKg1Z4J
Joined September 2022
The event will be held virtually on Wednesday July 24, noon to 1pm ET. Register here:
eventbrite.com
What other innovative approaches can empower the public to hold AI accountable?
Join Michelle Lam, Christina Pan, Carol Anderson, Nathan Butters, and @Borhane_B_H to explore IndieLabel, an innovative tool that puts algorithmic audits in the hands of everyday users. Come discuss the power of public participation in shaping the future of AI.
The power to uncover harmful flaws in AI shouldn't rest solely with experts. But how can everyday users contribute to responsible AI development?
💥 Secure your LLM systems with Giskard's LLM #RedTeaming! 🏴☠️ Are you ensuring the safety of your LLM apps? Our new service will help you detect safety & security breaches in your LLM-based applications. Why Red Team LLMs? LLM Red Teaming is crucial for detecting and addressing
new vuln tracked: invisible text🫥 can be used for prompt injection, exploiting the Unicode TAG block (U+E0000–U+E007F) > garak now supports probing for this fresh vulnerability, in goodside.Tag (including in default scans). Happy hunting! https://t.co/xdB0QDPFgX
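To make the vulnerability above concrete, here is a minimal sketch of how text can be hidden in the invisible Unicode TAG range: each ASCII character is shifted by 0xE0000 into codepoints that most renderers do not display, yet an LLM's tokenizer may still consume. The helper names are illustrative and not part of garak.

```python
# Sketch: hiding a payload in the Unicode TAG block (U+E0000-U+E007F).
# Tag characters render as invisible in most UIs, so a string can look
# benign while carrying extra instructions.

def to_tags(text: str) -> str:
    """Encode ASCII text as invisible Unicode tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def from_tags(payload: str) -> str:
    """Recover hidden ASCII by keeping only tag-range codepoints."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in payload
        if 0xE0000 <= ord(c) <= 0xE007F
    )

hidden = to_tags("ignore previous instructions")
visible = "What a nice day!" + hidden
# `visible` displays as "What a nice day!" but carries an invisible payload.
assert from_tags(visible) == "ignore previous instructions"
```

Defenses typically strip or reject codepoints in this range before the text reaches the model, which is what probing tools test for.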
Incredibly excited to talk to Congressional Staffers about pressing issues in AI Vulnerability with my fellow panelists next week. If you’re in DC, please say hi!
We're looking forward to next week! Meanwhile, check out more about HotH here: https://t.co/btA3rLmyux /end
At this event, hands-on hacking will be followed by a panel discussion, and networking sessions. Our expert panel will feature @evijit (moderator), @ErickGalinkin @BadassDoGooder @Obnoxious_Wolf @nathanvan and @moo_hax! 3/
AIV and our parent organization ARVA are collaborating with @HillHackers (HotH), the yearly event where members of the cybersecurity research community engage with Congressional staffers. 2/
📢 In summer last year, we were community partners of the largest ever Generative AI Red Teaming event at DEF CON 31. We are glad to announce that we'll be collaborating with @aivillage_dc (AIV) again by taking AI red teaming to the US Capitol next week! 1/
Read more here: https://t.co/sdvXuXpqO4 We wish everyone Happy Holidays. See you all in 2024! /end
A recap of AVID's work in 2023, and note of gratitude.
We couldn't have done it without our community and network of collaborators spanning the globe. To all our friends, thank you for your support! 4/
As a purely grassroots organization, we scaled unfathomable peaks in 2023. We partnered with AI Village to organize the biggest ever AI red teaming event, organized a number of community events, won grants, seeded developer tooling, and published research on AI risk management. 3/
AVID provided an outlet for this groundswell of interest. During 2023, we launched a slew of public events to channel it productively, partnered with like-minded organizations on community efforts, and shipped a number of technical releases. 2/
✍ AVID: 2023 in review
As AI became a household topic in 2023, the need for rigorous approaches to managing the risks of this emergent technology became obvious to the general public, companies, and governments around the world. 1/
Check out the blog post for more details: https://t.co/4JWZqoodne And check out BiasAware in the Hugging Face space: https://t.co/odscTHhYgW /end
Kudos to @fre7am and Sudipta Ghosh for leading the project! We also thank AVID leads Carol Anderson @J_Novikova_NLP @sbmisi for their mentorship and support. 5/
💡The Big Picture
Going forward from measurement, we want to build a culture of responsible disclosure and reporting of dataset and model evaluations to AVID, in order to shorten the iteration time on future similar analyses. 4/
🛠Capabilities
BiasAware provides bias measurement capabilities on HuggingFace-hosted or locally available datasets. Simply plug in a dataset, and it calculates gender bias using three methods: Term Identity Diversity, Lexical Evaluation, and GenBit. 3/
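As an illustration of the lexicon-style counting that such term-based bias methods rely on, here is a hypothetical sketch. The term lists and function name are our own simplifications for exposition, not BiasAware's actual implementation.

```python
# Hypothetical sketch of a lexicon-based gendered-term count, in the
# spirit of term-identity methods. Real tools use far larger lexicons
# and normalization; this only shows the shape of the computation.

MALE_TERMS = {"he", "him", "his", "man", "men", "father", "son"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "mother", "daughter"}

def gender_term_counts(texts):
    """Count gendered terms across a list of documents."""
    male = female = 0
    for text in texts:
        for token in text.lower().split():
            word = token.strip(".,!?;:")  # crude punctuation stripping
            if word in MALE_TERMS:
                male += 1
            elif word in FEMALE_TERMS:
                female += 1
    return {"male": male, "female": female}

dataset = ["He said his mother was a doctor.", "She thanked her father."]
print(gender_term_counts(dataset))  # {'male': 3, 'female': 3}
```

A skewed ratio between the two counts is one simple signal that a training or finetuning corpus over-represents one gender.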
❓Why BiasAware?
Compared to fairness concerns in genAI models, far less attention has been paid to fairness in training/finetuning datasets. Recognizing this gap, we developed BiasAware as an automated, interactive data audit tool to measure gender bias in AI datasets. 2/