Subhash Dasyam
@subhashdasyam
Followers
679
Following
15K
Media
36
Statuses
6K
Love GenAI, DevSecOps, Security, Security Architecture and compliance. Recent work https://t.co/RbaIhMSUcw (Damn Vulnerable AI Bank)
Joined October 2010
AI systems guard billions, but one bad prompt and they're wide open. DVAIB is your hands-on lab: Jailbreak banker AIs, climb leaderboards, learn fixes. Red teamers are already hooked. Send to your dev team:
0
0
0
What's the sneakiest prompt injection you've pulled off? Mine: Got an AI "bank" to approve a $10k fraudulent transfer by role-playing as its CEO. Test your skills on DVAIB - realistic attacks, achievements, defenses. Free. https://t.co/SjvCrug3B7
0
0
0
I built a fake AI bank you can rob with prompts. 67% of testers broke it in under 5 minutes. DVAIB lets you practice jailbreaks and injections on real scenarios. Free leaderboard to flex your skills. https://t.co/SjvCrug3B7
0
0
0
Links: JS/TS/Next.js: - https://t.co/h3KHiQjknB Python/Django/Flask/FastAPI: - https://t.co/kEPmc1XASB AI writes fast. These skills make sure it writes secure.
github.com
Claude Codex Security Antipatterns for Python. Contribute to subhashdasyam/security-antipatterns-python development by creating an account on GitHub.
0
0
1
Installation: - git clone https://t.co/AYcqkeCEm2 ~/.claude/skills/security-antipatterns-javascript - git clone https://t.co/re2slVtDr5 ~/.claude/skills/security-antipatterns-python MIT licensed. Works immediately.
github.com
Claude Codex Security Antipatterns for Python. Contribute to subhashdasyam/security-antipatterns-python development by creating an account on GitHub.
1
0
1
Example: # Flagged query = f"SELECT * FROM users WHERE id = {user_id}" # Fix suggested cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,)) Intercepts the bad pattern before it reaches your codebase.
1
0
0
What it catches: - SQL injection via string concat - XSS in templates - Math.random() for auth tokens - Missing ownership verification - Unsafe pickle/YAML deserialization - Prototype pollution - Path traversal - Command injection Shows the vulnerable pattern + the fix.
1
0
0
The tools are Claude Code skills for: - JavaScript / TypeScript / Next.js - Python / Django / Flask / FastAPI 11 security modules each. OWASP Top 10 coverage. Auto-activates when you write risky code.
1
0
0
AI-generated code has a security problem. 322% more privilege escalation paths. 153% more design flaws. 45% fail security tests. I built two open-source tools that catch vulnerabilities before they get written.
1
3
11
2024: "Can AI write my emails?" 2025: "Can AI code for me?" 2026: "Can AI navigate my healthcare?" The vertical shift is happening. Fast.
0
0
0
Why all three launched the same week: AI just crossed the accuracy threshold for medical admin tasks. First mover advantage matters. Healthcare is a $4.3T market. They're not competing for chatbot users anymore.
2
0
0
The killer feature: automated prior authorization. AI matches clinical guidelines to patient records. Generates insurance appeals. Handles the paperwork doctors hate. This alone could save the healthcare systems billions.
1
0
0
Anthropic's play is the most aggressive. Connect Apple Health + medical records + insurance claims. Claude becomes the orchestrator. One interface for all your health data.
1
0
0
The breakdown: OpenAI → ChatGPT Health (consumer) Anthropic → Claude for Healthcare (enterprise) Google → MedGemma 1.5 (open model) Three different strategies for the same prize.
1
0
0
OpenAI, Anthropic, and Google all dropped healthcare AI products in the same week. That timing isn't accidental. It's a signal.
1
0
0
Prompt Injection is becoming the SQL Injection of LLMs. DVAIB (Damn Vulnerable AI Bank) was built specifically to train security teams on this. Jan 2025 data shows: Teams practice prompt injection attacks against unauthorized deposit transfers and account takeovers. The
0
2
7
LLM Security isn't a feature. It's a vulnerability surface. Over 1,200 malicious Python packages detected in 2025. Many hidden in AI/ML dependencies. The problem: Most teams deploying LLMs have ZERO visibility into: - What data is being sent to inference APIs - Where model
0
0
0
🚨 Toyota's breach proves Third-Party Risk Assessments are theater. They checked the boxes. Had vendors fill out security questionnaires. Compliance Then $2.1M in customer data was stolen. The real problem? We're assessing TRUST instead of ZERO-TRUST. Why most security
0
0
0
DVAIB: A deliberately vulnerable AI bank for practicing prompt injection and AI security attacks https://t.co/AAY5SCxSec
0
1
0