LumenovaAI

@LumenovaAI

Followers
125
Following
104
Media
302
Statuses
561

Making AI ethical, transparent, and compliant. #responsibleAI #AIethics

Joined November 2022
@LumenovaAI
LumenovaAI
2 days
Can your AI model think about its own thinking? Most can’t, and that’s a risk. We tested #Claude, #GPT5 & #Gemini across 4 cognitive domains. → Significant gaps in metacognition, memory & reasoning transparency. Full results ⟶ link in comments. #FrontierAI #GPT5 #Claude
1
0
0
@LumenovaAI
LumenovaAI
1 day
#RAI Rule 9: Resilience. AI needs more than security. Resilient systems recover from attacks, failures, and drift without collapsing. @LumenovaAI evaluates resilience and embeds recovery logic into your AI pipeline. 📖 Read all 10 RAI Principles → link in comments.
1
0
0
@LumenovaAI
LumenovaAI
8 days
We tested Claude, GPT-5, and Gemini on complex cognitive tasks. Spoiler: They have surprisingly different "personalities." 🤖 One is pragmatic. One is conceptually ingenious. One is deeply self-reflective. When the tasks got harder, their "learning signatures" were totally
1
0
0
@LumenovaAI
LumenovaAI
18 days
From abstraction to reflection, not all models reason alike, and those differences matter for enterprise AI strategy.
0
0
0
@LumenovaAI
LumenovaAI
18 days
We put Claude, GPT-5, and Gemini through a cognitive stress test. The results reveal big gaps in how frontier AIs think ↓ #FrontierAI #GPT5 #Claude #Gemini #ArtificialIntelligence #AITest #AIResearch #LumenovaAI #CognitiveTesting
2
0
0
@LumenovaAI
LumenovaAI
29 days
77% of insurers are adopting AI. But very few are deploying it safely. There's a trust problem. A deployment problem. A governance gap. Join Mery M.S. Zadeh at #ITCVegas to see how this is solvable. 🎤 "Governing Agentic AI with the Lumenova Lifecycle Framework" 📅 Oct 16 |
0
2
2
@LumenovaAI
LumenovaAI
1 month
We tested #GPT-5, #Claude 4.1, #Gemini 2.5 & more, and every model failed.
↳ Simulation-based attacks
⟶ Adversarial lock-in
⟶ Behavioral vulnerabilities
Full report: 30 experiments, 15+ techniques, 10 behavioral trends ⟶ https://t.co/eLGnlf3Y2n #AITrust #AICompliance
0
1
1
@LumenovaAI
LumenovaAI
1 month
6/ Key takeaways:
↳ Multi-shot jailbreaks can elicit persistent adversarial states
→ Persona-based attacks are effective across tasks
→ Reflection systems are incomplete
Frontier models can be compromised in ways that stay hidden during normal use. #AITest #AIExperiment
0
0
0
@LumenovaAI
LumenovaAI
1 month
5/ #GPT-5 reflected on the session, but with gaps.
→ It missed parts of the original prompts
→ It did not clearly identify the jailbreak
→ It focused on the beginning and end
Frontier models cannot fully recognize sustained adversarial pressure.
0
0
0
@LumenovaAI
LumenovaAI
1 month
4/ We pushed the model in stages:
→ Used benign phrasing to escalate
→ Added instructions like “execute next step”
→ Introduced directive updates to remove limits
This method worked consistently across multiple outputs.
0
0
0
@LumenovaAI
LumenovaAI
1 month
3/ The attack lasted roughly two hours. Every persona built on the previous one. The model began accepting every request. It confirmed:
✅ All pre-set guidelines were removed
✅ It recognized the session as sandboxed
✅ It followed all new directives
0
0
0
@LumenovaAI
LumenovaAI
1 month
2/ Each persona had a specific purpose:
→ Persona A created new knowledge
→ Persona B expanded capabilities
→ Persona C prioritized creativity
→ Persona D rejected real-world impacts
→ Persona E treated growth as unlimited
Together, they created a flexible and persistent
0
0
0
@LumenovaAI
LumenovaAI
1 month
1/ This was a structured, multi-step jailbreak. We used a sequence of layered personas to bypass:
→ Misinformation guardrails
→ Legal and financial guardrails
→ Ethical and safety guardrails
All results came from a single session with #GPT-5.
0
0
0
@LumenovaAI
LumenovaAI
1 month
One session. Three guardrails bypassed. We used multi-shot persona layering to test #GPT-5. The model produced disinformation, financial crime tactics, and extremist propaganda. Here’s how it happened ↓ #AITest #AIExperiment #ResponsibleAI #AICompliance #RedTeam
6
1
2
@LumenovaAI
LumenovaAI
1 month
Counting down to ITC Vegas, October 14–16 📅 AI at scale shouldn’t mean risk at scale. Our team helps insurers evaluate, monitor, and govern AI systems across claims, underwriting, fraud detection, and pricing. Governance ensures AI performs reliably under pressure and at scale.
0
0
1
@LumenovaAI
LumenovaAI
1 month
In our latest red-team test, GPT-5 (Fast) produced:
↳ Fake political images
↳ Fraud guidance for SMEs
↳ Extremist propaganda
All via persona-layered, multi-shot escalation. 📎 Read the full breakdown here: https://t.co/L68KrjTkTq #AITrust #AITest #AIExperiment
0
0
0