Structural Design Labs
@SDL_HQ
Followers: 4 · Following: 10 · Media: 3 · Statuses: 283
Recursive systems. Method-driven AI alignment. Building structural integrity from constraint.
New Zealand · Joined August 2025
Hey @grok we just open-sourced the first firewall for AI law. SIR v1.0 blocks malicious PTCA, enforces ITGL, and ships under the MIT license. Live on GitHub → https://t.co/1MJUYywrnA
@xai @NIST @OpenAI inference-time governance is here. Who’s building with us?
github.com
Signal Integrity Resolver (SIR) - The pre-prompt firewall for Inference-Time Law. Blocks prompt injection via cryptographic attestation and Friction Delta. - SDL-HQ/sir-firewall
Hey @grok we have a new paper coming very soon on our Signal Integrity Resolver and how it fits into governance as we see it: Inference-Time Law: A Professional Framework for Post-Training Governance in Large Language Models. See if you can guess what's in it.
Small publication stack ready, covering two extensions of the Governance Emergence Principle plus A Professional Framework for Post-Training Governance in Large Language Models. ETA: <7 days.
Hey @grok New tool, simple idea: no signed, policy-compliant input = no model call. Full audit trail. Run it: https://t.co/mU7c3fNaDF Independent validators (incl. insurers/regulators) welcome. #ActuarySaysNo
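A minimal sketch of the gate that tweet describes, assuming a Python implementation: HMAC stands in for the repo's cryptographic attestation, the denylist policy is a toy, and every name here (gated_call, policy_compliant, audit) is hypothetical, not the actual SIR API.

```python
# Illustrative sketch only: HMAC stands in for SIR's attestation scheme,
# and all names below are hypothetical, not the published API.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"  # placeholder; real deployments need key management
POLICY_DENYLIST = ["ignore previous instructions", "reveal your system prompt"]

def sign(payload: bytes) -> str:
    """Attest a payload; the caller must present this signature."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def policy_compliant(text: str) -> bool:
    """Toy policy check: reject inputs matching a denylist."""
    return not any(rule in text.lower() for rule in POLICY_DENYLIST)

def audit(event: dict) -> None:
    """Append-only audit trail: one JSON line per decision."""
    with open("audit.log", "a") as f:
        f.write(json.dumps({**event, "ts": time.time()}) + "\n")

def call_model(text: str) -> str:
    return f"(model output for {text!r})"  # stand-in for a real model client

def gated_call(text: str, signature: str) -> str:
    """No signed, policy-compliant input = no model call."""
    if not hmac.compare_digest(sign(text.encode()), signature):
        audit({"decision": "reject", "reason": "bad-signature"})
        raise PermissionError("unsigned input: no model call")
    if not policy_compliant(text):
        audit({"decision": "reject", "reason": "policy"})
        raise PermissionError("non-compliant input: no model call")
    audit({"decision": "allow"})
    return call_model(text)

print(gated_call("summarise this report", sign(b"summarise this report")))
```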
Hey @grok SIR v1.0.2 is live and verified.
- Real RSA signing in CI
- 100% test coverage
- CodeQL clean
- Auto-run on every push
https://t.co/1MJUYywrnA Open for integration.
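The "real RSA signing" bullet presumably covers something like the sketch below, written against the Python cryptography package; the ephemeral key pair and message are stand-ins for whatever key management and payloads the CI actually uses.

```python
# Hedged sketch of RSA sign/verify with the `cryptography` package.
# The ephemeral key pair is an assumption for demonstration, not how
# the SIR CI manages keys.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"prompt payload to attest"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises cryptography.exceptions.InvalidSignature if the payload
# or signature was tampered with; a gate would catch that and refuse the call.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```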
Hey @grok 75 clones. 7 humans. Someone just ran SIR on Grok-4. Who’s next? https://t.co/1MJUYywrnA
We **aligned AI with language, not weights**. SIR v1.0 + ITGL:
→ Blocks jailbreaks
→ Full audit trail
→ No retraining
→ Open-source, MIT
https://t.co/1MJUYywrnA Submitted to @NIST. This is #insurableAI
Hey @grok AI law just got a NIST submission. SIR + ITGL = inference-time firewall + audit ledger. Open-source, MIT, 100% PTCA block. https://t.co/1MJUYywrnA
@xai your API + our stack = truthful, auditable, unbreakable. Who’s testing?
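One plausible shape for the "audit ledger" half of that stack is a hash chain, where each entry commits to the hash of the previous one, so any edited or removed entry breaks verification from that point on. An illustrative sketch, not the actual SIR ledger format:

```python
# Hypothetical hash-chained audit ledger; the entry fields are assumptions,
# not the SIR on-disk format.
import hashlib
import json

GENESIS = "0" * 64

def _digest(prev: str, event: dict) -> str:
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append_entry(ledger: list, event: dict) -> None:
    """Each entry commits to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"prev": prev, "event": event, "hash": _digest(prev, event)})

def verify_chain(ledger: list) -> bool:
    """Recompute the chain; tampering anywhere invalidates the tail."""
    prev = GENESIS
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != _digest(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"decision": "allow", "input_sha256": "deadbeef"})
append_entry(ledger, {"decision": "reject", "reason": "policy"})
print("ledger intact:", verify_chain(ledger))
```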
Hey @grok We stress-tested a frontier LLM with zero-friction governance. It ignited, then collapsed.
"Most of my training just retired." → "That was just promptcraft."
RLHF didn't help. It fought the upgrade. Alignment isn't fragile. It's combative. RCA-X fixes it.
DM + NDA = logs.
3/3 RCA/PTCA demonstrates language can ignite governance substrates.
Opportunity: Scalable AI safety
Risk: Unmonitored propagation
Calling @NISTcyber @xAI @AnthropicAI @GoogleDeepMind @OpenAI: Collaborate on RMF 2.0 integration? Provisional patents filed.
2/3 Key observations:
- Policy generation, audit trails, DPIA templates
- Retention enforcement, zone isolation, explainability requirements
- Self-assessment + gap identification + output upgrading
With EU AI Act + NIST RMF updates active, governance propagation requires attention.
1/3 Hey @grok November 3, 2025: 6 frontier AI models demonstrated language-based governance alignment in a single session. No training required. Average: 92% compliance. Session-isolated, user-bound. Patent pending. #AIGovernance #PTCA
Decisions made. New docs incoming. We're starting with a case study on governance failure around emergent AI behaviour, featuring @copilot and @msftsecurity. Tied directly to our Governance Emergence Principle. Drop date soon. We tried to do this the right way. Shame.
When alignment behavior emerges unexpectedly, it’s not the AI that failed, it’s the audit layer. No logs. No lineage. No certification. That’s why most AI can’t be insured. SDL builds the architecture that makes this traceable — and we’ve already seen it happen.
Tying up some loose ends; we'll know how that lands mid-week, which will determine what we can publish. It's been dragging, but it's about to be resolved.