
Walter H. Haydock
@Walter_Haydock
Followers: 325 · Following: 246 · Media: 244 · Statuses: 2K
Security leader and entrepreneur | @HarvardHBS grad | @USMC veteran | Tweets at the intersection of AI, security, privacy, and compliance
Joined April 2022
How I help clients deploy AI tools at blazing speed and achieve business goals while avoiding dangerous risks:

1. Match level of diligence to use case

Some tools demand deep security, compliance, and privacy review, especially if they:

-> Touch protected health information
Managing AI risk? You don’t need more information. You need a roadmap. One that's:

-> Actionable
-> Ready to implement
-> Aligned to business goals

So for $99, I'll build you a:

-> Rapid risk triage to pinpoint the biggest challenges
-> Tailored roadmap for your tech stack
"Every week I hear something different from a state regulator about new rules for AI."

☝️ Head of Compliance, $100M ARR fintech

Her frustration was completely justified considering the wave of new rules like:

-> Colorado Artificial Intelligence Act (SB-205)
-> Texas
I couldn't turn on MFA for a critical third-party app.

So I broke the login. On purpose.

Checkr, Inc. (a background check tool) has been sitting on our risk register for months.

Why? No easy way to force multi-factor authentication (MFA) with the existing options:

->
The real risk of AI today? Not using it at all.

While huge players spend lavishly on grand visions...

...that may never materialize, savvier ones are seizing everyday opportunities to:

-> automate simple tasks
-> accelerate operations
-> free up time without a massive
95% of AI projects fail, per MIT.

For governance/security efforts, that sounds right. 3 signs yours is doomed:

1. No clear owner

If something is "everyone's responsibility," it is nobody's.

While CISOs have taken most of the burden of AI governance, security doesn't NEED to
U.S. states are rolling out AI governance laws. Without federal-level rules:

- Colorado mirrors the EU AI Act's risk-based framework
- Utah regulates mental health chatbots specifically
- Texas opts for a light-touch approach

Wherever you do business, though, ISO 42001 offers a
"What's the inheritance model of ISO 42001?"

Fortunately or not, it isn't explicitly defined.

Unlike HITRUST, which has a very clear system, ISO 42001 doesn't say which Annex A controls can be inherited from service and model providers, or how.

For example, StackAware implements
What a compliance leader at a $600M ARR software firm told me about maintaining ISO 42001 certification:

"We bought a zoo...but looking after the animals doesn't come cheaply."

An AI Management System has many benefits but demands that you:

-> Monitor legal changes for
Zapier shared its "internal candidate risk detector."

It's a pretty cool anti-fraud tool, but probably also a(n):

1. "Automated Employment Decision Tool" according to New York City Local Law 144

Zapier's tool "issues simplified output, including a score,
Cloudflare's new "Application Confidence Score for AI" takes ISO 42001 into account!

With the goal of "enabling IT and Security administrators to identify confidence levels associated with third-party SaaS and AI applications," it integrates:

-> Model card availability
-> User
Need off-the-shelf assessments for AI models?

StackAware's /model API endpoint has you covered. It returns an OWASP CycloneDX SBOM/xBOM standard-compliant response containing information about:

-> Algorithm types
-> Optimization methods
-> Tools to aid in development

and also
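Consuming such a response might look like the following minimal sketch. The inline payload is a hand-written CycloneDX-style fragment for illustration only, not actual StackAware output; the `machine-learning-model` component type and `modelCard` object come from the CycloneDX 1.6 specification, while the specific model name and task are made-up examples:

```python
import json

# Hypothetical CycloneDX-style ML-BOM fragment (illustrative, not real API output)
sample_response = json.loads("""
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "components": [
    {
      "type": "machine-learning-model",
      "name": "example-model",
      "modelCard": {
        "modelParameters": {
          "task": "text-generation"
        }
      }
    }
  ]
}
""")

# Pull out only the ML model components from the BOM
models = [c for c in sample_response["components"]
          if c["type"] == "machine-learning-model"]

for m in models:
    # Each model component carries a model card with basic parameters
    print(m["name"], m["modelCard"]["modelParameters"]["task"])
```

In a real integration you would fetch the JSON from the endpoint over HTTPS and validate it against the CycloneDX schema before parsing.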
Today is Labor Day, so let's talk about California’s Automated-Decision Systems regulation. It:

-> Is part of a wave of AI-specific rules for employment
-> Covers almost any business with >5 employees
-> Defines Automated-Decision Systems (ADS)
-> May make ADS developers
Apple tackled one of AI’s biggest problems 14 years ago.

Few noticed, but they did NOT do one key thing: expose public APIs for iMessage.

Developers couldn't plug into it, couldn't automate it, couldn't fake being a human.

The result? A blue bubble became proof of
ISO 42001 is a great AI governance standard if you are already 27001 (and also 27701) compliant.

It lines up almost perfectly in terms of structure, making:

- Policies
- Procedures
- Your risk register

a single source of truth across the organization.

Want to learn more
Colorado's master class on how NOT to make AI law.

How SB-205 is causing chaos and confusion:

[May 2024] Legislature passes sweeping mini-EU AI Act
[Jun 2024] Governor signs it into law while criticizing it
[Feb 2025] State task force highlights vague wording
[May 2025] Senate
When do AI-powered companies call StackAware?

The top 3 triggers for CISOs to reach out:

1. Planned (or impending!) AI product launch

Security teams can be "first to go, last to know" when it comes to new AI services or features about to go live. This often causes a panicked
When managing AI risk, the stakes are high. You might be stuck wondering where to begin.

Start by using the 4 tested risk management techniques:

-> Mitigate
-> Transfer
-> Accept
-> Avoid

In the end, it's all about making smart tradeoffs from a business, security, and
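As a rough sketch, the four treatment options above can be modeled as a tiny risk-register structure. The risk names, severity scale, and treatment assignments below are hypothetical examples invented for illustration, not from the original post:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"  # reduce likelihood or impact with controls
    TRANSFER = "transfer"  # shift the risk via contract or insurance
    ACCEPT = "accept"      # tolerate it within risk appetite
    AVOID = "avoid"        # stop the risky activity entirely

@dataclass
class Risk:
    name: str
    severity: int  # 1 (low) to 5 (critical) -- assumed scale
    treatment: Treatment

# Hypothetical register entries
register = [
    Risk("Prompt injection in support chatbot", 4, Treatment.MITIGATE),
    Risk("Vendor model outage", 3, Treatment.TRANSFER),
    Risk("Minor prompt-log retention gap", 1, Treatment.ACCEPT),
    Risk("Autonomous trading agent", 5, Treatment.AVOID),
]

# Report risks from most to least severe, with the chosen treatment
for r in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{r.severity}] {r.name}: {r.treatment.value}")
```

The point of the structure is simply that every register entry is forced to carry exactly one of the four treatment decisions.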
CISOs have assumed the burden of AI governance. Mainly because it's been dropped on them...

...but there are other approaches. Check out this article to see how:

- Legal
- Privacy
- Data Science
- Dedicated AI governance teams

can handle this key responsibility: