D. Casey Flaherty
@DCaseyF
Co-Founder and Chief Strategy Officer at LexFusion. Co-Founder of Procertas. People + Process + Tech Evangelist.
Best analogy I can muster: Microsoft combing through customer cloud data because it might be inflammatory or illegal. A legitimate concern, but one insufficient to overcome the interest in shielding enterprise data from third parties. On this, Microsoft has a clear and correct policy:
I deserve no credit for flagging this issue. The concern was raised with me by a law firm that had been denied an exemption. I simply ran it down because their commendable commitment to reading the fine print’s fine print sparked my curiosity and sense of duty.
Small share ≠ zero. I have found law departments, law firms, and legaltech vendors painfully aware of abuse monitoring. Some have been granted exemptions. Most have been denied exemptions because they are “unmanaged.”
Microsoft Copilot is on a relatively short (but not blank) list of exemptions. Some anecdata from my conversations with law departments, law firms, and legaltech vendors on this subject:
Seems like a bad look. Further, I would be willing to wager a not insubstantial sum that Microsoft has exempted Microsoft itself, as an enterprise, from abuse monitoring (pure speculation).
But you know who does qualify to turn off abuse monitoring, and has, in fact, opted out of abuse monitoring by Microsoft employees? Microsoft. Specifically, Microsoft Copilot.
Absent anything official from Microsoft, my only option is to fall back on anecdote as the best available evidence. Based on the conversations I’ve had thus far, it seems very few customers and partners are managed and, therefore, eligible to turn off abuse monitoring.
And this, dear reader, is where my search skills reach their limit. What are the eligibility criteria and process for becoming a “managed customer” or a “managed partner” and therefore eligible to have abuse monitoring turned off? ¯\_(ツ)_/¯
Again, like turning off abuse monitoring, this appears, superficially, to be merely a matter of paperwork. An enterprise can register for Limited Access, BUT only if it is a managed account, a status for which it cannot register, per the same Limited Access resource:
This “managed customers” language is similarly present in the accompanying resource on Limited Access.
"Limited Access criteria" starts to come into focus in the Azure OpenAI Limited Access Review form used to apply for “modified abuse monitoring”—also sometimes termed an “exemption” and meaning that abuse monitoring is turned off for specific use cases.
I underestimated how few would read down and take the steps necessary. And I failed to read the fine print’s fine print to understand how few enterprises (clients, law firms, and legaltech companies) meet the “additional Limited Access eligibility criteria.”
Candidly, I had seen this before. But I did not think much of it because there is a mechanism for turning off abuse monitoring. I figured everyone would turn it off, and that would be that. How wrong I was.
If a prompt or generated content triggers the abuse monitoring system, the prompts and generated content are subject to human review by a Microsoft employee.
Excellent. But fine print follows. Below the blue box, we eventually learn that Microsoft has set up a process for “abuse monitoring.” Under their abuse monitoring program, Microsoft stores all prompts and generated content for 30 days.
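To make the mechanics concrete, here is a minimal sketch, assuming a hypothetical Azure OpenAI deployment (the endpoint, key variable, and deployment name are placeholders of my own invention, not anything from Microsoft’s documentation). It uses the openai Python SDK; under default abuse monitoring, the prompt and the generated completion in this call are what Microsoft would retain for up to 30 days and, if flagged, route to human review.

```python
# Minimal sketch of a call through Azure OpenAI Service (AOAIS).
# Endpoint, key, and deployment name below are hypothetical placeholders.
import os

from openai import AzureOpenAI  # openai>=1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Under default abuse monitoring, this prompt AND the generated completion
# are stored by Microsoft for up to 30 days; if the classifier flags the
# exchange, both are subject to human review by a Microsoft employee.
response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder Azure deployment name
    messages=[{"role": "user", "content": "Summarize this client memo: ..."}],
)
print(response.choices[0].message.content)
```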
For most law departments, law firms, and legaltech vendors I have spoken to, this translates into an (erroneous) assumption that there is no added risk of disclosure when pairing AOAIS (Azure OpenAI Service) with private company data already stored on Azure, the same misimpression I was under.
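A hedged sketch of why that assumption fails (the storage connection string, container, and blob names are hypothetical): the document at rest stays under the customer’s storage-account controls, but the moment its contents are sent as a prompt, a copy enters the abuse-monitoring retention described above.

```python
# Sketch only; all names are hypothetical. The same document passes through
# two regimes: at rest in the customer's storage account, then in flight
# through AOAIS, where default abuse monitoring retains a copy.
import os

from azure.storage.blob import BlobClient
from openai import AzureOpenAI

# 1) At rest: governed by the customer's encryption, access, and residency
#    controls on the Azure Storage account.
blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="client-files",    # hypothetical container
    blob_name="engagement-memo.txt",  # hypothetical blob
)
memo = blob.download_blob().readall().decode("utf-8")

# 2) In flight: once sent as a prompt, a copy is stored for up to 30 days
#    under default abuse monitoring, absent an approved exemption.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
summary = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment
    messages=[{"role": "user", "content": f"Summarize:\n{memo}"}],
)
print(summary.choices[0].message.content)
```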