On October 30th, 2023, the White House announced an executive order on Trustworthy AI, which has interesting implications for AI adoption at companies.
I also predict this will officially birth a new job function at large enterprises - the AI Compliance Officer - and formalize the creation of private-sector compliance frameworks to govern AI usage.
In this post, I will:
Discuss the implications of the White House executive order for both large enterprises and startups
Discuss what compliance efforts may be needed, and
Explain how an AI compliance officer can fulfill those new responsibilities, and comment on what types of people (aside from legal) can help out.
High-level summary of the White House Executive Order
The order itself is exceptionally well-written and clear about its scope, so I highly recommend reading it in full. But here’s the gist (my own interpretations added):
Pretraining FMs / LLMs will require government involvement: Any company pretraining foundation models may need to notify the government throughout the model lifecycle. And companies won’t be able to commercialize LLMs or publish them to HuggingFace (beyond a certain size or complexity) until the model clears red-team testing (the “safety results”).
Government to set safety standards, but no timelines yet: The order itself did not specify exactly what red-team results will be required, nor a timeline for the framework’s release. It’s also unclear whether the tests will be run self-serve or by the government. But those standards will be released and enforced (in the future). In any case, there will be a set of government stakeholders to liaise with, which may vary depending on your industry (e.g. Dept of Energy, Dept of Homeland Security, National Institute of Standards and Technology).
AI applications in national security, cybersecurity, and biological research may face additional oversight: This makes sense. Much of biological research funding comes from the government anyway, and apparently the grant process will be modified to validate proper use of AI.
Responsible AI, privacy, and mitigating bias got mentioned too, but mostly as guidance and programs: The executive order does not set any hard requirements or regulation here, just guidance and programs. Thus, these checks will likely be self-reported and enforced via audit.
Government to start actively applying AI to streamline its own operations: Gov sector to increase AI-related funding and attract more AI talent, including from overseas.
The order will have numerous implications for AI adoption efforts at enterprises.
Impact on Large Enterprises
The impact of this order depends on company size, sector, and AI use case. But generally speaking:
companies will have less incentive to pretrain large LLMs, since pretraining comes with a much higher compliance bar. This further cements the trend of enterprises opting to finetune, not pretrain.
but heavily finetuning LLMs / FMs can also create issues: modifying model behavior too much could lead to failing red-team tests. The executive order does not say whether finetuned models must also pass the new safety standards (I assume they must), but either way, all large enterprises may need to spin up red teams to run the tests.
This will be especially true for companies operating in highly regulated sectors such as finance, healthcare, defense, and insurance.
Both large enterprises and startups doing extensive finetuning may also need to:
liaise with regulatory bodies
create audit trails for all AI model development activities (see the sketch after this list)
validate access patterns, etc.
run extensive responsible AI and bias checks to demonstrate best-effort adherence to governance standards.
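As a concrete illustration of the audit-trail item above, here’s a minimal sketch in Python. Everything in it - the log_model_event helper, the log file name, the record fields - is a hypothetical of my own; the order prescribes no particular format.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "model_audit_trail.jsonl"  # hypothetical append-only log file

def log_model_event(actor: str, action: str, model_id: str, details: dict) -> None:
    """Append one audit record per model-development event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who did it: engineer, pipeline, service account
        "action": action,      # e.g. "finetune_started", "weights_exported"
        "model_id": model_id,
        "details": details,    # free-form context: dataset hash, config, etc.
    }
    # Hash the record itself so tampering is detectable; a real system
    # would chain hashes or write to an append-only store.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a finetuning run kicked off by an engineer.
log_model_event(
    actor="jane.doe@example.com",
    action="finetune_started",
    model_id="acme-llm-7b-v3",
    details={"base_model": "open-source-7b",
             "dataset_sha256": "<hash-of-training-data>"},
)
```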
These regulatory trends may also reach startups. Popular ideas such as these will be affected:
drug discovery acceleration startups
insurance tech startups
AI Co-Pilots for doctors
AI chatbots for financial advice, etc.
These changes, in my opinion, are mostly welcome given the power of LLMs and the potential scope of misuse, especially for consumer-facing apps.
The Rise of the AI Compliance Officer at Enterprises
The executive order has sketched a rough picture of how regulatory frameworks will propagate at the federal, state, and industry levels. Currently, the federal standards haven’t even come out (they could be up in the air for at least another six months), which means the impact may not be felt immediately.
That said, companies developing models will need to start implementing compliance frameworks now, since there’s no guarantee of how strictly the regulations will be enforced. At a minimum, companies should:
log all model generations (see the sketch after this list)
document and formalize model development and deployment processes
seek legal sandboxes and umbrellas where applicable (e.g. Google’s new indemnity framework)
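To make the first item concrete, here’s a minimal sketch of logging every generation. The wrapper, log file, and field names are all assumptions of mine, not anything the order mandates; a production system would also handle PII redaction and retention policies.

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Callable

GENERATION_LOG = "generations.jsonl"  # hypothetical append-only log

def logged_generate(generate_fn: Callable[[str], str], prompt: str,
                    model_id: str, user_id: str) -> str:
    """Call any text-generation function and persist the full exchange."""
    response = generate_fn(prompt)
    entry = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,  # which model/version produced the output
        "user_id": user_id,    # who triggered the generation
        "prompt": prompt,      # full input (redact PII in practice)
        "response": response,  # full output, kept for audits and post-mortems
    }
    with open(GENERATION_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return response

# Works with any backend; here a stub stands in for a real model call.
reply = logged_generate(lambda p: "stubbed model output",
                        prompt="Summarize our Q3 risk report.",
                        model_id="acme-llm-7b-v3",
                        user_id="analyst-042")
```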
The impact of regulation will necessitate the creation of a new job function - the AI compliance officer - whose job is to liaise with internal teams and regulatory bodies. They will need to be deeply integrated into AI development tooling, and perhaps even serve as a direct approver in model deployment pipelines, as sketched below.
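Here’s a minimal sketch of what such an approval gate could look like, assuming a hypothetical sign-off registry that the compliance officer writes to; none of this reflects a real pipeline, just the shape of the idea.

```python
# Hypothetical pre-deployment gate: deployment proceeds only if the
# AI compliance officer has signed off on this exact model artifact.
import hashlib
import json
import sys

APPROVALS_FILE = "compliance_approvals.json"  # hypothetical sign-off registry

def artifact_fingerprint(path: str) -> str:
    """Fingerprint the artifact so approval is tied to exact bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_approved(model_path: str,
                required_role: str = "ai_compliance_officer") -> bool:
    """Check the registry for a sign-off matching this artifact's fingerprint."""
    fingerprint = artifact_fingerprint(model_path)
    with open(APPROVALS_FILE) as f:
        # e.g. {"<sha256>": {"approver": "...", "role": "..."}}
        approvals = json.load(f)
    entry = approvals.get(fingerprint)
    return entry is not None and entry.get("role") == required_role

if __name__ == "__main__":
    model_path = sys.argv[1]
    if not is_approved(model_path):
        print(f"Deployment blocked: no compliance sign-off for {model_path}")
        sys.exit(1)  # non-zero exit fails the CI/CD stage
    print("Compliance sign-off found; proceeding with deployment.")
```

Failing the stage with a non-zero exit code makes the compliance officer a hard dependency of deployment rather than an after-the-fact reviewer.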
AI compliance could be owned by in-house legal teams, but retraining them on AI will be a significant hurdle. AI governance is still an undefined area of legal practice, which makes owning this function in-house challenging.
That being said, the AI compliance officer’s main job will be to:
participate in model approvals
inspect the EULAs of all models used internally to ensure compliance
liaise with government bodies and keep up with regulatory changes
participate in post-mortems of red-team failures
provide legal counsel to AI engineers internally, etc.