Fund managers encounter AI tools everywhere, including in research workflows, communications, marketing, portfolio monitoring and compliance. If your firm uses AI in any meaningful way and doesn’t have a written policy, that gap may already be showing up in due diligence questionnaires (DDQs), in examinations or in the firm’s own operations when something goes wrong.

Whether an AI policy is a formal regulatory requirement depends on your firm’s registration status. Rule 206(4)-7 under the Investment Advisers Act of 1940 requires registered investment advisers (RIAs) to maintain written compliance policies and procedures. That rule does not apply to exempt reporting advisers (ERAs), but that doesn’t mean an ERA should skip an AI policy. Investors are routinely asking about AI use, and much like a data privacy or business continuity policy, an AI policy is a risk management document.

Can you share some sample AI policies?

As firms begin responding to DDQs and anticipating exam requests, they may be asking around for sample AI policies. What many are finding is that there isn’t a great off-the-shelf policy to adopt as their own. Two patterns are worth flagging.

First, many sample AI policies are best described as intention documents – policies that commit the firm to responsible AI use, ongoing supervision, appropriate training and disclosure to the extent deemed necessary, without describing what any of that actually looks like. While these types of policies may check the box for “having a policy,” they won’t help answer the follow-up questions from investors or examiners.

Second, even a more detailed sample policy may not be the right fit for your firm. The right AI policy is specific to how a firm actually uses AI, honest about where the human oversight layer sits and built to be updated as that use changes. A policy imported from a different type of firm, or drafted without reference to the firm’s actual tools and workflows, creates a document that doesn’t match operations. That mismatch itself is a risk.

What the policy should cover

  1. Tool inventory and approval process. The policy should identify what AI tools the firm uses, who approved them and what process governs the adoption of new tools. This is harder than it sounds. AI is now embedded in tools that firms already use: document drafting platforms, email, transcription services and productivity suites. Employees are using AI today without having affirmatively adopted anything. A policy that only addresses tools the firm formally approved may miss most of the firm’s actual AI use on day one. The inventory needs to capture the full picture, including AI functionality embedded in existing platforms.

  2. Permitted and prohibited uses. The policy should specify what supervised persons can use approved tools for and what they cannot input into a given AI system. The answer depends in part on the tool. Enterprise versions of AI platforms typically include contractual protections that prohibit the vendor from using firm data to train the underlying model, making them more suitable for work involving sensitive information. Consumer versions of the same tools often lack those protections. The policy should draw that line clearly, address personal AI tool use on consumer platforms and not leave the distinction to individual judgment.

  3. Human review requirements. Every AI-assisted output going to a client, investor or counterparty should be reviewed by a human before it goes out. The policy should describe what that review requires in practice, not just that it happens. The distinction between reading an output and reviewing it matters.
     
  4. Disclosure accuracy. The policy should designate responsibility for ensuring that the firm’s disclosures accurately describe its AI use and for keeping them current as that use evolves. The person responsible for disclosure accuracy should be identified and the update cadence defined. The gap between how firms actually use AI and how their disclosures describe it may be the most widespread AI-related deficiency across the industry right now.

  5. Vendor diligence. Before adopting a third-party AI tool, the policy should require meaningful diligence on confidentiality and data ownership provisions, the right to retrieve data on termination, and notification if the vendor materially changes the underlying model.

  6. Recordkeeping. The policy should address which AI outputs constitute records and how they are retained. For RIAs, this analysis runs through Rule 204-2, which we addressed in a recent post on AI notetakers. ERAs are not subject to Rule 204-2, but the practical question is the same: What does the firm keep, and can it produce it when necessary?

  7. Training and review. Employees should receive training on the AI policy when it is adopted and when it is materially updated. For RIAs, the annual compliance review under Rule 206(4)-7 should specifically assess the policy. But given how quickly AI tools and capabilities are evolving, firms should treat an annual review as the minimum, not the standard. ERAs should build the same practice into their operations, even without the formal rule requiring it.

A final word

The goal of a well-constructed AI policy is not to limit AI use; it is to enable it with confidence, allowing the firm to say “yes” to new tools, “yes” to AI-assisted workflows and “yes” to investor questions, without having to reconstruct the analysis under pressure. That is worth keeping in mind at a moment when the Securities and Exchange Commission itself is encouraging adoption and inviting firms to engage on how new technologies can come online while retaining investor protections. If you don’t have an AI policy yet, start with an honest account of how AI is being used at your firm. The guidelines above are a good place to go from there. As always, we are happy to assist.

The authors

Stacey Song