FINRA Reminds Financial Firms How AI Use Poses Significant Risks
- RumbergerKirk attorneys analyze AI’s impact on financial firms
- FINRA’s scrutiny of AI includes rules for advisers to follow
These days, artificial intelligence is everywhere we look, and the financial services industry is no different. As AI use increases, regulatory bodies are updating their rules and guidelines detailing how companies should and shouldn’t use AI. Financial advisers and firms must learn to navigate these rules to ensure compliant communication with clients and investors.
The Financial Industry Regulatory Authority reports that many broker-dealers have begun using AI to reach potential customers by analyzing their online footprint, including website and mobile app usage and past searches, to deliver targeted communications or curated information.
FINRA issued a regulatory notice in June to remind members of their regulatory obligations when they use generative AI and large language models. According to Regulatory Notice 24-09, FINRA’s rules “are intended to be technology neutral,” meaning they should function dynamically as firm technology and processes evolve.
The rules, and securities laws in general, apply whether firms use generative AI, large language models, or other technologies, including chatbots. FINRA's rules also apply to both firm-developed generative AI tools and third-party technology. The rules applicable to a firm's generative AI use will depend on how the firm uses the technology.
FINRA Rule 2210, addressing communications with the public, and Rule 3110, regarding supervision requirements, can apply to a firm's use of generative AI and LLMs, including chatbots.
FINRA’s guidance notes that depending on “the nature and number of persons receiving the chatbot communications, they may be subject to FINRA communications rules as correspondence, retail communications, or institutional communications,” all defined in Rule 2210(a) and subject to their own requirements.
Institutional communications are made only to institutional investors, including banks, insurance companies, Securities and Exchange Commission and state-registered investment advisers, governmental entities, employee benefit plans, FINRA members and registered persons, and any natural person or entity with assets totaling at least $50 million. A firm’s internal communications aren’t institutional communications.
Correspondence and retail communications pertain to retail investors, who are persons other than institutional investors, whether or not they have an account with the firm. This term covers most individual investors. Correspondence covers communications to or made available to 25 or fewer retail investors within any 30-day period. An individual interaction between a retail investor and a chatbot likely would be covered under FINRA’s rules for correspondence.
Retail communications are communications to or made available to more than 25 retail investors within any 30-day period. Standard chatbot responses to particular inquiries and AI-generated marketing materials that will be made or provided to more than 25 retail investors a month are likely retail communications.
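By way of illustration only, the 25-recipient, 30-day threshold could be operationalized in a firm's compliance tooling roughly as in the sketch below. This is a minimal sketch, not FINRA-provided logic; the function and parameter names (`classify_communication`, `retail_recipient_count`) are hypothetical.

```python
from enum import Enum

class CommType(Enum):
    CORRESPONDENCE = "correspondence"              # 25 or fewer retail investors / 30 days
    RETAIL = "retail communication"                # more than 25 retail investors / 30 days
    INSTITUTIONAL = "institutional communication"  # institutional investors only

def classify_communication(retail_recipient_count: int,
                           institutional_only: bool) -> CommType:
    """Rough mapping of FINRA Rule 2210(a)'s communication categories.

    retail_recipient_count: distinct retail investors who received, or were
    given access to, the communication within any 30-day period.
    institutional_only: True if every recipient is an institutional investor.
    """
    if institutional_only:
        return CommType.INSTITUTIONAL
    if retail_recipient_count <= 25:
        return CommType.CORRESPONDENCE
    return CommType.RETAIL
```

Under this sketch, a one-on-one chatbot session maps to correspondence, while a templated chatbot answer delivered to, say, 400 retail users in a month maps to a retail communication.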
FINRA Rule 2210(b) provides the approval, review, and record-keeping obligations for all three types of communications and states: "All correspondence is subject to the supervision and review requirements of Rules 3110(b) and 3110.06 through .09." The obligations under Rules 2210(b) and 3110 vary by communication type.
However, chatbot and AI-generated communications with the public must comply with the requirements applicable to the specific type of communication they constitute: correspondence, retail communications, or institutional communications.
All communications must comply with FINRA’s content standards. Rule 2210(d) requires all communications to “be based on principles of fair dealing and good faith,” be “fair and balanced,” and “provide a sound basis for evaluating the facts in regard to any particular security or type of security, industry, or service.”
The rule prohibits “false, exaggerated, unwarranted, promissory or misleading statement[s],” and no communication may “omit any material fact or qualification” that “would cause the communications to be misleading.” Applicable FINRA guidance clarifies that firms are “responsible for their communications, regardless of whether they are generated by a human or AI technology.”
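As a purely illustrative example of how a firm might build a first-pass screen for Rule 2210(d)'s content standards into its AI pipeline, a sketch like the following could flag promissory or exaggerated language for human review. The term list is hypothetical and deliberately crude; keyword matching cannot by itself establish that a communication is fair and balanced, and it is no substitute for supervisory review.

```python
import re

# Hypothetical examples of promissory or exaggerated phrasing; an actual
# supervisory review relies on trained principals, not a word list.
FLAGGED_PATTERNS = [
    r"\bguaranteed returns?\b",
    r"\brisk[- ]free\b",
    r"\bcan't lose\b",
    r"\bwill (?:double|triple)\b",
]

def flag_for_review(text: str) -> list[str]:
    """Return the flagged patterns found in an outbound AI-generated message."""
    return [p for p in FLAGGED_PATTERNS
            if re.search(p, text, flags=re.IGNORECASE)]

if flag_for_review("These bonds offer guaranteed returns with risk-free growth."):
    print("Hold message for principal review before sending.")
```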
Although AI can be a powerful tool in the financial services industry, it must be carefully monitored and controlled. As noted in FINRA's publication on AI in the securities industry, there have been numerous reports of concerning interactions between customers and AI, some of which amount to discrimination or fraud, and human supervision and review within the AI development process is essential.
The FINRA publication further makes clear that AI communications, just like human communications, must be appropriately reviewed. Firms should have procedures detailing the supervision and governance of AI applications. Verification and supervision of AI model data and outputs can guard against incorrect outputs and hallucinations.
Firms should fulfill their record-keeping requirements for any AI-generated communications. FINRA encourages firms to be aware of specific challenges related to model explainability, data integrity, and consumer privacy.
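To make the record-keeping point concrete, a firm might persist every AI-generated communication together with review metadata. The sketch below assumes a simple SQLite store; the schema and function names are hypothetical, and actual retention formats and periods must satisfy the applicable books-and-records rules.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema for retaining AI-generated communications for review.
conn = sqlite3.connect("ai_communications.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS ai_communications (
        id INTEGER PRIMARY KEY,
        sent_at TEXT NOT NULL,        -- UTC timestamp
        comm_type TEXT NOT NULL,      -- correspondence / retail / institutional
        recipient_id TEXT NOT NULL,
        model_version TEXT NOT NULL,  -- which AI model produced the text
        content TEXT NOT NULL,
        reviewed_by TEXT              -- filled in once a principal reviews it
    )
""")

def record_ai_message(comm_type: str, recipient_id: str,
                      model_version: str, content: str) -> None:
    """Persist an outbound AI-generated message before it leaves the firm."""
    conn.execute(
        "INSERT INTO ai_communications "
        "(sent_at, comm_type, recipient_id, model_version, content) "
        "VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), comm_type,
         recipient_id, model_version, content),
    )
    conn.commit()
```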
Model explainability means a firm should be able to explain how its AI models function and demonstrate appropriate oversight and supervision of them. Models that are difficult to explain or understand can hamper a firm's ability to prove compliance with applicable rules and regulations. Firms should employ risk management strategies in the development and use of their AI models, which may include maintaining an inventory of all models and testing each model against certain benchmarks.
Data integrity is another concern regarding the use of AI models, according to FINRA. A model’s dataset may include skewed or irrelevant information, leading to incorrect or even discriminatory outputs. FINRA recommends that firms carefully review the data, algorithms, parameters, and output of AI models to ensure accuracy and eliminate potential bias. Firms should ensure that their data is current and conduct more extensive verification of any external data the firm intends to use.
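As one hedged illustration of the kind of data review FINRA describes, a firm could run automated checks for staleness, missing values, and outcome disparities across a sensitive attribute before a model is trained or retrained. The thresholds and column names below are hypothetical, and a simple rate comparison is only a crude screen for potential bias, not a full fairness analysis.

```python
import pandas as pd

MAX_AGE_DAYS = 365   # hypothetical freshness threshold
MAX_RATE_GAP = 0.10  # hypothetical disparity threshold

def check_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-integrity warnings for a training set.

    Assumes (hypothetically) columns: 'record_date', a binary outcome
    'approved', and a sensitive attribute 'group'.
    """
    warnings = []
    age = (pd.Timestamp.now() - pd.to_datetime(df["record_date"])).dt.days
    if age.max() > MAX_AGE_DAYS:
        warnings.append("dataset contains records older than the freshness limit")
    if df.isna().any().any():
        warnings.append("dataset contains missing values")
    rates = df.groupby("group")["approved"].mean()
    if rates.max() - rates.min() > MAX_RATE_GAP:
        warnings.append("approval rates differ materially across groups")
    return warnings
```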
FINRA recommends that firms develop policies and procedures to maintain control of their data and prevent unauthorized access or data leakage. Firms should maintain and test such access procedures and employ encryption techniques when necessary to protect sensitive data. FINRA has also expressed concern about the additional security risks that arise when firms use data from multiple outside sources.
To guard against these risks, firms should carefully evaluate any potential third-party platforms and implement cybersecurity policies both during the development of AI models and throughout their use.
Customer privacy is a primary concern for many, including FINRA, regarding the use of AI. FINRA requires firms to ensure the protection of their customers’ personally identifiable information when using AI. FINRA recommends firms implement and maintain written procedures addressing customer privacy concerns involving the use of AI, including encryption of some personally identifiable information and customer consent to collect data.
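For illustration, field-level encryption of personally identifiable information at rest could look like the sketch below, which uses the widely available `cryptography` package's Fernet recipe. Key management is deliberately simplified here (an in-memory key); in practice the key would come from a proper key-management system, and which fields to encrypt would follow the firm's written privacy procedures.

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system, not in code.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_pii(value: str) -> bytes:
    """Encrypt a single PII field (e.g., an SSN) before storage."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_pii(token: bytes) -> str:
    """Decrypt a stored PII field for an authorized use."""
    return fernet.decrypt(token).decode("utf-8")

token = encrypt_pii("123-45-6789")  # hypothetical sample value
assert decrypt_pii(token) == "123-45-6789"
```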
Although AI has many potential uses in the financial services industry and is becoming a powerful tool, its use poses additional risks and concerns firms should consider.
Firms using AI must ensure they remain in compliance with applicable rules and regulations, including applicable FINRA rules, and should identify the rules and guidance relevant to their specific use of AI applications.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
This article was reproduced with permission from Bloomberg Law. Published December 13, 2024. Copyright 2024 The Bureau of National Affairs, Inc. 800-372-1033. For further use, please visit http://www.bna.com/copyright-permission-request/.